Thoughts from My Three-Night Coding Excursion: Part 2 – Design in C#
For part 1 of this series, please click here.
Let’s get on to understanding the GDI+ library and how to leverage it from C# to build any kind of visual unit you want. The moment you use GDI+, the confines of a form break down, at the very least visually. You can draw any shape you want, in any colour you want, use any picture you made or like, and engineer your app into something from your dreams.
This is not a C# primer, so I’ll get straight down to the more frequently used functions from the API.
The following namespaces have to be included:
- using System.Drawing;
- using System.Drawing.Drawing2D;
You draw on the screen in one of two ways: override the OnPaint() method and write your graphics code there, OR write all the graphics code in a Paint event handler. The OnPaint() method raises the Paint event. Painting is event based: the OS handles window refreshing using callbacks, asking the window to repaint itself whenever a portion of it needs redrawing, and that request is routed through OnPaint() for the corresponding window.
Write this event handler after the InitializeComponent() method in the form constructor:

this.Paint += new PaintEventHandler(Form1_Paint);
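If you go the override route instead, a minimal sketch looks like this (the DrawLine() call is just a placeholder for your own drawing code):

protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e); // let the base class raise the Paint event for any subscribers
    e.Graphics.DrawLine(Pens.White, 0, 0, 100, 100); // your drawing code goes here
}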
The first thing I normally do is enable double buffering for the specific form/user control from the Properties view. Caveat: a few books, even good ones, tend to use the Panel control for their examples. It is not really the best control for this, because Panel does not expose double buffering among its properties. A workaround is needed, starting with inheriting from the parent control, then getting a handle to the device context, and so on. So when the flickering starts, which is inevitable given this limitation, don’t say I did not warn you, especially after 1,000 lines have already been poured into it.
It’s far simpler to just add a UserControl.cs class from the new items menu. This class exposes double buffering as one of its properties.
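If you prefer to do it in code rather than the Properties view, here is a minimal sketch, assuming you are inside your own Form or UserControl subclass (the relevant members are protected):

this.DoubleBuffered = true;
// or, equivalently, through the control styles:
this.SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.AllPaintingInWmPaint | ControlStyles.UserPaint, true);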
What double buffering prevents is the flicker caused by all the refreshing activity Windows performs. When a portion of the screen is redrawn, it happens piece by piece; that latency against the refresh rate is visible to the human eye as flicker, a sync-related issue. With a buffer, the bitmap image is built off-screen first and then used for the final picture, which makes the transition flicker free.
The second thing I do is enable anti-aliasing so the shapes drawn don’t come out aliased. Aliasing stems from the nature of digital graphics: every picture element holds only a sample of the image’s colour value at that location, so the image is not really continuous. The lost detail shows up in curves and circles as jagged lines. If you try to make a circle out of square Lego blocks you will get a circle, but you will also see the rectangular edges sticking out; that is pretty much what aliasing looks like. What anti-aliasing does is recolour the surrounding pixels with gradually fading shades of the border colour. The final effect is the illusion of a less jagged line.
Just add this line in the Paint handler:
e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
This can be performance intensive, as the algorithms required are quite complex. So if your graphics are elaborate, look at tightening the rendering logic to claw the performance back.
After the above, assuming I want to keep the form a rectangle, the drawing functions I use most are DrawArc(), DrawEllipse(), DrawRectangle(), DrawString(), DrawCurve(), DrawLine()/DrawLines(), DrawPath(), and their fill counterparts like FillEllipse(), FillRectangle(), FillPie() and FillPath().
Think about it: these are pretty much all you require to come up with any sort of shape. The rest are just convenience overloads, taking a few other parameters for very specific uses. These are essentially the graphics primitives you can put toward drawing your next masterpiece.
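To give a taste of them in action, here is a minimal sketch of a Paint handler exercising a few of the primitives; the shapes and coordinates are arbitrary:

void Form1_Paint(object sender, PaintEventArgs e)
{
    e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
    // outline primitives take a Pen
    e.Graphics.DrawLine(Pens.White, 10, 10, 200, 10);
    e.Graphics.DrawEllipse(Pens.White, 10, 30, 100, 60);
    e.Graphics.DrawRectangle(Pens.White, 130, 30, 100, 60);
    // the fill variants take a Brush instead
    e.Graphics.FillPie(Brushes.Gray, 10, 110, 80, 80, 0, 270);
    e.Graphics.DrawString("Hello, GDI+", this.Font, Brushes.White, 130, 110);
}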
Now, you need some sort of abstraction to come up with a virtual pen, a brush, a colour palette, and a text font (think classical calligraphy) to emulate what we humans do in real life and translate it to code. Indeed, these have been abstracted in the form of APIs that take in a set of parameters or provide static constants for a regular set of values, enabling us to think in real-world terms.
As a child, for your drawing class prerequisites, you must have asked your parents for a long list of drawing stationery. That might have included a brush set with different thicknesses, one for light strokes, another for broad strokes, a set of colour pencils, different grades of pencil tones, a colour box containing the most used colours, or an oil paint set or acrylic colour mix, stencils for text tracing and so on. The Pen, Brush, and Font classes play exactly those roles.
e.Graphics.DrawArc(new Pen(new SolidBrush(Color.White)), new Rectangle(new Point(0, 150), new Size(600, 400)), 90, -180);
Take a look at the line above that draws an arc. new Pen() instantiates the Pen class; this particular constructor takes a Brush. new SolidBrush() creates that brush, with Color.White passed into its constructor. Color is a struct, and White is the colour chosen from it. There are other kinds of brushes as well, like the LinearGradientBrush and HatchBrush classes, which simulate colour gradients and the hatch-styled patterns often used in engineering and architectural manual drawings, with the technique lifted into code.
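A minimal sketch of those two brushes inside a Paint handler (the rectangle and colours are arbitrary):

Rectangle r = new Rectangle(10, 10, 200, 100);
// gradient running from white to black, left to right
using (LinearGradientBrush lg = new LinearGradientBrush(r, Color.White, Color.Black, LinearGradientMode.Horizontal))
    e.Graphics.FillRectangle(lg, r);
// diagonal hatching, white lines on a black background
using (HatchBrush hb = new HatchBrush(HatchStyle.ForwardDiagonal, Color.White, Color.Black))
    e.Graphics.FillEllipse(hb, 10, 120, 200, 100);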
Font class instances provide text-based configuration for use in graphics:

Font f = new Font("Arial", 17);

A new instance is created, with the font family passed to the constructor as a string along with the font size.
A good method for determining a string’s size, for calibration during string placement and zooming, is:

e.Graphics.MeasureString("Notes", f);

MeasureString() takes the string and the font instance assigned to it, and returns a SizeF structure: a pair of floating-point numbers denoting the width and height the string will occupy on screen.
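For instance, a minimal sketch that uses the measurement to centre the caption on the form, assuming the Font f from above and a Paint handler:

SizeF size = e.Graphics.MeasureString("Notes", f);
float x = (this.ClientSize.Width - size.Width) / 2;
float y = (this.ClientSize.Height - size.Height) / 2;
e.Graphics.DrawString("Notes", f, Brushes.White, x, y);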
The Point struct is no doubt very useful, both for graphics calibration at runtime and for reading the mouse pointer’s position once the requisite mouse events are processed. Building shapes at runtime also requires collecting graphics data, and points are among the essentials. To draw a custom shape, a set of anchor points can be provided, and the function takes care of connecting the lines between them. The Point structure provides two integers for the X and Y axes; PointF is the floating-point counterpart.
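A minimal sketch of the anchor point idea, drawing a triangle from three points inside a Paint handler:

Point[] anchors = { new Point(50, 50), new Point(150, 50), new Point(100, 150) };
e.Graphics.DrawPolygon(Pens.White, anchors); // connects the points and closes the shape
// DrawLines(Pens.White, anchors) would leave the figure open instead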
So at the end of the day, you have a set of APIs that lets you write a few lines of code, automate the process, put some logic into it, and have everything drawn as well as the display technology can render it, all abstracted to be as simple to use as possible.
Further, a set of graphics mechanisms called TRANSFORMS is very useful as well. If the origin of a graphic needs to change, use TranslateTransform(). To blow a graphic up or zoom out of it, use ScaleTransform(). Any kind of rotation uses RotateTransform().
TranslateTransform() takes an offset from the current origin, a new (x, y) pair that becomes the new origin.
ScaleTransform() takes multiplication factors for the x and y coordinates.
RotateTransform() takes the angle to rotate by, in degrees.
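A minimal sketch of all three inside a Paint handler; note that transforms compose, so the order matters:

e.Graphics.TranslateTransform(300, 200); // move the origin to (300, 200)
e.Graphics.RotateTransform(45); // rotate subsequent drawing by 45 degrees
e.Graphics.ScaleTransform(2f, 2f); // double all x and y coordinates
e.Graphics.DrawRectangle(Pens.White, 0, 0, 50, 30); // drawn relative to the new origin
e.Graphics.ResetTransform(); // back to the default coordinate system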
The Timer class is essential to many other activities: changing screen modes, activating a specific mode, or any sort of animation. The Interval property in milliseconds and the Tick event are all there is to setting up a timer; then it’s just the Start() and Stop() methods.
Finally, the Invalidate() method is very useful for refreshing the screen after a specific event has been handled, or whenever you want the screen to show new data. An overload takes a bool parameter, TRUE/FALSE, controlling whether the child controls get invalidated as well.
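Putting the two together, a minimal animation sketch; the names t, angle and t_Tick are mine. Declare the fields, wire the timer up after InitializeComponent(), and advance the state on every tick:

Timer t = new Timer(); // System.Windows.Forms.Timer
float angle = 0;

// after InitializeComponent():
t.Interval = 30; // milliseconds, roughly 33 ticks a second
t.Tick += new EventHandler(t_Tick);
t.Start();

void t_Tick(object sender, EventArgs e)
{
    angle += 5; // advance the animation state
    this.Invalidate(); // request a repaint; the Paint handler draws using angle
}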
These essential methods and data structures are all you need for most of your graphics logic. The rest is how you handle the various events, good use of flags to switch between modes, and a sound understanding of basic geometry to figure out the maths for drawing the constructs on screen, or animating them in a specific manner.
The funny thing is, much of Windows programming is all about using APIs. If you look at seven-year-old assembly code that uses a Windows graphics library like GDI, compare that with C code that uses GDI/GDI+, and finally compare that with C# code that uses GDI+, the differences are minimal. The concepts are entirely the same, even the API methods used. The languages are generations evolved, and that’s pretty much the only differing factor beyond the obvious ones. C#, being the latest, has successfully built wrappers around essentially unsafe code for the .NET paradigm without sacrificing any of the usability. Managed DirectX became obsolete even after much porting and wrapping was done, but GDI+ is here to stay.
I highly recommend learning a graphics application like Photoshop, so that much of the prototyping and the background graphics for the controls can be done there, and the graphics code and interactivity can then be layered on top as very good extensions.
Then, it’s the keyboard and mouse handling that has to be dealt with to maximise interactivity. I normally disable any sort of factory-made form styling that comes with Windows: I build the prototype visuals in Photoshop, use a simple form, and set the image as its background. Repeat ad infinitum for all the rest of the controls. So how do you handle moving the client area of your application within the confines of the screen? The solution is to use three mouse events: MouseDown, MouseMove, and MouseUp. These events fire when the application message queue receives the corresponding translations of your clicking activity through the Windows message pump. To simulate a click-drag operation that relocates your application to another coordinate, do the following:
Set up these event handlers. There are two ways to do it: use the events tab of the Properties view, or write the wiring after the InitializeComponent() call in the form constructor.
this.MouseHover += new EventHandler(Form1_MouseHover);
this.MouseMove += new MouseEventHandler(Form1_MouseMove);
this.MouseWheel += new MouseEventHandler(Form1_MouseWheel);
this.MouseDown += new MouseEventHandler(Form1_MouseDown);
this.MouseUp += new MouseEventHandler(Form1_MouseUp);
this.DoubleClick += new EventHandler(Form1_DoubleClick);
this.Paint += new PaintEventHandler(Form1_Paint);
The idea is to capture the current point coordinates, save them, and calculate the offset to the new location the mouse points to during the user’s click-drag. Finally, add the offset differences to the original location to set the new one. It’s actually very simple, and it makes the Windows maximise, minimise, close trinity appear (or disappear) trivial to implement. I normally don’t use maximise and minimise, as I prefer to keep the environment streamlined enough to nearly eliminate excessive tool usage. This immediately brings leaps and bounds in productivity.
After working in AV firms, I noticed how my colleagues used so many tools to get a single result, and I could see the severe ergonomic issues they were facing just to get a hash value. As I said, it’s masochistic. This problem spans domains; in fact, from my excursions into music studios, many of my friends who use music equipment to produce music have the same problem. It’s always best to get the most out of a single tool, or maybe two, than to fill a house full of stuff that gets less than 1% of proper use to get 10% of the work done in triple the time. Results do the talking, and workflow ergonomics is my favourite coffee table conversation starter (not with the fairer sex).
Point p;
bool clicked = false;

void Form1_MouseDown(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left)
    {
        p = new Point(e.X, e.Y); // remember where the drag started, in client coordinates
        clicked = true;
    }
}

void Form1_MouseMove(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left && clicked)
    {
        // the difference between the current position and p is the drag offset
        this.Left += e.X - p.X;
        this.Top += e.Y - p.Y;
    }
}

void Form1_MouseUp(object sender, MouseEventArgs e)
{
    clicked = false;
}
The above lines do the job. You mainly use a Point struct instance as a repository, plus a flag to track state between events and propagate the offset values for the final calculation.
One more usability decision I make is automatically focusing whichever control/form the mouse is currently hovering over; this eliminates the repeated Alt-Tabbing needed to get to the destination window. If the toolkit is small and well integrated, the power of not Alt-Tabbing is immediately evident: to skip from window to window, just point at the relevant view and start banging away.
void Form1_MouseHover(object sender, EventArgs e)
{
this.Focus();
}
Keyboard events are handled in a similar manner using the KeyUp, KeyDown and KeyPress events; a sketch follows the next paragraph.
A very good use of custom views is one I learnt from the music software Logic Audio by Emagic GmbH, now Apple’s Logic Pro. This awesome software masterpiece (any electronic musician worth his salt knows this software’s legacy) used a navigation mechanism called screensets, wherein the user assigns a pre-arranged set of the software’s elements to a specific view. All of this can be customised by the user, say one view for recording, another for sampling and keyboard mapping, among other workflow views, and finally assigned to a number from 1-99 along with the CTRL keyboard modifier. This is a very efficient way to work with something as complex as music.
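A minimal sketch of both ideas at once: a KeyDown handler that switches between numbered views, Logic style. The viewMode field and handler name are my own, and the Paint handler is assumed to draw whichever view is active:

// wire up after InitializeComponent():
// this.KeyDown += new KeyEventHandler(Form1_KeyDown);
int viewMode = 1; // 1 = recording view, 2 = sampling view, and so on

void Form1_KeyDown(object sender, KeyEventArgs e)
{
    if (e.Control) // CTRL is the modifier, as in Logic's screensets
    {
        if (e.KeyCode == Keys.D1) viewMode = 1;
        if (e.KeyCode == Keys.D2) viewMode = 2;
        this.Invalidate(); // repaint with the newly selected view
    }
}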