Thoughts from My Three-Night Coding Excursion: Part 1 – Design Principles
I will discuss a few topics that motivate me to make software the way I like it. I am currently writing an Android analysis tool in C#, so I have a few things to share as notes from my daily excursions in tool design while coding my own toolkit to help in my reversing tasks. The agenda is:
- UI and ergonomic design
- Leveraging C# for your daily reverse engineering
The main datasets come from the AndroidManifest.xml file and from the Android SDK APIs used in the dex file. Once the essential information gathering is done, the datasets are ranked, weighted, and sorted, and then mapped against existing research datasets to predict the APIs most likely used by the APK file. This is beneficial because it gives a fairly confident profile of the potential behaviour the APK sample in question could display. Thus we get a per-sample view of the extracted analysis parameters and, finally, permission and API frequencies for the whole batch or dataset.
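To make the extraction step concrete, here is a minimal C# sketch of reading declared permissions out of a decoded AndroidManifest.xml and tallying frequencies across a batch. It assumes the manifest has already been decoded to plain XML (the binary manifest inside an APK needs a decoder such as apktool first); the class and method names are mine, not the tool’s.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class PermissionExtractor
{
    // Android manifest attributes live in the "android" XML namespace.
    static readonly XNamespace Android = "http://schemas.android.com/apk/res/android";

    // Returns the permissions declared in a single decoded manifest.
    public static List<string> ExtractPermissions(string manifestPath)
    {
        var doc = XDocument.Load(manifestPath);
        return doc.Descendants("uses-permission")
                  .Select(e => (string)e.Attribute(Android + "name"))
                  .Where(name => !string.IsNullOrEmpty(name))
                  .ToList();
    }

    // Tallies permission frequencies across a batch of decoded manifests.
    public static Dictionary<string, int> BatchFrequencies(IEnumerable<string> manifestPaths)
    {
        var counts = new Dictionary<string, int>();
        foreach (var path in manifestPaths)
            foreach (var perm in ExtractPermissions(path))
                counts[perm] = counts.TryGetValue(perm, out var c) ? c + 1 : 1;
        return counts;
    }
}
```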
This histogram then shows the trends in a particular sample set; the larger the sample set the better, for obvious statistical reasons. Internally, a score is also computed for each APK file and fed into a final suspiciousness index, and if a threshold is crossed the sample is flagged as likely malicious. Much of the reporting framework is kept minimal and functions more like an interactive dashboard.
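A rough sketch of how such a score and threshold could look follows; the feature names, weights, and cut-off below are purely illustrative assumptions, not the values the tool actually uses.

```csharp
using System.Collections.Generic;
using System.Linq;

static class SuspicionScorer
{
    // Hypothetical per-feature weights, as if derived from a reference dataset.
    static readonly Dictionary<string, double> Weights = new Dictionary<string, double>
    {
        ["android.permission.SEND_SMS"]      = 0.8,
        ["android.permission.READ_CONTACTS"] = 0.5,
        ["Landroid/telephony/SmsManager;->sendTextMessage"] = 0.9,
    };

    const double Threshold = 1.5;   // illustrative cut-off for the suspiciousness index

    // Sums the weights of every extracted feature (permission or API) found in the sample.
    public static double Score(IEnumerable<string> extractedFeatures) =>
        extractedFeatures.Sum(f => Weights.TryGetValue(f, out var w) ? w : 0.0);

    public static bool IsSuspicious(IEnumerable<string> extractedFeatures) =>
        Score(extractedFeatures) > Threshold;
}
```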
This article describes the concepts and methodologies I have used in the tool design and the coding approach I have taken with C#. It is more an exposé of the approach than of the implementation; for that, the source code is available. The current code base has around 5,000 LOC. Without further ado, let’s proceed.
GOOEY/FOOEY: Design Principles
When you deconstruct the acronym GUI, you get the essential parameters needed to represent a metaphor within the constraints of the computer’s visual layers. The layers incorporate both the hardware features and the software translation mechanisms. You have to instruct the system how to communicate between the layers and make it both feasible for computation and useful for human-computer interaction. Current abstractions in graphics and interactive interfaces enable you to think and code in more natural terms. Among the countless visual libraries, I am focussing on the Windows platform’s GDI+ and DirectX APIs to illustrate a few concepts that may just be useful in your next software tool.
It’s conventional wisdom of sorts in the development community that programmers make bad designers, which might just be statistically true for various reasons, though I can pretty much trace it to the bane of all designing paradigms: the command line. This whole way of working hails from an era when the transition from punch cards to blinking lights was a big step in computing, and it has come little further than that. Much great work has been done using this interface, but then I guess the masterpieces in the Louvre don’t have a command line. The visual arts have been with us from the dawn of mankind and, as much as programmers (not all) hate it, they are here to stay. In fact we can learn so much from the existing legacy of visual splendour and imbibe much of its design principles in our own software, using Photoshop to draw a moustache on the Mona Lisa not included.
Text is not the most natural way to work, either from an evolutionary standpoint or given how much information visual symbols communicate compared with something not so intuitive, like sound. In fact many programmer geek types are either autistic or dyslexic, so I guess that explains the cryptic text in most terminals’ history logs and pretty much defeats the use of text. Text and language came along after the eyes and the imaging circuits in our brain had already been training, a priori, for literally thousands of years. This immediate evolutionary advantage is there to be availed of. So how can we do this?
Let’s study some of the more pertinent parameters of how we perceive visual objects and infer meaning from them. Let us start with the ‘LINE’. How many of its properties can you think of that you might make use of?
- width (thickness)
- length
- pressure (ink density, etching depth)
- stroke type (dotted, dashed, smooth)
These are some of the important ones.
Next up, the ‘CIRCLE’ (a special case of an ellipse):
- Radius
- Circumference
- Bounding rectangle
And so on and so forth. Try to list out the properties of other basic shapes like the rectangle, square, triangle, etc. These are the primitives that can be used to describe pretty much any shape, the triangle being of particular use, especially in 3D graphics. Ancient geometry already showed that a sphere can be represented, or approximated, using triangles as a primitive shape.
Moving on to curves: simple ones like the arc, which is part of a circle, plus Bezier curves and splines, are the most common in graphics and design.
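For reference, here is how these primitives typically look in GDI+ inside a Windows Forms Paint handler; the coordinates and pen settings below are arbitrary illustration values, a sketch rather than anything from the tool.

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

class PrimitivesForm : Form
{
    public PrimitivesForm()
    {
        Text = "Primitives";
        Paint += (s, e) =>
        {
            var g = e.Graphics;
            g.SmoothingMode = SmoothingMode.AntiAlias;

            using (var thin = new Pen(Color.Black, 1f))
            using (var thick = new Pen(Color.DarkBlue, 4f) { DashStyle = DashStyle.Dash })
            {
                g.DrawLine(thin, 20, 20, 200, 20);                    // a plain line
                g.DrawEllipse(thin, 20, 40, 80, 80);                  // circle via its bounding rectangle
                g.DrawRectangle(thick, 120, 40, 100, 60);             // dashed (stroke type) rectangle
                g.DrawArc(thin, 20, 140, 100, 100, 0, 180);           // arc: part of an ellipse
                g.DrawBezier(thin, new Point(150, 150), new Point(180, 100),
                                   new Point(220, 200), new Point(250, 150)); // cubic Bezier curve
            }
        };
    }

    static void Main() => Application.Run(new PrimitivesForm());
}
```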
Another important fundamental concept is the coordinate system, and there are quite a few of them; most can be translated into one another mathematically. A coordinate system is essentially a way to represent variables, and much of it is done visually even though the computations are expressed using matrix maths. The essential ones are the Cartesian and the polar coordinate systems. In the first, two or three mutually orthogonal axes are used, onto which the variables are assigned and their values mapped. You already know them as the x-axis and y-axis of 2D graphics; add the third z-axis and you have a 3D coordinate system.
Remember, much of this is entirely conceptual; it’s just our way of making sense of what we sense around us. Time, being the 4th dimension and a universal invariant, is also included if animation does not seem too esoteric. In theoretical physics, dimensions have to be invented just to keep various equations correct, so we don’t know it all yet; even so, this compromise is useful enough to warrant full use in so many ways. The polar coordinate system is used to plot angular motion and variables over a 360 degree range; it’s a way to work with circular motion. In fact, much of game programming mainly uses these two systems. Others are contextually useful as well, like the cylindrical coordinate system, which combines a polar and a Cartesian coordinate system.
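Since the conversion between the two comes up constantly when plotting angular data on a rectangular screen, here is a small sketch of the standard trigonometric mapping in C#:

```csharp
using System;

static class Coordinates
{
    // Polar (radius, angle in radians) -> Cartesian (x, y).
    public static (double X, double Y) PolarToCartesian(double radius, double angle) =>
        (radius * Math.Cos(angle), radius * Math.Sin(angle));

    // Cartesian (x, y) -> Polar (radius, angle in radians).
    public static (double Radius, double Angle) CartesianToPolar(double x, double y) =>
        (Math.Sqrt(x * x + y * y), Math.Atan2(y, x));
}
```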
With the brief theory above, let’s move on to how it all translates to our screen. The computer display matrix is assigned to a 2D system: the width of your screen is the x-axis and the height is the y-axis. The y values are plotted from the top to the bottom of the screen, so the top-left corner has coordinates (0, 0). For our programming purposes, the various visual components each have their own screen coordinates with the same kind of origin, translated to different starting points with respect to their locations.
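Windows Forms already provides helpers for translating between a control’s local coordinates and full screen coordinates; a minimal sketch:

```csharp
using System.Drawing;
using System.Windows.Forms;

static class CoordinateDemo
{
    // Converts the current mouse position (screen coordinates, origin at the
    // top-left of the display) into the control's own client coordinates,
    // whose origin is the control's top-left corner.
    public static Point MouseInClientSpace(Control control) =>
        control.PointToClient(Control.MousePosition);

    // ...and back again, from a point inside the control to screen coordinates.
    public static Point ClientToScreen(Control control, Point clientPoint) =>
        control.PointToScreen(clientPoint);
}
```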
UI design principles do incorporate the very basics and a lot more. Design is an umbrella term that takes the various arts and sciences under its wing. Visual design, sound design, and architecture have so much in common that the only real difference is the audience and the sense titillated. So much of western Baroque music was inspired by the architecture of its time, from churches to royal mansions. The ornate shapes are directly transferred to the complex contrapuntal lines that dictate much of that era’s sound.
Even today, the trends of mimicking or translating differing sources of inspiration and finding a common theme are alive and kicking. Couple that with naturally occurring mathematics, the Golden Ratio and the Fibonacci series among others, and we begin to see a deeply intertwined existence between the disciplines. It almost seems like Dieu, or God, has given us a variety of options to service whichever senses we are most affiliated with.
UI design, being a science of compromises, is a larger proportion art than science, though arguably science is just art that likes to believe it has a separate identity.
Let’s take into account how a multitude of shapes can convey information. We have the attributes:
- Proximity
- Depth
- Colour
- Shape
- Size
- Orientation
- Clustering
- Quantity
- Trends over a timeline
Proximity can convey the nearness or farness of a particular class of objects. This can be used to convey the value of information within a range.
Depth can be used to convey any other attribute attached to that particular object class. For example, if a complex number is used to calibrate a particular value, the imaginary part can be plotted on the z-axis.
Colour can convey the presence or absence of a particular piece of information.
Shape can convey the type of information.
Size can convey the saturation of that particular information or the threshold.
Orientation can be relative to the coordinate system or tied to a specific attribute from a reference standard. This could give the tendency score of a particular class of objects.
Clustering can convey the familiarity between the set of objects collected.
The total quantity of a class of objects is an immediate parameter of use.
Using the 4th dimension can give us the timeline of events as various shapes come in and out in an animated fashion along with changes in information as sizes and colours evolve. This gives us a good data mining set for further processing.
So you should already get the idea of how good use of a single HUD [Heads Up Display] can convey a lot of info with something we learnt in drawing class.
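As a toy illustration of mapping data attributes onto shape properties, here is a sketch that draws one circle per sample, with size conveying a score and colour conveying a flag; the sample records below are hypothetical, not the tool’s actual data model.

```csharp
using System.Drawing;
using System.Windows.Forms;

class HudForm : Form
{
    // Hypothetical per-sample results: a score and a boolean flag.
    readonly (string Name, int Score, bool Flagged)[] samples =
    {
        ("app1.apk", 10, false),
        ("app2.apk", 35, true),
        ("app3.apk", 20, false),
    };

    public HudForm()
    {
        Text = "Mini HUD";
        Paint += (s, e) =>
        {
            int x = 30;
            foreach (var sample in samples)
            {
                int diameter = sample.Score;                                   // size conveys magnitude
                var brush = sample.Flagged ? Brushes.Red : Brushes.SteelBlue;  // colour conveys a flag
                e.Graphics.FillEllipse(brush, x, 60 - diameter / 2, diameter, diameter);
                e.Graphics.DrawString(sample.Name, Font, Brushes.Black, x, 100);
                x += 90;
            }
        };
    }

    static void Main() => Application.Run(new HudForm());
}
```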
Placement
When you design your UI in Windows the first platform for experimentation is the ubiquitous Windows Forms. How will you place controls in the layout grid? How much information should be presented to the user? How many contextual views should be used? What colour combination and fonts should be used? If you design your own custom controls, do they jar with existing paradigms that users are familiar with? Are deeply nested menus the solution to all problems in life? These are some of the important questions you should ask while designing your next app.
If you read legacy books on the first Windows APIs for visual design, it’s pretty interesting to see how things have become more convenient yet adapted to our more masochistic perversions, and thus in reality never actually changed.
Nowadays there is no need to elaborately fill out data structures and pass reams of header data just to display your form. It gives a good sense of nostalgia, but thank God, or rather Microsoft, for encapsulating much of the boring, by-the-numbers typist tapestry. Saving our already crazy heads is not the only improvement; the carpal tunnel is taken care of as well (or is it?). Nowadays it’s just drag and drop. And what do you drag and drop? The same old ‘menubar’, ‘toolbar’ and the fantastically useless ‘statusbar’ onto the same old squarish box called the Form.
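For the curious, the drag and drop boils down to code like the following: a plain Form with the three usual strips created by hand. A minimal sketch, nothing more; the captions are placeholders.

```csharp
using System.Windows.Forms;

class ShellForm : Form
{
    public ShellForm()
    {
        Text = "The same old squarish box";

        var menu = new MenuStrip();
        menu.Items.Add("&File");
        menu.Items.Add("&Help");

        var tools = new ToolStrip();
        tools.Items.Add("Open");
        tools.Items.Add("Analyse");

        var status = new StatusStrip();
        status.Items.Add("Ready");

        // Add order matters for docking: the control added last is docked first,
        // so the menu ends up at the very top, the toolbar just below it,
        // and the status bar at the bottom.
        Controls.Add(status);
        Controls.Add(tools);
        Controls.Add(menu);
        MainMenuStrip = menu;
    }

    static void Main() => Application.Run(new ShellForm());
}
```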
So think about control placement in terms of the kind of interactivity and visual use the controls get. You are either right-handed or left-handed (most are right-handed); if you are ambidextrous this particular consideration is irrelevant, unless you have a favourite hand, in which case it isn’t. Notice ATMs and piano keyboards: what do they have in common? They cater to the majority by placing the controls so that right-handed people can go about their tasks easily. Left-handed pianos don’t exist; just like in cricket, left-handers use the standard tool and adapt to the scenario. So what happens if you place the essential tools at the top rather than in a position geared for ergonomic efficiency? You get an average that is not ergonomic but expected, which eliminates the surprise factor. I guess you get the compromise part of this whole endeavour.
My proposed solution is to present the important options based on handedness: a vertical toolbar that enumerates options from top to bottom in order of use or importance. This toolbar can be shifted to either side to suit the user’s handedness, and tracking the use frequency of a particular item can promote the items used most regularly. Ideally it should be placed near the essential controls so that minimum movement is required to get the data out, in the minimum number of pixels travelled by the mouse (hey, I just got a new measurement!). Minimal use of the keyboard for navigation purposes is also recommended.
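A sketch of the idea, assuming a simple handedness preference; the enum and item captions below are hypothetical stand-ins.

```csharp
using System.Windows.Forms;

enum Handedness { Right, Left }

class HandedForm : Form
{
    public HandedForm(Handedness hand)
    {
        var tools = new ToolStrip
        {
            // Right-handed users get the toolbar on the right, left-handed on the left.
            Dock = hand == Handedness.Right ? DockStyle.Right : DockStyle.Left,
            LayoutStyle = ToolStripLayoutStyle.VerticalStackWithOverflow,
        };

        // Items listed top to bottom in order of importance / frequency of use.
        tools.Items.Add("Open APK");
        tools.Items.Add("Extract");
        tools.Items.Add("Report");

        Controls.Add(tools);
    }

    static void Main() => Application.Run(new HandedForm(Handedness.Right));
}
```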
Next, the number of tool items should ideally be an odd number, as it’s been suggested that humans tend to remember odd-numbered groupings with greater accuracy.
Spacing
Next, the mutual spacing between the different items can have a very beneficial effect when done well. Regular spacing is recommended to give a sense of balance, but too much space reduces tension, and that reduces the usability of the design.
Drawing parallels from music, western music specifically incorporates a measurement concept that is multipurpose and sonically relevant in day-to-day use for any literate musician. It’s the basis of chord construction, counterpoint, choir arrangements, and more: the all-purpose INTERVAL. Much of music theory is an intellectual dance around conventions whose musical context varies from culture to culture, but this one is resilient. An interval is essentially the distance between two pitches, or notes in musical jargon. Conventional musical wisdom, accumulated over the centuries, has assigned every such distance a pleasing factor that is surprisingly binary in nature: consonant intervals and dissonant intervals.
For those who are musically literate, you understand the different intervals like the perfect 5th or the minor 3rd. The sounds they describe are unique to the 12-tone tempered scale, which is again a compromise on the sound structure to enable instruments to play in different keys without losing the musical effect. In an equal-tempered octave, the octave itself is the only whole-integer ratio; the rest are irrational approximations of the pure ratios. It’s just the way our number system is aligned, or misaligned, to evaluate the natural ratios permeating our universe.
In the larger scheme of things, when constructing a musical passage, the tensions are just as important as the resolutions. That itself is the guiding mechanism: the listener expects a unifying theme that also surprises him, soothes him, and takes him on a journey. Visually, think of this as using a simple square as one figure and a square with a rhombus intersecting it as another. That variety stirs things up to give an ebb and flow. If it does not flow, it’s a potential magnet for mosquitoes. You get the idea.
Keeping the important tool items closer and keeping other items spaced further immediately makes use of the above theory.
Font
Next, effective use of fonts and font properties is also very useful. Not all fonts are equal; or to be more specific, all fonts are equal, though some are more equal than others. Using Comic Sans in a tech presentation does not make sense, but neither does using Times New Roman at a small size. For regular apps, Perpetua, Helvetica, Courier, and generic sans-serif faces are good enough. If accessibility is not the priority just yet, set the font size to something all users can read, not only the ones with an Apple display; typically I set it in the 10-13 point range. Typography is a rich subject that rewards research on your own.
Colour
Colour theory is pretty much all about choosing colours, within the medium’s constraints, that give the best effect. The Kuler wheel on the Adobe website gives a good demonstration of how colour combinations work. The thing I learnt is to stick with grave, sober colours, and to use complementary colour schemes that are easy on the eye.
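One quick way to derive a complementary pairing in code is to take the RGB inverse of a base colour; a tiny sketch (this is the simplest definition of a complement, not the only one):

```csharp
using System.Drawing;

static class Palette
{
    // The RGB inverse of a colour, one simple notion of its complement.
    public static Color Complement(Color c) =>
        Color.FromArgb(c.A, 255 - c.R, 255 - c.G, 255 - c.B);
}

// e.g. Palette.Complement(Color.FromArgb(90, 70, 50)) turns an earthy brown
// into a cool bluish grey to pair it with.
```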
This has to do more with how we have been seeing things since day one. Soil has a dark brown hue. The skies have a cool blue colour on a good day. Heavy nimbus-clouds have a grey offering that inspires poets. Trees have a soothing green shade. Much of nature’s hues are called ‘earthy colours’- duh. Black is not really a colour, more like the absence of it. In fact it’s really difficult to get black, so it’s more like using concentrations of brown to approximate black. A colour compromise.
We are not used to seeing fluorescent green all day, every day, so better not to stick with it. Our retinas are also sensitive to long wavelengths like red, so warning devices can incorporate red to signal a specific state; use red, or a nearby range like orange, to warn the user that something is happening.
What you see on screen is not always what you will get in print, so if colour is important make sure it’s checked for gamut validity. Most graphics apps do this by denoting safe colours. The difference comes mainly from the subtractive mixing of pigments versus the additive mixing of light. Read the physics on your own.
These things are taken for granted, but when used properly should give the intended increase in the usability factor.
User Interactivity
Getting to interactivity, single clicks and double clicks should be used to the max, without going beyond the one-click rule. It’s far better to toggle using any kind of input, like mouse, voice recognition, tablet, and the like. A HUD view that’s context relevant and fast is really helpful; otherwise, context-specific views are welcome as well. Data should be shared transparently among the different views, which gives a synchronised effect.
Overview and zoom are also important factors for large datasets, each is an essential navigation device.
Scrolling can be avoided if the data is well formatted, or scrolling (especially horizontal scrolling) can be made more intuitive through better interaction with the display itself. Editing modes can be used to switch between navigation and direct data manipulation; these abstractions immediately make the user more aware of his current status in the application. Further, the keyboard modifiers CTRL and SHIFT are very helpful for accelerating and decelerating the scrolling parameters, like the number of lines per scroll. ALT and SPACE are close together, so I avoid ALT, but SPACE can be very useful for the main activations.
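A sketch of the modifier idea, written as a hypothetical owner-drawn view that manages its own scroll offset; the line counts below are arbitrary illustration values.

```csharp
using System;
using System.Windows.Forms;

class ReportView : Control
{
    int topLine;                // index of the first visible line
    int totalLines = 500;       // hypothetical document length

    protected override void OnMouseWheel(MouseEventArgs e)
    {
        int linesPerNotch = 3;                                                   // default speed
        if ((ModifierKeys & Keys.Control) == Keys.Control) linesPerNotch = 12;   // CTRL accelerates
        if ((ModifierKeys & Keys.Shift) == Keys.Shift) linesPerNotch = 1;        // SHIFT decelerates

        int direction = e.Delta > 0 ? -1 : 1;                                    // wheel up moves towards the top
        topLine = Math.Max(0, Math.Min(totalLines - 1, topLine + direction * linesPerNotch));
        Invalidate();                                                            // repaint with the new offset

        base.OnMouseWheel(e);
    }
}
```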
Good use of selection mechanisms also lets the user feel in control of the application, which increases its use.
Finally, the idea of MODES is more interactive than deeply nested menus.
Have you ever not wanted to click the close button, yet somehow the mouse pointer got to the top right (top left on a Mac) and closed the form for you? I never feel safe with the close button looming like a sword over my head, so I propose using a form region and a double click for exiting. It’s intuitive: you start an application by double clicking it (most do), so you might close it in a similar fashion. It takes the thinking, searching, and pointing out of the picture.
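A minimal sketch of such an exit region in Windows Forms; the panel size and colour are arbitrary choices.

```csharp
using System.Drawing;
using System.Windows.Forms;

class ExitRegionForm : Form
{
    public ExitRegionForm()
    {
        var exitRegion = new Panel
        {
            Dock = DockStyle.Bottom,
            Height = 40,
            BackColor = Color.DimGray,
        };
        exitRegion.MouseDoubleClick += (s, e) => Close();   // double click anywhere in the strip to exit
        Controls.Add(exitRegion);
    }

    static void Main() => Application.Run(new ExitRegionForm());
}
```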
Nowadays I am studying a very visible though niche domain called FUI design, or Fantastical User Interface design. Check out Hollywood movies: look at every computer screen while the protagonists are doing something on it, and you see weird, out-of-this-world, amazingly fast, amazingly good-looking panels that display text and visuals in a great way.
Also check out Mark Coleran’s work in this area; it’s amazing how much hard work goes into UI design. Getting it to look good and be useful enough is the challenge. Well, for much of Windows-related stuff, getting it to look good is THE challenge; usability factors come second. The essentials I could take from such work are that an element of inspiration or creativity belongs in every piece, but the fundamentals of design still apply.
Use of large text to communicate information, flashing and timers, animations that simulate some sort of movement, and populating the screen to give an illusion of complexity are among the other gems from this goldmine. Add creative inspiration from real-life products and other software tools, and of course a spice of fantasy or the unreal, and you have the melange that creates this beautiful world of high-tech machinery. And not a single line of code is written: it’s all done in Photoshop and animated in After Effects. Well, we can code, so why don’t we do something about it?
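Here is a trivial Windows Forms sketch of one of those tricks, a timer flashing a large status label to draw the eye; the interval, font, and colours below are arbitrary assumptions.

```csharp
using System.Drawing;
using System.Windows.Forms;

class FlashForm : Form
{
    public FlashForm()
    {
        var alert = new Label
        {
            Text = "ANALYSIS IN PROGRESS",
            Font = new Font("Segoe UI", 24f, FontStyle.Bold),   // large text communicates state at a glance
            AutoSize = true,
            ForeColor = Color.OrangeRed,
        };
        Controls.Add(alert);

        var timer = new Timer { Interval = 500 };               // flash twice a second
        timer.Tick += (s, e) => alert.Visible = !alert.Visible;
        timer.Start();
    }

    static void Main() => Application.Run(new FlashForm());
}
```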
Moving On
Continue to experiment with better use of visuals to communicate ideas and make your application comfortable to work with. Nothing is set in stone, and, more often than we would like to admit, mistakes have been made, a lot of them, so why not learn from them?