
C# is my favourite language and I definitely intend to stick with it: the community is amazing, and more and more programming paradigms are being incorporated into .NET. From Eiffel to F#, IronPython to managed/unmanaged C++/CLI, you can’t go wrong with this one. From Windows to Xbox, the power is visible everywhere. I will discuss the classes I use most frequently when building my own reversing tools, with a few pointers here and there.

Most of the tools I make for my day to day work involve the following things from C#:

  1. Multithreading (System.Threading.Thread and the BackgroundWorker component)
  2. Decoupling the UI from resource-hogging algorithms, i.e. responsive applications
  3. Extensive use of events and delegates for communication between the various forms and controls
  4. File system classes (File, Path, DirectoryInfo, FileInfo, FileSystemInfo) that wrap directory and file access
  5. FileStream/MemoryStream classes for working with dynamically read or generated data
  6. BinaryReader and BinaryWriter classes
  7. Extensive use of collections and generics
  8. Structs with readonly fields, and enums
  9. String manipulation classes and methods
  10. Properties (get and set accessors)
  11. Regex
  12. The Process class for starting applications and reading command-line output
  13. GDI+ graphics/3D

This whole set can be invoked by importing a few namespaces.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Drawing.Drawing2D;
using System.IO;
using System.Threading;
using System.Diagnostics;
using System.Text.RegularExpressions;

The above set sums up much of my namespace laundry list.

Let’s delve into the classes themselves.

Most reversing primers include format parsing of some sort, whether it’s a PE file, a Dex file, a PDF, a JPEG, and so on. For this I use a byte array. It’s very convenient to have a direct representation of the actual bytes as integer values from 0-255 for every byte of the corresponding format. Once that is done, parsing mainly involves walking through the array and extracting specific lengths at specific offsets, which are obtained from the headers of the respective formats.

Of course, if editing the file is required, an array would be expensive for large files, so a List<byte> can be used instead. While using BinaryReader to fill an instantiated array is fine, performance is far better with the File.ReadAllBytes() method. This takes the path of the target file in the file system (it could be any binary file) and returns its contents as a byte[]. For reading in a series of files for later manipulation in memory, I use a List<byte[]>, or nest further by adding lists into an accumulator list such as List<List<byte[]>>. This keeps things simple when enumerating long lists for graphic displays, and it is effective both for searching for a particular value and for addressing a specific type within a type.
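The read-then-walk pattern above can be sketched as follows. This is a minimal, hypothetical example: the file name is a placeholder, and the 0x20 offset (the file_size field in the dex header) is used purely as an illustration of extracting a value at a known header offset.

```csharp
using System;
using System.IO;

class OffsetParser
{
    // Read a 32-bit little-endian value at a given offset in a byte array.
    public static uint ReadUInt32(byte[] data, int offset)
    {
        return (uint)(data[offset]
                    | data[offset + 1] << 8
                    | data[offset + 2] << 16
                    | data[offset + 3] << 24);
    }

    static void Main()
    {
        string path = "sample.dex"; // placeholder path: any binary file works
        if (!File.Exists(path)) return;

        byte[] data = File.ReadAllBytes(path);

        // 0x20 is the dex header's file_size field offset (illustrative).
        Console.WriteLine("file_size: 0x{0:X}", ReadUInt32(data, 0x20));
    }
}
```

The same helper works at any header offset; parsing a format is mostly a matter of calling it with the right offsets.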

I use structs extensively for any sort of custom data-structure template, with value retrieval and setting implemented. This saves a lot of time in later processing, and being a value type, a struct is easily passed around within a List<structType> for better manipulation, e.g. List<dexFileStruct>. Because structs are copied by value, references sometimes don’t behave the way you expect, so setting a struct field anywhere other than the constructor is a bad idea. The solution is to make the fields readonly, instantiate a new struct with the new set of field values, and point to that instead where needed in lists or stacks. Don’t try to reset an existing struct’s field member.
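A minimal sketch of that readonly-field pattern, with a hypothetical dex-header struct (the field names are illustrative, not taken from any real header definition):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical template for a parsed dex file entry.
struct DexFileStruct
{
    public readonly string FileName;
    public readonly uint FileSize;

    public DexFileStruct(string fileName, uint fileSize)
    {
        FileName = fileName;
        FileSize = fileSize;
    }

    // "Changing" a field means building a new value, never mutating this one.
    public DexFileStruct WithFileSize(uint newSize)
    {
        return new DexFileStruct(FileName, newSize);
    }
}

class StructDemo
{
    static void Main()
    {
        var list = new List<DexFileStruct>();
        var d = new DexFileStruct("classes.dex", 0x1000);
        list.Add(d.WithFileSize(0x2000)); // replace with a new struct, don't reset
        Console.WriteLine(list[0].FileSize);
    }
}
```

Because every update produces a fresh value, copies stored in lists or stacks can never be silently out of sync with the "original".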

Properties make life a lot easier for both classes and structs. Exposing types this way also adds a measure of safety, since you control the entry and exit points of a particular field value.
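As a quick sketch of that entry/exit-point idea (the HeaderField class and its 16-bit limit are invented for illustration), validation can live in the set accessor so no caller can bypass it:

```csharp
using System;

// Hypothetical wrapper for a 16-bit header field; names are illustrative.
class HeaderField
{
    private uint val;

    // The set accessor is the single entry point, so validation lives here.
    public uint Value
    {
        get { return val; }
        set
        {
            if (value > 0xFFFF)
                throw new ArgumentOutOfRangeException("value");
            val = value;
        }
    }
}

class PropertyDemo
{
    static void Main()
    {
        var h = new HeaderField();
        h.Value = 0x1000;                 // goes through the set accessor
        Console.WriteLine("0x{0:X}", h.Value);
    }
}
```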

Moving on to filesystem access and enumeration: the best way to get a list of directories is to instantiate the DirectoryInfo class and get the FileInfo array for every directory within the root. This process can be repeated for every directory level. Using the FileSystemInfo class also works, but unless you need extensive drilling capabilities, I suggest using recursion with the above-mentioned classes. FileSystemInfo is also a bit buggy at times and crashes for no apparent reason.

DirectoryInfo d = new DirectoryInfo(targetPath);
DirectoryInfo[] di = d.GetDirectories();

foreach (DirectoryInfo i in di) {

    FileInfo[] f = i.GetFiles("*.dex");

    foreach (FileInfo j in f) {

        Console.WriteLine(i.Name);

        ProcessStartInfo ps = new ProcessStartInfo();
        ps.FileName = "cmd.exe";
        ps.CreateNoWindow = true;
        ps.UseShellExecute = false;

        ps.Arguments = "/c " + Environment.CurrentDirectory + "\\dexdump\\dexdump.exe"
            + " -d " + "\"" + j.FullName + "\"";
        ps.RedirectStandardOutput = true;

        using (Process pi = new Process()) {

            pi.StartInfo = ps;
            pi.Start();

            string temp = pi.StandardOutput.ReadToEnd();

            using (FileStream fs = new FileStream(Environment.CurrentDirectory + "\\dexdumpOutput\\" + i.Name + ".txt", FileMode.Create)) {

                using (StreamWriter sw = new StreamWriter(fs)) {
                    sw.WriteLine(temp);
                }
            }
        }
    }
}

The above snippet illustrates driving the command line through cmd.exe and collecting the output of another command-line tool into memory and then to a file. The ProcessStartInfo and Process classes are used for this; note the various fields set on each class to get the intended output. Keeping them separate is a good design decision: when the parameters to a class are numerous and complex, its arguments can be encapsulated in a separate class that is then referenced by the class requiring it. Very robust.
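The snippet walks only one directory level; the recursion suggested earlier can be sketched like this (the Walker class and its method names are mine, not from the original snippet):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class Walker
{
    // Recursively collect files matching a pattern under a root directory.
    public static void Collect(DirectoryInfo dir, string pattern, List<FileInfo> results)
    {
        foreach (FileInfo f in dir.GetFiles(pattern))
            results.Add(f);

        // Repeat the process for every directory level.
        foreach (DirectoryInfo sub in dir.GetDirectories())
            Collect(sub, pattern, results);
    }

    static void Main()
    {
        var results = new List<FileInfo>();
        Collect(new DirectoryInfo("."), "*.dex", results);
        foreach (FileInfo f in results)
            Console.WriteLine(f.FullName);
    }
}
```

Each FileInfo collected this way can then be fed into the ProcessStartInfo loop above.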

Next, threading can be really simplified using the BackgroundWorker component. A C# component exposes functionality but does not provide a UI. BackgroundWorker exposes three events: DoWork, ProgressChanged, and RunWorkerCompleted. It also exposes a Boolean property, WorkerReportsProgress. I normally don’t use cancellation much and instead build the UI around that, taking things up in chunks and then processing them. To initiate the background process, you call RunWorkerAsync(<object argument>) and pass any input parameter, which must be cast back inside the DoWork event handler code; this could typically be a file/folder path or a list of user data types sent for processing. The DoWork event handler is where you write your most resource-hogging code. If any updates are required during the operation, you can send a percentage-completed integer value along with an object instance containing the user state; this can be any data type, and it has to be cast later on in the ProgressChanged event handler. After completion of the task, the RunWorkerCompleted event is triggered, and its handler can hold whatever code should run once the work is done. You can use as many BackgroundWorker components as needed, giving maximum flexibility. Couple that with Timer class instances and you get a very good threading model.
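A minimal sketch of that event flow follows. It is a console-app approximation (the ManualResetEvent wait stands in for a message loop, the "sample.dex" argument is a placeholder, and Thread.Sleep stands in for real parsing work):

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class WorkerDemo
{
    static void Main()
    {
        var done = new ManualResetEvent(false);
        var worker = new BackgroundWorker();
        worker.WorkerReportsProgress = true;

        worker.DoWork += (s, e) =>
        {
            string path = (string)e.Argument;      // cast the input back
            for (int i = 1; i <= 5; i++)
            {
                Thread.Sleep(50);                  // stand-in for heavy parsing
                ((BackgroundWorker)s).ReportProgress(i * 20, path);
            }
            e.Result = "done";
        };

        worker.ProgressChanged += (s, e) =>
            Console.WriteLine("{0}% - {1}", e.ProgressPercentage, (string)e.UserState);

        worker.RunWorkerCompleted += (s, e) =>
        {
            Console.WriteLine("Result: {0}", e.Result);
            done.Set();
        };

        worker.RunWorkerAsync("sample.dex");       // placeholder argument
        done.WaitOne();                            // console-only: wait for completion
    }
}
```

In a WinForms application the wait is unnecessary: ProgressChanged and RunWorkerCompleted are marshalled back to the UI thread automatically, which is what keeps the form responsive.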

Next up: strings are immutable in C#. There are classes that make efficient use of memory and processing power when manipulating them; simple concatenation in string-intensive code is a lot more expensive than using one of the dedicated classes. I often need a character array so that each character can be thoroughly analysed, and the <string>.ToCharArray() method does just that. I find the Trim() method very useful for removing specific leading and trailing characters from a string. The Split() method takes a char[] and splits the string at those pivot points. The StringBuilder class is very useful when building long lists of strings after extensive parsing. Very simple: just instantiate and use the Append()/AppendLine() methods.
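The methods just mentioned can be shown in a few lines (the input string is invented for illustration):

```csharp
using System;
using System.Text;

class StringDemo
{
    static void Main()
    {
        string line = "  AD1: method  ";

        // Trim leading/trailing whitespace, then split at a pivot character.
        string[] parts = line.Trim().Split(new char[] { ':' });
        Console.WriteLine(parts[0]);              // AD1

        // Walk individual characters via a char array.
        foreach (char c in parts[0].ToCharArray())
            Console.Write((int)c + " ");
        Console.WriteLine();

        // Build long output with StringBuilder instead of repeated concatenation.
        var sb = new StringBuilder();
        for (int i = 0; i < 3; i++)
            sb.AppendLine("entry " + i);
        Console.Write(sb.ToString());
    }
}
```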

The Regex class’s IsMatch() method returns a bool for a string pattern tested against an input string. Quick and dirty: "[aA][dD][1-9]" matches any string containing upper- or lower-case ‘a’, then ‘d’, then any digit from 1-9, in that order. It’s a very powerful method for quickly extracting certain strings from long logs or dumps.
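Using the very pattern from the text (the test strings are made up):

```csharp
using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        // Matches a/A, then d/D, then a digit 1-9, anywhere in the input.
        Regex r = new Regex("[aA][dD][1-9]");

        Console.WriteLine(r.IsMatch("method Ad7 found"));    // True
        Console.WriteLine(r.IsMatch("AD0 is out of range")); // False: 0 is not in 1-9
    }
}
```

Running IsMatch() line by line over a dexdump log is usually all the "parsing" a quick triage needs.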

Much of the real-world code I write uses any and all of these in various ways, dictated by the results required.

For my current software pursuits I am using all of what C# can offer, but I tend to keep things streamlined, and once I find a good set of classes I begin implementing the logic myself using the code constructs provided by the language. Thus you have I/O, multithreading, the usual programming constructs, extensive graphics support, streams for fast byte-level processing, excellent debugging utilities, provisions for unsafe code, use of the Win32 API if needed, and networking code, among others. I think the power of rapid application development is very evident, and the benefits far outweigh any cons of using C#. In fact, you can use just about any language that’s in vogue and collaborate with other developers on a common platform. In the end, I think that’s the biggest advantage. Think about it: are you .NET-wise, and if not, what are you missing out on?