The International Information Systems Security Certification Consortium or, more succinctly, the (ISC)², tells us that security engineering consists of…
[T]he practice of building information systems and related architecture that continue to deliver the required functionality in the face of threats that may be caused by malicious acts, human error, hardware failure, and natural disasters.
It goes on to say that such a system must maintain the security and integrity of its data and must not give up any information beyond what its programming specifies. To verify that this actually happens, we use software testing. Designers, programmers, and engineers are expected to exercise due diligence in support of software security.
What Is Due Diligence?
Due diligence is the exercise of the care that would be expected of a reasonable person in a given circumstance. In terms of information security practices, it means that an individual or group takes steps to identify risk and employs risk management practices that will best protect the system and its information, according to established industry best practices. It also extends to software and technology procurement.
Testing Takes Many Forms
During development, a number of different testing methodologies and software security testing tools are available. Let’s look at some of them briefly.
Black Box Testing
Black box testing amounts to functionality testing. It is also the traditional methodology of criminal hackers: they know that a certain input produces a certain output, and although they have no idea what process produces the result, by experimentation they can gradually make good guesses and try to manipulate the program into producing results it was not designed for. The same approach is a valid testing method during development to assure the functionality of the program.
For instance, in this trivial example, the tester may know that inserting two values produces a new value; let’s say: 2 and 2 with an output of 4. Is it “adding” or “multiplying”? Are those the only possibilities? Let’s try a different number pair, say 3 and 4, which, if the first possibilities were true, would output either a 7 or a 12, but the result is actually 81. What single explanation holds true for everything that has happened? It looks like the first variable is a normal decimal and the second variable is being used as an exponent, so what at first appeared to be either 2 + 2 or 2 × 2 was actually 2² = 4. And in the second case it was 3⁴, or 3 × 3 × 3 × 3 = 81.
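The guessing game above can be sketched as a tiny black-box probe. The `mystery` function and the candidate hypotheses are invented stand-ins for this illustration; the tester can only call the function, not read it:

```python
# Hypothetical system under test: callable, but its internals are hidden from the tester.
def mystery(a, b):
    return a ** b  # the "secret" implementation the black box tester cannot see

# Candidate explanations for the observed behavior
hypotheses = {
    "addition": lambda a, b: a + b,
    "multiplication": lambda a, b: a * b,
    "exponentiation": lambda a, b: a ** b,
}

# Probe with chosen inputs and discard any hypothesis that disagrees with the output
surviving = dict(hypotheses)
for a, b in [(2, 2), (3, 4)]:
    observed = mystery(a, b)
    surviving = {name: fn for name, fn in surviving.items()
                 if fn(a, b) == observed}

print(sorted(surviving))  # (2, 2) rules nothing out; (3, 4) leaves only exponentiation
```

The pair (2, 2) is a poor probe because all three hypotheses predict 4; the pair (3, 4) separates them, which is exactly the tester's experimental skill at work.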
Obviously it’s much more sophisticated than this. Typically, black box methodologies involve error guessing by experienced hackers who know where they are likely to find faults or errors that they can exploit.
It can also include boundary-value analysis. Another trivial case is a furnace with two possible states, on or off, determined by whether the temperature is less than 70° F or greater than 70° F. Could you cause a system failure by telling the system that the temperature is exactly 70° F? This is a boundary failure: >70° F and <70° F leave the value of exactly 70° F unconsidered as a valid state.
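A boundary-value test for that furnace might look like the sketch below. The control function is a deliberately buggy illustration of the gap described above, not real thermostat code:

```python
def furnace_state(temp_f):
    """Deliberately buggy control logic: it only considers < 70 and > 70."""
    if temp_f < 70:
        return "on"
    if temp_f > 70:
        return "off"
    return None  # exactly 70 falls through to an undefined state

# Boundary-value analysis: probe just below, exactly at, and just above the boundary
for temp in (69.9, 70, 70.1):
    print(temp, furnace_state(temp))  # the exact boundary exposes the undefined state
```

The values on either side of the boundary behave as designed; it is the boundary itself that reveals the failure.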
White Box Testing
White box testing is the exact opposite of black box testing. It is sometimes called clear box testing because the tester has complete access to the program's code and can see, in intimate detail, how every input is treated and how every output is created.
With complex programs, the sheer amount of detail can be overwhelming and can actually inhibit effective testing. Originally, white box testing was confined to individual components or “units,” but testing methods have grown more sophisticated over the years, to the point that we can now test interoperability between units as well as inputs and outputs system-wide.
The single largest problem, however, is testing something so large and complex that some things can be overlooked. This is particularly true if some units were not yet implemented when the system test was run.
This type of testing generally encompasses control-flow and data-flow analysis, along with branch testing. It looks at statement and decision coverage to make sure the program has a response (even if it is simply raising the correct error message) for every situation. And finally, it covers path testing to make sure that every query has a destination and that there are no dead ends.
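As a minimal illustration of decision coverage (the function and the test cases are invented for this sketch), white box tests aim to drive execution down every branch the tester can see in the code:

```python
def classify(n):
    # Three branch outcomes: a complete white box suite exercises each one
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Decision coverage: one case per branch outcome, so every statement executes
cases = {-1: "negative", 0: "zero", 7: "positive"}
for value, expected in cases.items():
    assert classify(value) == expected
print("all branches covered")
```

A black box tester might never think to try zero; the white box tester can see the `n == 0` decision and knows it must be exercised.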
Gray Box Testing
Gray box testing is a mixture of the two previous methods. The tester can use the straightforward (and fast) methodology of black box testing but also has access to the comments, documentation, and actual code of the program, as in white box testing.
When testers get stuck on some aspect during black box testing, they can quickly refer to the code, which can eliminate the need for certain tests and help them design more effective ones. It’s basically the best of both worlds.
Static vs. Dynamic Testing
These are two sides of the same coin. They are not in opposition, nor are they mutually exclusive.
Static testing is done in a non-runtime environment. This is where you go through the code line by line, module by module, object by object, and look for broken loops, coding flaws, and possibly even back doors into the system. Back doors are sometimes created by designers so they can access the system if an incorrect security protocol locks everybody out. It is important that they be removed before the code goes “live,” not simply forgotten about.
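Part of that line-by-line review can even be automated. The toy scanner below (a sketch; the scanned source and the heuristic are invented for illustration) walks a parse tree without ever running the code, flagging hard-coded string comparisons against a `password` variable, one telltale signature of a simple back door:

```python
import ast

# Hypothetical source under review, containing a hard-coded back door
SOURCE = '''
def login(user, password):
    if user == "debug" and password == "letmein":  # back door left in by a developer
        return True
    return check_credentials(user, password)
'''

findings = []
for node in ast.walk(ast.parse(SOURCE)):
    # Flag any comparison of a variable named "password" against a string literal
    if isinstance(node, ast.Compare) and isinstance(node.left, ast.Name):
        if node.left.id == "password":
            for comp in node.comparators:
                if isinstance(comp, ast.Constant) and isinstance(comp.value, str):
                    findings.append((node.lineno, comp.value))

print(findings)  # line number and hard-coded secret, found without executing anything
```

Real static analysis tools apply thousands of such rules, but the principle is the same: the code is inspected as data, never executed.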
This is, in fact, how God Mode was invented in video games. Designers, during testing, needed to test a particular aspect of the game but didn’t want to have to play through the entire game to get to the area that they wanted to test. By entering a specific command they could walk through walls, give themselves infinite ammunition, and be invulnerable to attacks by the creatures of the game.
These controls were left in the game, and clever players soon discovered them. In games, it’s merely fun. In a business scenario, it can cause immense security problems, a credibility disaster with your customers, and become a living nightmare to repair.
Dynamic testing is essentially “live testing” within a runtime environment, although generally not while the system is being used by actual customers. Here it becomes the job of ethical hackers and penetration testers to find any faults. These experts, often former real-life hackers or reformed white-collar criminals, bring their expertise to bear on the best work of your programmers.
It’s a wonderful learning environment for your coders to see exactly what is exploitable. It brings a new perspective and makes them aware of the fact that what seems safe and innocuous is far more vulnerable than they imagined.
Every time they get hacked they learn something. And each time they become more determined not to get hacked again. After they’ve been exploited a few times by your experts, they start turning into really solid programmers who rarely make mistakes.
Pretty soon you’re going to have a team that is the envy of the industry. Of course, not everybody can afford to hire their own ethical hackers and penetration testers, but there are companies that provide these services at a much lower cost than retaining a full-time employee.
The same rules apply when acquiring software, whether for yourself or on behalf of others. Consider a company that installs a brand-new SAP system but, instead of buying SAP's personnel module, wants to integrate it with its existing personnel management program (e.g., PeopleSoft).
These systems were not designed to work together. They’re both reliable systems, with decent security, but the point where they meet, their interface, could expose both systems to exploitation.
This is true in every instance where you put any two foreign pieces of software together. The output of one has to match the expected inputs of the other. Will an output from one program crash the other because it overflowed a buffer with an unexpectedly large value? Can one system handle an error code from the other system?
All this and more must be taken into consideration. The number one rule about adding any piece of software to an existing system is that you cannot assume that the new piece of software was written correctly and encoded perfectly.
You may not be able to know every single piece of output that a new program is capable of but, if your own code is written properly, it will only accept permissible values for inputs, so that even if the new piece of software fails, it doesn’t take the rest of your system with it.
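In code, that defensive posture usually takes the form of a validation layer at the interface. Everything in the sketch below is an assumption for illustration: a hypothetical record format with a `name` and a `count` field, and limits chosen arbitrarily:

```python
MAX_NAME_LEN = 64      # assumed limit; set it to what your own system can safely handle
MAX_COUNT = 1_000_000  # likewise an assumed bound on a numeric field

def accept_record(raw):
    """Validate a record arriving from a foreign system before it goes any further.
    Out-of-bounds input is rejected here, not passed on to crash code downstream."""
    if not isinstance(raw, dict):
        raise ValueError("record must be a mapping")
    name = raw.get("name")
    count = raw.get("count")
    if not isinstance(name, str) or not (0 < len(name) <= MAX_NAME_LEN):
        raise ValueError("name missing, empty, or too long")
    if not isinstance(count, int) or not (0 <= count <= MAX_COUNT):
        raise ValueError("count missing or out of range")
    return {"name": name, "count": count}  # only permissible values pass through

# A malformed record from the other system is rejected at the boundary, not propagated
try:
    accept_record({"name": "x" * 500, "count": 3})
except ValueError as err:
    print("rejected:", err)
```

Even if the foreign program misbehaves, the failure is contained at the interface rather than spreading through your system.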
Regard it as military-style compartmentalization: the system is designed so that components can fail individually, but everything cannot fail collectively. It is an excellent philosophy to adopt and keep in your business tool box.
Nothing is perfectly secure. Something that was impenetrable yesterday may be hackable tomorrow. It is a constantly escalating game of skill vs. skill. The best we can do is to be at the top of our game every day and not let the attackers get a step ahead of us. We may not be able to win, but we certainly can’t afford to lose! Stay secure.