
HD Moore Reveals His Process for Security Research

March 22, 2011 by Jack Koziol

In our ongoing series of interviews, we got HD Moore to answer a few questions and pull back the curtain a bit on the methods, tools, and motivation behind his security exploit research.

HD Moore is Chief Security Officer at Rapid7 and Chief Architect of Metasploit, the leading open-source penetration testing platform. HD founded the Metasploit Project in the summer of 2003 with the goal of becoming a public resource for exploit code research and development. Prior to joining Rapid7 and continuing his work on the Metasploit Framework, HD was the Director of Security Research at BreakingPoint Systems, where he focused on the content and security testing features of the BreakingPoint product line. Prior to BreakingPoint, HD spent seven years providing vulnerability assessments, leading penetration tests, and developing exploit code.

What motivates you to find security vulnerabilities?

My role as CSO of Rapid7 involves vetting the products, technologies, and services that my organization uses in the course of business. This process of testing these products and determining which solutions we can safely use drives the majority of my vulnerability research today. I still do some vulnerability research and reverse engineering for the fun of it, but most of my day-to-day work has a direct impact on how we choose solutions.

What are the primary tools you use, and how do you use them?

I depend on an extensive library of virtual images in order to simulate the production environment for a tested solution. Currently, my virtualization software of choice is VMware Workstation, but I have used Xen and QEMU in the past. On the analysis side, the Metasploit Framework contains a number of modules that are directly useful; from network sniffers to fuzzers and executable analyzers, it provides much of my “live” testing of the product. The API makes it easy to build new tools and modules as needed for the particular project. On the static analysis side, IDA Pro Advanced is a key tool, along with the BinDiff add-on. The rest of the toolkit consists of standard networking and debugging applications: programs like socat for configuring network relays for man-in-the-middle testing, and WinDbg for live analysis of Windows applications.
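The relay role that socat plays here can be sketched in a few lines of Python. This is a minimal illustration of the idea, a listener spliced to an upstream target with both directions of traffic logged for inspection, not a replacement for socat itself; the function names are our own:

```python
import socket
import threading

def pump(src, dst, label, log):
    """Copy bytes from src to dst, appending each observed chunk to log.

    Recording the traffic in the middle is what makes a relay useful for
    man-in-the-middle testing, much like socat's -v option.
    """
    while True:
        data = src.recv(4096)
        if not data:                      # peer closed its write side
            break
        log.append((label, data))
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)      # propagate EOF downstream
    except OSError:
        pass

def relay(listen_port, target_host, target_port, log):
    """Accept one client on listen_port and splice it to the target."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))
    t = threading.Thread(target=pump, args=(client, upstream, "c->s", log))
    t.start()
    pump(upstream, client, "s->c", log)   # server-to-client in this thread
    t.join()
```

Pointing a client at the relay's port instead of the real service yields a full transcript of the exchange in `log`.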

How do you choose your target of investigation? Do you pick your target application and look for bugs, or look for a genre of bug in many different applications?

My projects tend to be focused on a specific solution, making it relatively easy to prioritize target services based on the attack surface. In some cases, such as my work with VxWorks last year, that meant testing a wide variety of hardware platforms all running the same base operating system. If the research turns up a problem in a common library or code pattern, I try to plan for a wider review to identify other affected applications.

How do you handle disclosure? Which vendors have been good to work with and which have not?

The Rapid7 disclosure policy is fairly simple: we provide vendors with information about the flaw and, after 15 days, communicate the vulnerability information to CERT. CERT has a fixed 45-day disclosure window, which results in a report-to-advisory period of approximately 60 days. As a courtesy, I usually wait 30 days from the release of the advisory before adding the exploit to the Metasploit Framework.

What are you working on currently?

I recently finished audits of various file transfer appliances and QA test plan management systems. The results were typical: some widely deployed solutions suffered from endemic flaws that made them unfit for deployment, while others needed to fix only a few minor issues to present an acceptable solution. I am now looking into a niche technology area that may yield useful results by this summer.

How do the memory protections built into Windows 7 change vulnerability research for you? Which specific protections do you feel impact exploitability the most?

DEP/NX, ASLR, SafeSEH, and stack cookies have each contributed to making the exploitation process on modern Windows more difficult. It’s hard to say which was the most important, since they are interdependent and their effectiveness comes from the combination of methods. At the most basic level, the addition of a working ‘no execute’ flag to the x86 platform (NX/hardware DEP) made the biggest change to how exploits are written.

What features of Metasploit do you regularly use in your exploit development process? Which would you recommend to someone just getting started?

The massive number of protocol libraries and the various exploit development tools included in the framework make finding and exploiting vulnerabilities a breeze with Metasploit. The most important and likely least appreciated tool in the framework is ‘msfpescan’. This command line utility can analyze any EXE or DLL for specific patterns and reduces the amount of time it takes to find viable return addresses and ROP gadgets. The fuzzers within Metasploit make it easy to test new implementations and serve as guides for building simple protocol fuzzers using common techniques. These fuzzers implement bit corruption, byte sweeping, and per-field bad input testing.
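The three mutation techniques named above are straightforward to illustrate outside of Metasploit. Here is a minimal Python sketch of each; the helper names and the list of bad inputs are illustrative, not Metasploit's own:

```python
def flip_bit(data: bytes, index: int) -> bytes:
    """Bit corruption: flip one bit of the input at the given bit index."""
    out = bytearray(data)
    out[index // 8] ^= 1 << (index % 8)
    return bytes(out)

def byte_sweep(data: bytes, value: int = 0xFF):
    """Byte sweeping: yield one mutant per position, overwriting each byte
    in turn with a fixed value and leaving the rest of the input intact."""
    for i in range(len(data)):
        out = bytearray(data)
        out[i] = value
        yield bytes(out)

# A tiny sample of classic bad inputs; real fuzzers carry far larger lists.
BAD_VALUES = [b"", b"A" * 5000, b"%n%n%n%n", b"\x00", b"-1", b"2147483648"]

def per_field_cases(fields: dict):
    """Per-field bad input testing: yield one request per (field, bad value)
    pair, keeping every other field at its known-good value."""
    for name in fields:
        for bad in BAD_VALUES:
            mutated = dict(fields)
            mutated[name] = bad
            yield mutated
```

Each generator produces a deterministic, replayable sequence, which matters when you need to reproduce the exact input that crashed a service.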

Is the future pen testing platform of choice something more like Stuxnet or Metasploit?

Penetration testing as a process assumes that the security professional conducting the test can bring the appropriate tools. In the case of something like Stuxnet, the tool has to bundle everything it needs for easy copying between systems. Given the difference in goals and impact of detection, the Metasploit model will likely remain the method of choice for the foreseeable future.

Of all of the security bugs you have discovered and written exploits for, which is your favorite or most notable?

My favorite exploits are those that rely on strange ways to get code execution. One example is an old vulnerability in the Veritas (now Symantec) BackupExec agent. This service had a flaw that resulted in the virtual function table of an object being corrupted, leading to a condition where the pointer of a pointer was being called. Exploiting this required knowledge of a pointer in memory that in turn pointed to shellcode. The trick for this exploit was to pick an address high in the heap and send repeated requests that triggered an error handler, preventing each request from being freed. Eventually a newly allocated block of memory would line up with the chosen address and the shellcode would execute. This technique is common for client-side vulnerabilities where the attacker has direct control of memory allocation, but is much more difficult for server-side flaws.
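The core of that technique, leaking allocations until one lands on a pre-chosen address, can be modeled with a toy allocator. This is a simplified illustration of the idea only, not the BackupExec exploit; the addresses and block size below are made up for the example:

```python
def requests_to_cover(chosen_addr: int, heap_base: int, block_size: int) -> int:
    """Toy model of the heap fill: each request that trips the error handler
    leaks one block, so allocations march upward from heap_base in
    block_size steps. Returns the number of requests sent before one
    leaked block spans chosen_addr (where the attacker expects the
    shellcode pointer to sit)."""
    addr = heap_base
    requests = 0
    while True:
        requests += 1                             # send one more leaked request
        if addr <= chosen_addr < addr + block_size:
            return requests                       # this block covers the target
        addr += block_size

# Illustrative numbers only: spray from a low heap address toward a
# repeated-byte address high in the heap, 4 KB per leaked request.
n = requests_to_cover(0x0D0D0D0D, 0x0C000000, 0x1000)
```

In the real exploit, each “request” was a protocol message whose error path kept its buffer from being freed; the chosen address just has to be far enough up the heap that the spray reliably reaches it before anything else does.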

A big THANKS to HD for taking our questions and for the great tools he continues to develop. Check out the InfoSec Institute site, where you’ll find boot camps covering everything from Ethical Hacking to Web Application Pen Testing, as well as test prep courses such as CSSLP training and Cisco’s CCNA certification.

Jack Koziol is president and founder of Infosec, a leading security awareness and anti-phishing training provider. With years of private vulnerability and exploit development experience, he has trained members of the U.S. intelligence community, military, and federal law enforcement agencies. His extensive experience also includes delivering security awareness training for Fortune 500 companies including Microsoft, HP, and Citibank. Jack is the lead author of The Shellcoder's Handbook: Discovering and Exploiting Security Holes. He also wrote Intrusion Detection with Snort, a best-selling security resource with top reviews from Linux Journal, Slashdot, and Information Security Magazine. Jack has appeared in USA Today, CNN, MSNBC, First Business, and other media outlets for his expert opinions on information security.