Stefan Esser is best known as the PHP security guy. Since becoming a PHP core developer in 2002, he has devoted a lot of time to PHP and PHP application vulnerability research. This led to the founding of the Hardened-PHP Project in 2004 and the development of the PHP security extension Suhosin in 2006. He is also the co-author of a German book about PHP security and a regular speaker at security conferences such as Black Hat, SyScan and Hack in the Box.
In 2007, Stefan was listed by eWeek as one of the 15 Most Influential People in IT Security after organizing the Month of PHP Security. But even before then, he had already released numerous advisories about vulnerabilities in popular software like CVS, Samba, OpenBSD and Internet Explorer.
Because of that work, researchers at X-Force still list him among their top ten all-time vulnerability disclosers.
Since 2007, he has worked as head of research and development at SektionEins GmbH, the German web application company he co-founded. Part of his research has been the development of an ASLR implementation for jailbroken iPhones, which he demonstrated at the end of 2010, several months before Apple added the feature to stock iOS. In 2011 he released an iOS kernel exploit that is the key ingredient in all current iPhone jailbreaks.
What motivates you to find security vulnerabilities?
When I started looking at applications to find security vulnerabilities, I was still in school. It was really fascinating to realize that there were so many classes of programming errors that could be abused to gain control of a system. I treated it as a treasure hunt, a game to find these problems before anyone else did. Actually writing exploits for these problems was even more fun, because they were sometimes quite tricky. To be fair, this was about 15 years ago, and software at that time was a lot less secure than it is today; it was much easier both to find these bugs and to exploit them. But the increased difficulty today just makes it even more fun for me.
During my fifteen years in infosec I went through a phase where I thought that by killing as many bugs as possible I would make the world a safer place. Nowadays I believe that merely killing bugs is tilting at windmills. It is far more effective to kill entire classes of bugs with new mitigation techniques, which is also something I have done repeatedly in the past.
What are the primary tools you use, and how do you use them?
Because I work in different fields of infosec, ranging from simple web application audits to low-level kernel exploitation on closed systems like the iPhone or the Xbox, I use many different sets of tools. For web application audits I often use the Burp Suite proxy, because it combines a powerful tool for manual analysis with a simple but often effective automated black-box scanner. There are other smaller third-party and SektionEins-internal tools that I use, but I steer clear of the big black-box web application scanners, because I don't trust those black boxes to find vulnerabilities effectively.
When you look at my other work you will see that I also love to use IDA and the Zynamics BinDiff and BinNavi tools. I don't think I could have done any of my iPhone work without them. For source code audits I use a simple editor like TextMate. While others rely on tools like Source Insight, I have learned to live successfully without them.
And of course I use a lot of my own scripts, written in Python or PHP, alongside the usual things like a C compiler.
How do you choose your target of investigation? Do you pick your target application and look for bugs, or look for a genre of bug in many different applications?
One must distinguish between targets I investigate for work and targets I look at in my free time. My work is mainly driven by customer contracts: the customer decides what targets I look at and how much time I spend on them, so in these cases I have a fixed target and try to find as many bugs as possible in the allocated time. The targets I investigate in my free time or during research projects are different. For example, if I discover a specific new class of attacks, like property-oriented programming attacks against PHP applications that call unserialize on user input, I will check many applications for that type of flaw. There are many sites that maintain long lists of PHP applications, so I just go through each of them. In other cases I will concentrate on a specific application for a while and look at it until I get bored. Usually these are applications I have to use for some reason, that are very popular at the moment, or that are used by my friends.
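The unserialize problem he describes has direct analogues in other languages. A minimal Python sketch using pickle illustrates the core danger of deserializing attacker-controlled input; the LogFile class and its payload are hypothetical stand-ins for a PHP "gadget" class whose __wakeup() or __destruct() method a property-oriented programming chain would abuse:

```python
import pickle

# A "gadget" class whose deserialization hook runs attacker-chosen code,
# analogous to a PHP class with an abusable __destruct()/__wakeup() method.
class LogFile:
    def __reduce__(self):
        # On unpickling, this instructs pickle to call eval(...);
        # any reachable callable would work just as well.
        return (eval, ("'attacker code ran'",))

# The attacker crafts the serialized data offline...
payload = pickle.dumps(LogFile())

# ...and the application blindly deserializes user input:
result = pickle.loads(payload)
print(result)  # the attacker-chosen expression was evaluated
```

The fix is the same in both ecosystems: never feed untrusted data to a native deserializer; use a data-only format like JSON instead.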
When I am not working for a customer, I also ignore vulnerabilities below a certain severity threshold and usually go after vulnerabilities that give me full control.
How do you handle disclosure? Which vendors have been good to work with and which have not?
If I decide to disclose a security problem, I usually send the vendor an email detailing the problem, wait a while, and finally disclose it publicly.
But in some cases I have skipped notifying the vendor first, because previous encounters showed that the vendor would produce a publicly viewable fix and then never release a fixed version to the public, which makes prior notification useless. In the past I have also presented 0-day vulnerabilities during conference talks.
Because most of my public disclosures were many years ago, I do not want to point out bad vendors. Vendors have evolved over time, and it would be unfair to call most of them out now for the mistakes of their past. For example, my experience disclosing a bug in Internet Explorer ten years ago was not that great, but those were different times and that was a completely different Microsoft. On the other hand, vendors like Linux and PHP still have a long way to go before I would consider their handling of security issues optimal.
What are you working on currently?
At the moment my research focuses on iPhone security. I try to look at it from all sides, attack and defense. On the one hand I like to come up with defenses, like my ASLR implementation for jailbroken iPhones; on the other I try to break those defenses as badly as possible. That is why I research iOS kernel exploitation while also looking into ways to improve the data encryption.
What do you think is the biggest challenge facing infosec as an industry?
The biggest challenge for infosec is that the security problem cannot be solved with technical solutions alone. The weakest link is still the user. Users tend to choose easy-to-guess passwords, reuse passwords everywhere, or even give away their password for a piece of chocolate. They also tend to click on every link that comes their way. In a recent test on Twitter, I watched several thousand people click on a link that explicitly said: “Click here to get infected.”
And yes, users are also responsible for not applying the latest security updates to their systems.
Is a jailbroken iPhone more or less secure? Why does adding ASLR to a jailbroken phone make it more secure?
Jailbroken iPhones are less secure than factory iPhones because the jailbreak disables protection features like non-executable memory, code signing and sandboxing.
With these features disabled, it is not only possible to bypass Apple’s DRM; exploiting a memory corruption vulnerability also becomes much easier. In the case of a remote vulnerability in MobileSafari, it is therefore far easier to develop a working exploit.
Adding ASLR to a jailbroken iPhone makes it more secure because even on a jailbroken iPhone an attack still has to start with a short ROP (return-oriented programming) payload. A ROP payload chains together existing pieces of code that are already in memory in order to create arbitrary programs.
Because ASLR randomly rearranges these code pieces, without an additional information-leak vulnerability an attacker can only guess their positions. The more random the code positions are, the less likely an attack is to succeed.
After several years of a Month of PHP Security why wasn’t there one for 2011?
We are not even halfway through 2011. Isn’t it a bit early to ask why there wasn’t a Month of PHP Security in 2011? Jokes aside, there have been only two Months of PHP Security in the last five years: one in 2007 and one in 2010. Organizing another one in 2011 would require a lot of time and work, and at the moment I simply have other (better) things to do than dig through the PHP source code.
What is your current involvement level with PHP security?
I audit PHP applications for work on a regular basis, and I am still the main developer of Suhosin, the extension that brings additional security features to PHP. Aside from that, I don’t have time for PHP security topics right now.