Practical Shellshock exploitation – Part 1

October 31, 2014 by Srinivas

Topics covered

  • Introduction
  • What is Shellshock?
  • When can it be exploited?
  • How to check if you are vulnerable
  • Checking your bash version
  • Running the fancy one-liner on your terminal
  • Technical insights of Shellshock
  • The basics of bash shell variables
  • Introducing bash environment variables
  • Exporting bash functions to environment variables
  • Parsing function definitions from strings
  • The actual vulnerability
  • Possible exploits


Shellshock is now one of the buzzwords in the security community. After “Heartbleed”, it is the most widely discussed vulnerability of the recent past. This article first gives you the internal details of the vulnerability, then walks you through the step-by-step procedure of setting up your own lab to demonstrate Shellshock, along with its exploitation.

What is Shellshock?

Shellshock is a vulnerability in GNU Bourne Again Shell (BASH), which allows an attacker to run arbitrary commands using specially crafted environment variables.

When can it be exploited?

This is the most important piece of this article. Before understanding how to exploit this Shellshock vulnerability, we need to understand the potential targets that are vulnerable to Shellshock. This will also help us in building a lab to demonstrate how to exploit this vulnerability.

If you have read some news about Shellshock on the Internet, you might have seen vulnerable targets listed such as Apache mod_cgi, SSH, DHCP, etc.

I will make things clear using SSH as an example.

Using OpenSSH as your SSH server with bash as the default shell does not, by itself, make you exploitable. A few additional conditions must be met, as explained below.

You may be vulnerable if you have set up “authorized_keys” for your clients with specific restrictions such as forced command execution (for example, a “command=” option or sshd’s ForceCommand) that runs before the user’s own command.

Well, you don’t need to worry about this right now, as we will discuss it in detail in a moment.

As of now, please keep in mind that “our services become vulnerable if we are using any program that uses a vulnerable version of bash as an interpreter and if the attacker is able to control the value of an environment variable that is being passed to bash”.

Why? Because this is not a vulnerability in SSH; rather, it is a vulnerability in “bash”.

How to check if your bash is vulnerable

Checking for bash version:

Bash versions through 4.3 are known to be vulnerable. So, one quick check is to look at your bash version using the following command.

$ bash --version

Cool! The version shown in my terminal falls in the vulnerable range.

Running the fancy one-liner in your terminal:

There is a one-liner which became very popular after the disclosure of this Shellshock vulnerability.

$ env x='() { :;}; echo shellshocked' bash -c "echo test"

Running the above line in your terminal shows if you are vulnerable to Shellshock. If “shellshocked” gets printed in the output, you are vulnerable and it’s time to update.
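For reference, here is the one-liner written out cleanly, with the behavior you should expect on each side of the patch:

```shell
# On an unpatched bash, the child shell executes the trailing
# "echo shellshocked" while importing the environment, so both lines
# appear. A patched bash prints only "test".
env x='() { :;}; echo shellshocked' bash -c "echo test"
```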

Technical insights of the Shellshock vulnerability

At this point, you should be asking the following questions:

What does the above line do?

What’s happening in the background?

Why am I vulnerable if “shellshocked” gets printed?

To make things clear, let’s first go through the basics.

The basics of bash shell variables

Generally, we can print something using the echo command as shown below.

$ echo “shellshock”

Now, storing that value in a variable works much like in any other scripting language. I am going to put it in a variable called “myvar”.

$ myvar="shellshock"
$ echo $myvar

This is shown below.

Now, let us open up a child process and see if we can get the value of the variable.

As we can see in the above figure, we couldn’t read the value set by the parent process into the child process.
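Since the original screenshots are not reproduced here, the session can be sketched as follows (the variable name and value are the ones used above):

```shell
# A plain shell variable stays local to the shell that defined it.
myvar="shellshock"
echo "$myvar"                           # the parent shell prints: shellshock

# The child bash started here never sees it, because it was not exported.
bash -c 'echo "child sees: [$myvar]"'   # prints: child sees: []
```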

Introducing bash environment variables

This is where we can comfortably talk about environment variables.

When you start a new shell session, some variables are already set and ready for use. These are called environment variables.

When we want to access the above-mentioned “myvar” variable in a child process environment, we need to make it an environment variable in order to make it available for the new process. We can do it using “export”.

This is shown below.

Looking at the above figure, we can clearly see that the sub process is able to access the value of “myvar”.

Now, just to confirm that this is added to your environment variables, run the following command.

$ env | grep 'myvar'

In the above command, we are printing out all the environment variables and filtering out our target variable.
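Putting the steps together, the session from the figures can be sketched as:

```shell
# export marks the variable for inheritance by child processes.
myvar="shellshock"
export myvar
bash -c 'echo "child sees: $myvar"'   # prints: child sees: shellshock

# Confirm it is now part of the environment.
env | grep '^myvar='                  # prints: myvar=shellshock
```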

Exporting bash functions to environment variables

Similarly, functions can be exported to child process environments as shown below.

In the above figure, we first defined a function as shown below.

x() { echo 'sample text'; }

Then we called the function x.

In order to be able to access it from the sub process, I have exported it using the ‘-f’ flag as shown below.

export -f x

As expected, I am able to print the text inside the subshell.
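A concrete version of that sequence (the function body here is an illustrative stand-in; run it inside bash, since `export -f` is a bash feature):

```shell
# Define a function; the body is a placeholder for the figure's code.
x() { echo 'hello from x'; }
x               # runs in the parent shell: hello from x

# export -f puts the function definition into the environment,
# so a child bash inherits it as a callable function.
export -f x
bash -c 'x'     # runs in the subshell: hello from x
```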

Parsing function definitions from strings

So far, we have seen how bash variables and functions work, as well as how we can export them to be able to access them in a sub process.

Now, we are getting closer to Shellshock.

The above-mentioned function definition can also be kept in a string.

To show this, I am taking a new variable called “newfunction”. You will see in a moment why I am naming a variable after a function.

$ newfunction='() { echo shellshockdemo; }'

As we did earlier, we are just defining a variable. Now, we can access it as a regular variable as shown below.

$ echo $newfunction

The above two steps are shown in the following figure.

Everything is as expected so far.

Now, let us export it to environment variables and access it from the sub shell as shown below.

$ export newfunction
$ bash
bash-3.2$ newfunction

Fantastic! Although it is just a variable in the parent shell, it is interpreted as a function inside the subshell, and calling it executes the function body.

Let’s also look at the environment variables to check for our function.

How is it possible?

When a new shell is launched as a child process, it takes the value of the string and interprets it as a function definition, since the value starts with “()”.
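The session can be reconstructed as follows. Note that this function-as-string import only happens on a pre-patch bash; a patched bash deliberately refuses to import it:

```shell
# The value is an ordinary string in the parent shell...
newfunction='() { echo shellshockdemo; }'
echo "$newfunction"             # prints the literal string

# ...but once exported, a pre-patch child bash parses it as a function
# definition because the value starts with "()". On a patched bash the
# child sees only a plain variable, and "newfunction" is not a command.
export newfunction
bash -c 'newfunction' || true   # pre-patch: shellshockdemo; patched: command not found
```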

The actual vulnerability

Finally, we are there!

This time, I am going to terminate the function definition and pass some arbitrary commands after terminating the function as shown below.

$ export newfunction='() { echo shellshockdemo; }; echo damn! I am vulnerable'

In the above piece of code, I have added “echo damn! I am vulnerable” after terminating the function definition.

Now, spawn a new bash shell by typing “bash” and observe what happens. These two steps are shown below.

Bingo! The code added outside the function definition has been executed during the bash startup.

If we execute the function now, it goes as expected.

This is shown below.
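The whole experiment can also be reproduced in a single shot with env (a sketch; “PAYLOAD RAN” stands in for the longer message):

```shell
# Everything after the closing "}" lies outside the function definition.
# A vulnerable bash executes it while importing the environment, before
# the child's own command ever runs; a patched bash ignores it.
env newfunction='() { echo shellshockdemo; }; echo PAYLOAD RAN' \
    bash -c 'echo child command done'
```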

Now, if you go back to the earlier section “When can it be exploited?”, here is the answer:

Condition 1 – Your bash version should be vulnerable (through 4.3).

Condition 2 – An attacker should be able to control the environment variables being passed.

Condition 3 – A new bash shell should be spawned (sub process).

To automate the whole process, we are using “env” as shown below.

$ env x='() { :;}; echo shellshocked' bash -c "echo test"

Generally, env can be used to print all the environment variables as we have seen earlier.

But if you look at the man page of env, it can also be used to run commands.

Alternatively, we can use “--help” as shown below.

First let’s look at a simple example as shown below.

$ env newvar=demo bash -c 'echo $newvar'

In the command shown above, env places newvar in the environment, and the new bash subprocess accesses it and prints its value.

Now, if we look at our fancy command,

$ env x='() { echo accessme; }; echo vulnerable' bash -c 'x'

We should see the following output:

Again, the same concept.

x is a variable being exported. Since its value begins with “()”, the subshell treats it as a function definition and the definition is imported.

But before that, “vulnerable” is printed upon spawning the shell, before the function is ever called.

This whole process is represented in the following figure.

By this time, you should understand why this simple one-line command is so dangerous.

Possible exploits

Below are a few critical instances where the Shellshock vulnerability may be exposed:

  • Apache HTTP Server using mod_cgi or mod_cgid, with CGI scripts either written in bash or spawning bash subshells.
  • Overriding or bypassing the ForceCommand feature in OpenSSH sshd.
  • Running arbitrary commands on a DHCP client machine via a malicious DHCP server.
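To make the first bullet concrete, here is a local simulation of the CGI vector (no web server involved; the header and payload are illustrative). A CGI server copies attacker-controlled request headers such as User-Agent into environment variables like HTTP_USER_AGENT before invoking the script's interpreter:

```shell
# Simulate what mod_cgi does: the attacker's User-Agent header becomes
# the HTTP_USER_AGENT environment variable of a bash-based CGI script.
# On a vulnerable bash, "INJECTED" appears before the script's output;
# a patched bash prints only the normal CGI response.
env "HTTP_USER_AGENT=() { :; }; echo INJECTED" \
    bash -c 'echo "Content-type: text/plain"; echo; echo "normal CGI output"'
```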

In the next article, we will see how to set up our own lab and demonstrate how to exploit vulnerable OpenSSH and Apache servers.


Srinivas is an Information Security professional with 4 years of industry experience in Web, Mobile and Infrastructure Penetration Testing. He is currently a security researcher at Infosec Institute Inc. He holds the Offensive Security Certified Professional (OSCP) certification. He blogs at www.androidpentesting.com. Email: srini0x00@gmail.com

One response to “Practical Shellshock exploitation – Part 1”

  1. Rick Karcich says:

    Re: testing for Shellshock… would like your feedback… specifically, regarding defining a testing strategy to find this Bash vulnerability…

    Problem Statement:
    Given the knowledge about Shellshock that’s been developed, viz.,
    1) http://lwn.net/Articles/614218/
    2) http://arstechnica.com/security/2014/10/ghost-in-the-bourne-again-shell-fallout-of-shellshock-far-from-over/
    3) http://resources.infosecinstitute.com/practical-shellshock-exploitation-part-1/

I want to use combinatorial testing to increase the effectiveness and efficiency of testing the attack surface for Shellshock. To date, successful testing for Shellshock has involved fuzz testing, including the creation of a ‘fuzzer’ for Shellshock (CVE-2014-6277 / -6278).

    Shellshock may be an interesting case for combinatorial testing for a number of reasons…

    Point 1: Shellshock is yet another case of failed bounds checking: the Bash developer failed to ensure that evaluation would stop after the closing “}” in an environment variable containing a function definition.

    But it’s even simpler than the usual buffer overflow, because you don’t have to work at formatting the bytes correctly to get them executed on the stack; you just insert a command and Bash does the rest. So it’s not a problem of developers being too lazy to manage memory properly, it’s a design problem of developers being too casual about evaluating things. It looks like any code that does any kind of evaluation should be tested on the input “exec /bin/crash_the_system”, which could be entertaining.

    In terms of inputs, there can exist well-formed and mal-formed examples of strings, numbers, filenames, arithmetic expressions, IP addresses, function definitions, assignment statements, etc. In the case of Shellshock there exist multiple vectors for the Shellshock vulnerability.
This set grows over time as application-specific issues are surfaced, e.g., a shell might have a problem with a filename beginning with “-” or “--”, since that syntax is reserved for an option name, so “-badfilename” is added to the class.

Testing a two-argument procedure then consists of giving it the Cartesian product of this input class. Actually, it’s worse than that: a procedure that expects inputs of the form “ENV=” needs to be fed inputs containing every possible input on the right-hand side of the statement. For example, “ENV=() { exec /bin/crash_the_system }”.

    In the known Shellshock vectors such as HTTP CGI, dhclient, etc. the only requirement is that attacker-controlled data is copied from a protocol packet into the execution environment prior to executing bash through an API that would preserve the environment.

    Point 2: The problem with Shellshock is that it depends both on the input provided and the environment it’s executed in. Apart from stock installs, there’s no way of knowing what the latter might be, and hence no way to combinatorially test that there isn’t some input for that environment that would fail.

    The combinatorial method needs to be extended vertically as well: what is the context of the input processing … a shell, … a device manager?

    More generally, is the input taken as data? Is it evaluated before it is used? Is it passed un-touched to a spawned process?

Point 3: A number of bash vulnerabilities, including variants of Shellshock, have been found via the AFL fuzzer. Michal Zalewski found the related CVE-2014-6278; his fuzzer (AFL) found bugs that had been in Bash for years and were never reported. This is a testament to the value of the approach. It just takes a lot of work to wade through the results.

    It is possible to measure the combinatorial coverage achieved by these fuzz tests. I propose investigating combinatorial coverage achieved by running Zalewski’s AFL and exploring the space of triggering mechanisms and based on the coverage measurements choosing the most effective set of test variables. Then change the fuzzer to increase coverage based on combinatorial coverage measurements.

    To measure the combination coverage, we just need test values in a matrix where each row is a test and each column represents a Bash parameter/environmental variable.

    Additional instructions regarding AFL are at:

    Point 4: It’s not clear that Shellshock’s etiology would have been appropriate for automated testing. It was code that did what was intended: the problem is that it was invoked in (remotely-triggered) contexts that were unintended.

    The triggering condition was the appropriately-formatted environment variable. The issue that made it dangerous was that there were circumstances under which remote users could specify values that were turned into environment variables by applications that ended up invoking Bash. In a local environment, it was a non-issue: a clever way to execute commands bash would have let you execute in a more straightforward way. This has implications for setting up any proposed testing environment for Shellshock.

    If the attack surface is the shell parser, then one could just run

bash -nc 'randomly-generated-string'
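A toy sketch of that idea (illustrative only; the -n flag makes bash parse the string without executing it):

```shell
# Generate a few random candidate strings from shell-flavored characters
# and ask bash only to parse them, recording which ones are syntactically
# valid. A real fuzzer would bias generation toward valid commands.
for i in 1 2 3 4 5; do
  s=$(head -c 32 /dev/urandom | tr -dc 'a-z(){};$= ')
  if bash -nc "$s" 2>/dev/null; then
    echo "parsed OK: [$s]"
  else
    echo "syntax error: [$s]"
  fi
done
```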

    In the case of Shellshock, if sending untrusted input to the Bash parser were a goal, then extensive testing of that parser would be critical…

    The real solution, in my opinion, however is not to send untrusted input to the parser… this feature needs to be disabled by default…


    Discussion of the problem, Heartbleed & Shellshock:
    Explore finding these vulnerabilities through fuzz testing and hypothesize how they’ve escaped detection via testing…

    The Heartbleed and Shellshock vulnerabilities are independent in nature and need to be considered separately.

    For Heartbleed, there’s network input controlling the size of a memcpy, resulting in a memory leak. This was exacerbated by the reuse of pre-allocated buffers, meaning the attacker would always have a valid chunk of memory to read.

The reason Heartbleed was not found by fuzzing is that the heartbeat extension of the SSL spec is rarely implemented in protocol fuzzers, or even in legitimate clients and servers, and the memory reuse meant there was no possibility for an exception to indicate something was wrong.

    The reason Heartbleed was not found by static analysis is because the taint analysis was too conservative. Coverity has since added coverage for static analysis by including an additional taint source for data that passes through a ntohs() type function that implicitly indicates the data comes from the network. In the case of OpenSSL, they had their own macro re-implementing the byte-swap. This is documented here: http://security.coverity.com/blog/2014/Apr/on-detecting-heartbleed-with-static-analysis.html.

    In short, Heartbleed wasn’t found through static analysis engines without adding specific logic for the taint source. It would not be discoverable through generational fuzzing because it is a logic bug and does not generate an exception. You would need static analysis such as described in the Coverity blog or dynamic taint analysis in addition to a concolic (http://en.wikipedia.org/wiki/Concolic_testing) fuzzing engine such as KLEE or SAGE (http://www.recon.cx/2014/slides/Fuzzing%20and%20Patch%20Analysis%20-%20SAGEly%20Advice.pptx) and a defined taint check that validates the packet data being sent back to the client. This currently does not exist although would be an interesting exercise with KLEE (http://hci.stanford.edu/cstr/reports/2008-03.pdf) or SpecExplorer.

    For Shellshock, there’s network input being passed to a child process via the environment, and resulting in arbitrary command execution through a ‘by design’ feature of Bash. There should be a known security boundary here because historically environment variable and argument parsing is shown to be an attack vector for locally exploiting setuid programs. Further it is the case that any network program that executes a child process with attacker influenced arguments or environment is part of the attack surface.

    In practice, it is difficult to automate this without whole-system taint analysis (ala Bitblaze or PANDA) since it involves a pivot from network->server->bash and would have an extremely high false positive rate. It would also require specific asserts on the code logic that executes bash script functions. So, unfortunately, this is a logic flaw that also could not be found through generational fuzzing. I would be very interested to see if KLEE ( http://hci.stanford.edu/cstr/reports/2008-03.pdf) or SpecExplorer can handle this situation.

In this case for Shellshock, it is also useful to look at the final patch that came out, which eliminated the external function execution ability of Bash. Seeing the code they patched gives us a clue as to the characteristics of the code we should be checking, and if we can generalize those characteristics then we can search other software as a starting point.

    More Discussion of the Problem,
    – While checking for the conditions that trigger ‘Shellshock’ doesn’t look hard, it’s validating all possible configurations in their class that would be prohibitively expensive…
    – While it’s possible to fuzz(ily) test Bash, the problem is, first, we have to find a way to generate strings that maximize the chance of being a genuine command or a command that triggers the Shellshock vulnerability. This is time-consuming and expensive…
    – Second, once you generate a command, how will your fuzzy test program know if it found the vulnerability? It’s easy when Bash segfaults, but in the case of Shellshock, it wasn’t a crash. It was rather that Bash was executing code where it shouldn’t. Not even humans were able to tell that for more than 20 years.
    – It’s hard to tell whether Bash reporting a syntax error is a true syntax error, or a genuine vulnerability. The odds are considerably in favor of the former, which make it hard to weed out the false positives.
    – Explore how KLEE, SAGE, SpecExplorer could play a role in eliminating vulnerabilities like Shellshock…
    – Specifically explore testing with virtual test parameters like the testing that might be done for a “find” function with two arguments, string and file name.
– It’s not clear that Shellshock would have been appropriate for automated testing. Bash was executing code that did what was intended: the problem is that it was invoked in (remotely-triggered) contexts that were unintended. Neither Heartbleed nor Shellshock is appropriate for traditional generational fuzz testing; both would require taint analysis and a dataflow rule for the triggering condition(s).
    – For Shellshock, the triggering condition was the appropriately-formatted environment variable. The issue that made Shellshock dangerous was that there were circumstances under which remote users could specify values that were turned into environment variables by applications that ended up invoking Bash. In a local environment, it was a non-issue: a clever way to execute commands Bash would have let you execute in a more straightforward way.

    For such a “find” function we might define the following set of test parameters and values in an input table/model:
    String length: {0, 1, 1..file_length, >file_length}
    Quotes: {yes, no, improperly formatted quotes}
    Blanks: {0, 1, >1}
    Embedded quotes: {0, 1, 1 escaped, 1 not escaped}
    Filename: {valid, invalid}
    Strings in command line: {0, 1, >1}
    String presence in file: {0, 1, >1}
    Similarly, for Shellshock, perhaps the same approach could be used for testing shell commands, with test parameters that include things like environment variable, with values such as numeric, alpha, function etc … in an input table/model to capture triggering mechanism(s)?
    – ENV variable: {numeric, alpha, function}
    – As I understand it, for CGI scripts the ‘Shellshock’ issue occurs when the defined environment variable is a function AND the REMOTE_USER variable is set to an account that is present on the host…
    – It seems reasonable that testing could also include a test parameter such as “remote-user-present” with values “yes” and “no”. An input model that included these two test parameters, plus the various configuration options, could presumably find the problem. As always, it depends on the input model and test parameters defined, but these seem reasonable, and pretty basic for testing. The test parameters and input model might also be re-used in testing a variety of shell commands to possibly discover other vulnerabilities…
    – Shellshock is triggered by a specific 4 byte sequence at the beginning of the value of an environment variable…
    – in this case, that 4 byte sequence wouldn’t, by itself, do much … an attacker then needs to have syntactically valid bash input after it, and in most of those cases, nothing would happen…
    – an attacker would have needed to provide specific syntactically valid input that would cause undesired output … the most obvious type of which would have been a segfault…

    Testing Alternatives:
Unfortunately, network-based scanning for vulnerable ShellShock servers is nowhere near as easy as identifying the Heartbleed servers, since the triggering of execution of the bash shell is usually very specific to each application. Even to effectively scan HTTP servers, one needs to know the path to all of the CGI scripts that are dependent on bash and sometimes even the specific GET or POST parameters that need to be supplied to the script in order to trigger the vulnerability. We have preloaded the scanner with almost 400 common CGI paths that will be attempted during the full scan and have allowed the import of additional paths to test custom or less popular CGI applications.

    The scanner works by sending an HTTP GET request to each pre-configured CGI path of the scanned target with the following headers:

    Cookie: () { :; }; echo -e "\r\n\r\n"
    Referer: () { :; }; echo -e "\r\n\r\n"
    User-Agent: CrowdStrike ShellShock Scanner/1.0
    Test: () { :; }; echo -e "\r\n\r\n"

    When the CGI script launches Bash with the supplied environment parameters, it should trigger the execution of the echo command on a vulnerable system. With most scripts, the random string in the output of the echo command will be sent back in the body of the HTTP response, allowing the scanner to detect it and deem the system vulnerable. We deliberately picked the innocuous echo command as the one to execute by the scanner so as to minimize the chance of the scan doing anything harmful to the vulnerable target.
    Please note that even a full internal and external IP range scan of your network will not provide you with a complete assurance that you are not vulnerable to Shellshock. In addition to the limitations of scanning CGI applications, this scanner is not able to determine the vulnerability of SMTP servers or DHCP clients to the bug. Nor is it able to be used to test for privilege escalation vulnerabilities via SSH or on local Unix and OSX systems. It is still paramount that you apply patches across your entire population of systems that utilize bash shell as soon as possible.
    An indirect approach of analyzing program use of environment variables that could be leveraged across a security boundary is a possibility but I don’t have a high degree of confidence it would generate much other than a catalog of things to look at when analyzing other programs that execute child processes. For example if we found that ‘/bin/X’ could be crashed via environment input, then you might want a tool that scans source for any calls to X as a child process and a secondary tool that determines if external input can influence the environment at the time of the execution of X. Again, this is falling back to an implementation class bug that would throw an exception and be discoverable through traditional fuzzing. In the case of Bash even if we had a tool that did the above, we would not have found Shellshock because there is no defined exceptional condition (oracle problem).

    The main problem here is to identify the exceptional condition and formulate a model that is appropriate for dataflow analysis to detect the condition. Then depending on the constraints and the specific target, a static or dynamic approach may be taken to analyze for that condition. I think of this problem similar to how I’d approach trying to find SQL injection, which has been covered fairly well in .NET due to its inherent ability to provide dataflow analysis and reflection. This is the layer of abstraction you want to be looking at for problems like this as opposed to generational input fuzzing.

    – Change the fuzzer to increase coverage based on combinatorial coverage measurements…
    – Employ combinatorial testing for testing the triggering mechanisms surrounding ‘Shellshock’…
    – Learn more about Shellshock and put together an input model like that described in the attached(test-parm-partition~)…
    – Continue running the AFL fuzzer on latest patched versions and measure combinatorial coverage achieved…
