
LAYER 7 DDOS ATTACKS: DETECTION & MITIGATION

December 4, 2013 by Dimitar Kostadinov

Introduction: Initial Detection/Mitigation Challenges

Before we go to the main topic of this article, let us take heed of two factors that hamper the buildup of effective defenses against Layer 7 DDoS attacks. First, the lack of knowledge about this matter leads inexperienced IT security staff to take dubious and plainly inappropriate measures (see Diagram 4 below). Over-provisioning of bandwidth is not very expedient when it comes to dealing with application-layer DDoS attacks and their usually low appetite for bandwidth destruction (Verisign, 2012). Instead, the defense mechanisms here should gather strength around other vital resources: memory, processing power, disk space, I/O, and upstream bandwidth (Abliz, 2011).

Second, imagine the typical picture of hectic app creation from the developer's and vendor's point of view: often a low budget, pressing deadlines, and many different tests (e.g., functionality, load, security). It hardly comes as a surprise, then, that the quality of the applications themselves, and especially their security, is maimed (Surace, 2013). These two factors, in concert with others, create weaknesses that black-hat hackers, inter alia, may exploit. In a situation like that, detection and mitigation seem like the only parts of a common but thin bulletproof vest protecting the vital organs the industry needs to function well.

DETECTION

Why Is It Difficult?

Prima facie, there are several reasons why a covering smoke screen arises when an assault takes place:

  • Network DDoS detection methods are unable to be on red alert for application DDoS attacks, since those attacks occur at a different layer (Prabha & Anitha, 2010).
  • Layer 7 DDoS raids based on HTTP requests are most likely not to be detected by TCP anomaly mechanisms, because the attack rides on successful TCP connections (Prabha & Anitha, 2010).
  • One thing leads to another: establishing TCP connections necessitates legitimate IP addresses and IP packets, which in turn handicaps the anomaly detection contrivances for IP packets (Prabha & Anitha, 2010).
  • A further characteristic is the absence of intense traffic spikes. Layer 7 DDoS consumes far less bandwidth, requires lower attack capacity, and produces traffic that appears benign, leading victims to suspect more trivial causes such as system failure or application issues (Manthena, 2011).
  • To continue the vicious spiral, the relatively calm traffic picture during most application-layer DDoS attacks makes it difficult to distinguish them from so-called "flash crowds," i.e., sudden increases in requests made by legitimate users. Moreover, Layer 7 DDoS attacks resemble normal web traffic, as one security expert explains: "Layer 7 attacks are tough to defeat, not only because the incremental traffic is minimal, but because it mimics normal user behavior" (Chickowski, 2012, par. 4).

Traffic Monitoring

Not only are DDoS attacks nowadays delivered via infected machines and proxies, but attackers also often utilize highly automated tools. Therefore, an analysis of telltale signs is advisable, such as headers that imitate normal browser headers but show a high rate of unnatural user agents. In addition, the header order may be abnormal and out of line with usual browser behavior (Imperva, 2012).

Yet perhaps the most important thing to remember is that a DDoS offensive, let's say of the HTTP type, usually has a string or pattern (even if not easily discernible) that can be used to sort attacking requests from legitimate ones. This might be, for instance, identical user agents employed by the attacking script, a common GET URL or POST request, or other shared HTTP header parameters (http://www.tech21century.com, 2012). In this regard, the general belief among DDoS security firms is that "most attack tools have some unique HTTP characteristics that can be extracted and provide a basis for detection" (Imperva, 2012, p. 15).

It is important, however, to stress that only an analysis of such attacking HTTP requests in the context of the entire session (IP/session/user; URLs, headers, parameters) may disclose the big picture of an act that actually constitutes a DDoS attack (Imperva, 2012). A minimal sketch of the signature idea follows.
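By way of illustration (and not as a reconstruction of any vendor's engine), the sketch below groups already-parsed requests by their (User-Agent, URL) pair and flags any pair that accounts for a disproportionate share of traffic; the tuple format and the threshold are assumptions:

```python
from collections import Counter

# Each entry: (source_ip, method, url, user_agent) parsed from an access log.
# The parsing itself is assumed to happen elsewhere; the format is hypothetical.
requests = [
    ("10.0.0.1", "GET", "/index.php", "Mozilla/5.0 (Windows NT 6.1)"),
    ("10.0.0.2", "GET", "/index.php", "WordPress/3.5"),  # scripted-looking UA
    # ... thousands more entries in practice
]

def extract_signatures(requests, share_threshold=0.4):
    """Flag (User-Agent, URL) pairs that account for a suspiciously large
    share of all requests -- a crude stand-in for the 'unique HTTP
    characteristics' the vendors describe."""
    combos = Counter((ua, url) for _, _, url, ua in requests)
    total = sum(combos.values())
    return [(combo, n) for combo, n in combos.items()
            if n / total >= share_threshold]

for (ua, url), n in extract_signatures(requests):
    print(f"possible attack signature: UA={ua!r} URL={url!r} ({n} hits)")
```

In a real deployment the same grouping would be run per session rather than globally, in line with the session-context point above.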

Distinguishing Between Legitimate Traffic and Layer 7 DDoS Attacks

As already mentioned, distinguishing between legitimate and malicious requests is the master key that would unlock our Enigma code, subsequently leading to positive DDoS detection and perhaps proper mitigation. The usual course is comprehensive traffic monitoring based on predefined traffic behavior profiles. These profiles are created as a result of repeatedly measured user-web site/application interactions, shaped and stored as statistical data (Miu et al., 2013). The measurements in question are:

  • Measurements Based on Connections to Server

Basically, every server can follow and gather information on several units of measurement, called statistical attributes for the purposes of our discourse. The logic is simple: These statistical attributes are accumulated gradually and compiled into reference profiles that may prove handy when traffic looks suspicious. The mold thus created is used to rule out abnormal connections that tend to exploit vital applications or server resources. Statistical attributes of importance are:

Request Rate and Download Rate

The number of requests or bytes downloaded by users within a given time interval.

Uptime and Downtime

Uptime measures the duration of a user session, that is, from the moment the user connects to the server until the moment this connection is terminated. Conversely, downtime is the interval during which a user remains in latent mode, in other words disconnected, until he is back online and in touch with the server. Statistical data showing the proportions of individual protocols, average session lengths, and the frequency distribution of TCP flags may also help fill out profiles (Miu et al., 2013).

Browsing Behavior

This value normally hinges on 1) the structure of the website; 2) the behavior of users. Structurally speaking, most sites consist of many web pages organized hierarchically via hyperlinks. Hence, page popularity, hyperlink access, and clicks (see Diagram 1), for example, might be statistical information worthy of remark.

Demographic Profiling

Undoubtedly, visitors from various parts of the world display heterogeneous behavior. Likewise, particular network destinations seem to cater to a specific group of clients. A surge of visitor traffic coming from Poland, for instance, to a website written in Russian would ring a bell in the DDoS department (Miu et al., 2013).

Once enough information has been gathered, the server can establish the cumulative distribution function (CDF) pertinent to normal users for all of the statistical attributes enumerated.
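To make the idea concrete, here is a minimal sketch that builds an empirical CDF for one attribute, requests per minute, and scores new observations against a 99th-percentile cutoff; the baseline data and the cutoff are illustrative assumptions:

```python
import bisect

def build_cdf(samples):
    """Return F(x) = empirical fraction of training samples <= x."""
    sorted_samples = sorted(samples)
    n = len(sorted_samples)
    return lambda x: bisect.bisect_right(sorted_samples, x) / n

# Requests-per-minute observed for normal users during profiling (assumed data).
baseline_request_rates = [3, 5, 4, 6, 2, 5, 7, 4, 3, 5, 6, 4]
cdf = build_cdf(baseline_request_rates)

def is_anomalous(rate, percentile=0.99):
    """A client whose request rate sits at or beyond the 99th percentile of
    the normal-user CDF is treated as suspicious."""
    return cdf(rate) >= percentile

print(is_anomalous(5))    # False: a typical rate
print(is_anomalous(120))  # True: far beyond anything in the profile
```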

  • Measurements Based on User-Application Interaction Monitoring

Another origin of statistical data that may help to develop a database with which to monitor traffic anomalies and predict attacking trends is the user-application interaction. Here the statistical attributes are:

Access Rate Over Time

This corresponds to uptime and downtime in the server section.

Rate per Application Resource

A counterpart to browsing behavior, except that, since the application login page is a desirable target, the surveillance effort would probably be concentrated there.

Geographical Locations

This is a reference figure that has the same function as demographic profiling. Again, web applications usually service a particular group of users and web administrators may collect their users' locations.

Response Latency

An indicator of when an application resource is being exhausted.

Rate of Application Responses

Depending on the way the application is utilized, the rate of application responses such as 500 (application errors) and 404 (page not found errors) will vary. A minimal error-rate monitoring sketch follows this list.

(Katz, 2012)
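To illustrate the last attribute, the sketch below tracks the share of error responses over a sliding window and flags drift from a profiled baseline; the window size, baseline, and tolerance are invented values, not figures from the cited sources:

```python
from collections import deque

class ErrorRateMonitor:
    """Track the fraction of 4xx/5xx responses over the last N responses
    and flag deviations from a profiled baseline."""

    def __init__(self, window=1000, baseline=0.02, tolerance=4.0):
        self.statuses = deque(maxlen=window)
        self.baseline = baseline      # profiled normal error fraction (assumed)
        self.tolerance = tolerance    # alert above baseline * tolerance

    def record(self, status_code):
        self.statuses.append(status_code)

    def error_rate(self):
        if not self.statuses:
            return 0.0
        errors = sum(1 for s in self.statuses if s >= 400)
        return errors / len(self.statuses)

    def anomalous(self):
        return self.error_rate() > self.baseline * self.tolerance

monitor = ErrorRateMonitor()
for status in [200] * 900 + [404] * 100:   # a burst of 404s, as in a GET flood
    monitor.record(status)
print(monitor.error_rate(), monitor.anomalous())  # 0.1 True
```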

  • Measurements Based on Resource Consumption

Other data sources for calibration are the state, allocation, and consumption of resources such as CPU, memory, application threads, application states, connection tables, and more. Real-time awareness of resource consumption can be critical, since low-and-slow DDoS attacks strive to deplete exactly these resources. For example, numerous prolonged, relatively "idle" open network connections that heavily drain resources might indicate that the server is under a connection-table misuse DDoS attack, such as one typical of Slowloris. Likewise, an application that lingers over a task that should normally complete in an instant might indicate a R.U.D.Y. attack (Kenig, 2013). A minimal monitoring sketch follows.
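A sketch of that real-time awareness, assuming the server keeps per-connection records of when each connection was opened and how many bytes it has delivered; the record format and the thresholds are illustrative:

```python
import time

# Hypothetical per-connection records the server would maintain:
# (opened_at, bytes_received). Values below are illustrative.
now = time.time()
connections = [
    (now - 3,   5_000),   # normal, short-lived connection
    (now - 400, 120),     # open ~7 minutes, almost no data: suspicious
    (now - 600, 90),      # likewise
]

def count_idle_connections(connections, now, min_age=300, max_bytes=512):
    """Count connections older than min_age seconds that have delivered
    fewer than max_bytes -- the 'prolonged, idle' pattern described above."""
    return sum(1 for opened_at, received in connections
               if now - opened_at > min_age and received < max_bytes)

suspicious = count_idle_connections(connections, now)
if suspicious > 1:  # alert threshold is an assumption
    print(f"{suspicious} long-lived idle connections: possible Slowloris")
```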

Anomaly Detection

Now that the profiles are ready and the necessary databases are stacked up and available, the detection of Layer 7 DDoS will be based on any indexes that deviate from what is expected. See Diagram 1, which reads off DDoS attack traffic based on rate per application resource, browsing behavior attributes, and other relevant data in order to detect anomalies and discriminate between human beings and "good" (yellow) / "bad" (red) bots (Chai, 2013).

Diagram 1

Another example will shed some light on how this framework might successfully spot anomalies through the statistical attributes of demographic profiling and geographical location:

  1. Under normal circumstances, the IP address distributions of legitimate users and malicious actors do not match.
  2. Usually, the IP addresses of legitimate users are uniformly scattered across the Internet.
  3. On the other hand, the attacking IP addresses in most DDoS attack cases accumulate in zones and appear as clusters (see Diagram 2), owing to the fact that the adversary manages to hijack a number of PCs in the same LAN or area; for instance, capturing many computers deployed in a university after bringing down the central firewall installed on its gateway. A minimal clustering check is sketched after this list.
  4. Bear in mind that geographic profiling and IP address distribution screening are not a panacea because, even though it is rare, bots can be evenly distributed as well and/or use spoofed IP addresses (Beitollahi & Deconinck, 2011).

Diagram 2: IP address distribution before and after a DDoS attack
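The sketch below makes the clustering check from point 3 concrete: it measures the fraction of source addresses falling into the single busiest /24 prefix, on the stated assumption that legitimate users are spread out while hijacked machines bunch together; the prefix length and alert threshold are assumptions:

```python
from collections import Counter

def busiest_prefix_share(source_ips, prefix_octets=3):
    """Return (prefix, share): the /24 prefix holding the most source IPs
    and the fraction of all sources it contains."""
    prefixes = Counter(".".join(ip.split(".")[:prefix_octets])
                       for ip in source_ips)
    prefix, count = prefixes.most_common(1)[0]
    return prefix, count / len(source_ips)

# Illustrative source addresses observed during an attack window.
ips = ["203.0.113." + str(i) for i in range(1, 60)]
ips += ["198.51.100.7", "192.0.2.44", "192.0.2.91"]

prefix, share = busiest_prefix_share(ips)
if share > 0.5:  # threshold is an assumption
    print(f"{share:.0%} of sources sit in {prefix}.0/24: clustered, likely bots")
```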

Anomaly Detection via Algorithm and Blacklisting

With regard to other measurements from the statistical attributes pack, a rate-shaping algorithm that exerts surveillance over clients can be productive, assigning values, tokens, trust units, or whatever name this indicator goes by. Essentially, it would ensure that clients "request no more than a configurable number of objects per time period…If the client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist" (MacVittie, 2008, par. 10).
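A minimal sketch of such a scheme, assuming a sliding one-minute window, a 100-object budget, and a ten-minute ban; all three values, and the in-memory dictionaries, are illustrative choices rather than MacVittie's implementation:

```python
import time
from collections import defaultdict

MAX_REQUESTS_PER_WINDOW = 100   # configurable object budget (assumed value)
WINDOW_SECONDS = 60
BLACKLIST_SECONDS = 600         # ban duration (assumed value)

request_counts = defaultdict(list)  # ip -> timestamps of recent requests
blacklist = {}                      # ip -> time the ban expires

def allow_request(ip, now=None):
    """Count requests per IP within a sliding window; blacklist offenders."""
    now = now or time.time()
    if blacklist.get(ip, 0) > now:
        return False                      # still serving its ban
    recent = [t for t in request_counts[ip] if now - t < WINDOW_SECONDS]
    recent.append(now)
    request_counts[ip] = recent
    if len(recent) > MAX_REQUESTS_PER_WINDOW:
        blacklist[ip] = now + BLACKLIST_SECONDS
        return False
    return True
```

The server would call allow_request() once per incoming request and drop anything that returns False.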

Blacklisting

Generally speaking, blacklisting is a simple countermeasure, a kind of short-circuit mechanism, which blocks suspicious or overtly malicious users on the basis of their IP addresses. The ban, however, is not permanent, as IP addresses may be dynamically assigned and bot computers can be remediated. Whitelisting, conversely, preapproves traffic coming from specific IP addresses for a given period of time or amount of traffic volume. It is the digital equivalent of probation (Miu et al., 2013).

Progressive Challenges

CAPTCHA puzzles

These challenge-response tests are perhaps the most promising instrument in the fight against Layer 7 DDoS. Especially effective against brute-force attacks, a CAPTCHA authenticates and whitelists users after they respond successfully to a random personal challenge designed to defeat optical character recognition (OCR) software (Beitollahi & Deconinck, 2011). Nevertheless, CAPTCHA has its own negative qualities and weaknesses:

  • Several reports show that it is not user-friendly, i.e., it causes irritation due to its heavy-handed nature (Miu et al., 2013).
  • Its rather intrusive character condemns CAPTCHA as defective, so it is applied sparingly (Miu et al., 2013).
  • Although CAPTCHA is tough to beat, there are a number of ways to "challenge the challenge": breaking techniques include re-using a past session ID of a known CAPTCHA image, or a so-called labor attack, in which cheap third-party human labor is used to solve the puzzles (Beitollahi & Deconinck, 2011).
  • If the attacker overcomes the challenge, he receives whitelist status as a reward (Miu et al., 2013).
JavaScript Authentication

A portion of JavaScript code built into the HTML is sent to suspicious clients. Presumably, only clients with a full JavaScript engine can perform the computation.

HTTP Redirect Authentication

The server replies with an artificial HTTP 302 redirect; legitimate browsers follow it, which distinguishes them from automated tools.

HTTP Cookie Authentication

Here the cookie capability of the client's browser is put to the test; the sketch below combines this check with the 302 redirect.
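Such checks amount to a few lines of middleware. The sketch below uses the real Flask API, but the cookie name and token scheme are assumptions: first contact earns a redirect plus a signed cookie, and only clients that follow the redirect and echo the cookie back reach the content.

```python
from flask import Flask, request, redirect, make_response
import hashlib
import hmac

app = Flask(__name__)
SECRET = b"rotate-me"  # signing key; a real deployment would manage this properly

def token_for(ip):
    # Tie the challenge cookie to the client IP so it cannot be replayed broadly.
    return hmac.new(SECRET, ip.encode(), hashlib.sha256).hexdigest()

@app.before_request
def challenge_unverified_clients():
    ip = request.remote_addr or ""
    if request.cookies.get("ddos_chk") == token_for(ip):
        return None  # cookie echoed back correctly: treat as a real browser
    # First contact: set the cookie and redirect to the same URL. Browsers
    # follow the 302 and resend the cookie; most attack scripts do neither.
    resp = make_response(redirect(request.url, code=302))
    resp.set_cookie("ddos_chk", token_for(ip))
    return resp

@app.route("/")
def index():
    return "content served to verified clients"
```

In practice this challenge would be applied only to suspicious clients, as the text above notes, since cookie-less legitimate clients would otherwise loop on the redirect.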

Equipment and Services for MITIGATION of Layer 7 DDoS Attacks

1. Non-Specialized DDoS Equipment and Services

Firewall, IPS, IDS

These protection methods have a role that has more to do with preventing intrusion attempts than DDoS attacks. For effective DDoS mitigation, an apparatus must possess a "big picture" capability to keep track of and analyze all sessions, not produce a sporadic session-by-session analysis, as this trinity does. To put it another way, these devices do not have anomaly detection capabilities (Radware, 2013).

One more nail in the coffin of these apparatuses is the fact that they are deployed in-line and too near to the server, so they can suffer from resource exhaustion themselves. Furthermore, Layer 7 DDoS raids leverage a firewall weakness: both legitimate and illegitimate traffic passes through ports opened as standard practice, such as HTTPS (TCP port 443) and HTTP (TCP port 80, exploited, e.g., by the Code Red worm) (Prabha & Anitha, 2010).

Because of the way they function, firewalls/IPS/IDS open a new connection in their connection tables for every malicious DDoS packet; in the end, this results in the depletion of the connection tables, leading, in turn, to denial of service for legitimate users (Radware, 2013). Indeed, these appliances accounted for one-third of the outage and bottleneck service disruptions in 2011 (Herberger, 2012).

Diagram 3

Devices Contributing to Availability Problems in 2011

Web Application Firewalls (WAFs)

Designed specifically to protect websites and web applications (tech creations such as JSP, PHP, Perl, ASP, ASPX, and other common gateway interfaces), this kind of technology operates in a fashion similar to IDS and IPS: it inspects both ingress and egress traffic to websites and information portals for signature threats, anomalies, and data leakage (for example, SQL information or credit card numbers), and it possesses engines that learn from normal traffic patterns (http://www.osisecurity.com.au).

Moreover, by comparison with standard firewalls, a WAF provides superior content filtering and granular control over incoming traffic, as well as advanced protection against malicious traffic that exploits each "allowed port" (Cobb). Despite all these pros, some security experts are skeptical about the effectiveness of this technology, citing as reasons the difficulty of coping with increasing latency, numerous false positives, and inappropriate network location (Jirbandey, 2013), (Radware, 2013).

Diagram 4

An illustration of how DDoS specialists are in short supply compared with general IT "polymaths."

2. Dedicated DDoS Protection Equipment and Services

On-Premise Hardware

Pros: Specially designed and dedicated to detecting and mitigating DDoS attacks, this type of appliance is usually deployed as the first device in the network, before the access router. The protection procured is automatic and immediate, and the apparatus is fine-tuned to filter malicious Layer 7 traffic well (Radware, 2013).

Cons: On-premise hardware is costly, requires trained security engineers, and depends on regular updating. Additionally, it cannot effectively handle volumetric attacks (Leach, 2013).

Cloud Mitigation Provider

Benefits

Usually provided by teams with great expertise in the field, this service has as its main advantage massive amounts of bandwidth in store. Cloud mitigation works by redirecting poisonous traffic via the Border Gateway Protocol (BGP) to the provider's scrubbing centers before it ever reaches the customer. There the traffic is "absorbed," "cleansed," and reinjected on its way to the customer (see Diagram 5).

According to some security experts, "cloud mitigation providers are the logical choice for enterprises' DDoS protection needs. They are the most effective and scalable solution to keep up with the rapid advances in DDoS attacker tools and techniques" (Leach, 2013, par. 14).

Diagram 5

Shortcomings

Owing to the fact that cloud mitigation is by nature devised on the basis of enormity and load protection, low-and-slow DDoS attacks occasionally manage to slip through the net. A matter of the cloud's desensitization to earthly problems, you might say. On top of that, only the inbound traffic ever gets inspected, which gives limited visibility (Miu et al., 2013).

Hybrid

As Radware suggests:

"Hybrid DDoS solution aspires to offer best-of-breed mitigation solution by combining on-premise and cloud mitigation into a single, integrated solution…The hybrid solution also shares essential information about the attack between the on-premise mitigation devices and the cloud devices in order to accelerate and to enhance the mitigation of the attack once it reaches the cloud" (Radware, 2013, p. 6).

Content Delivery Network (CDN)

CDN providers deliver website content on behalf of customers around the globe. Owing to the dispersed server distribution of CDN providers and the size of their infrastructure, it is vastly more difficult for DDoS attackers to overwhelm them (G.F., 2013). In terms of structure, a CDN provider might use tiered CDN servers or proxies to store and direct the content (Zadjmool, 2013).

Criticism

The problem with this approach is that backend servers typically trust the CDN unconditionally, making them susceptible to attacks spoofed as traffic from the CDN. "Ironically, the presence of CDN can inadvertently worsen a DDoS attack by adding its own headers occupying even more bandwidth" (Miu et al., 2013, p. 10).

Internet Service Provider (ISP)

As it seems convenient, some firms seek recourse to their ISPs for DDoS mitigation. However, they should take into account that, in general, ISPs do not specialize in DDoS mitigation and do not support cloud protection. The latter factor is important because many web applications are now split between cloud services and private business data centers. Furthermore, turning to an ISP is a feasible option only if your organization is not multi-homed, i.e., not using several ISPs (Leach, 2013).

Two Unconventional Techniques

Darknets

The specificity of this mitigation technique is that, unlike others, darknets should be set up before the DDoS event commences. To set up this preparatory defensive measure, a customer purchases a range of IP addresses, for instance 203.0.113.101-105. In this configuration, let's say that the 101, 102, 104, and 105 addresses are allocated to the web servers that support the customer's website, while 103 is not, because it is kept as a darknet (Access, 2012).

Without knowing about the existing darknet IP address, a malicious actor may direct offensive DDoS activities against the entire set. Consequently, any traffic that hits the darknet is malicious traffic. On account of this fact, security experts trying to alleviate the DDoS aggression can extract the client IP addresses hitting the darknet and blacklist them, which will eventually reduce the pool of attacking IP addresses and weaken the onslaught (see Diagram 6; a minimal extraction sketch follows the diagram).

Diagram 6
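A minimal sketch of the extraction step, assuming the documentation-range addresses from the example above and flow records reduced to (source IP, destination IP) pairs; the log format is an assumption:

```python
DARKNET_ADDRESSES = {"203.0.113.103"}  # the unused address from the example

def harvest_blacklist(flows):
    """Any source that touches the darknet address is, by construction,
    malicious: no legitimate service is ever pointed there."""
    return {src for src, dst in flows if dst in DARKNET_ADDRESSES}

flows = [
    ("198.51.100.7", "203.0.113.101"),  # normal traffic to a web server
    ("192.0.2.44",  "203.0.113.103"),   # hits the darknet: attacker
    ("192.0.2.44",  "203.0.113.101"),
    ("192.0.2.91",  "203.0.113.103"),
]
print(harvest_blacklist(flows))  # {'192.0.2.44', '192.0.2.91'} (order may vary)
```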

        "Lite" Sites

        To go "lite" after a service has been impacted by Layer 7 DDoS, inter alia, is a rather innovative option. This mode represents a static counterpart of the dynamic web content, which could be put into action into emergency situations to display an appearance of availability. It seems that this concept is particularly fruitful against application layer DDoS that seek to saddle dynamic web services (Zadjmool, 2013).

Mitigating Specific Layer 7 DDoS Attacks and Weapons

• HTTP POST DDoS

Rate-limiting is enforced by classifying and keeping a watchful eye on the speed and size of each request, subsequently limiting the number of extremely slow connections per CPU core and rejecting requests whose body exceeds the maximum allowable size (The OWASP Foundation, 2010), (F5 Networks, Inc., 2013). A minimal sketch of this enforcement follows.
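A minimal sketch of that enforcement at the socket level, assuming a connected socket and a known Content-Length; the size cap and minimum rate are invented values:

```python
import time

MAX_BODY_BYTES = 1_000_000   # maximum allowable body (assumed cap)
MIN_BYTES_PER_SEC = 512      # slower than this is treated as a slow POST

def read_body(sock, content_length):
    """Read a POST body from a connected socket, enforcing a size cap and a
    minimum transfer rate; raises on violation so the server can drop
    the connection."""
    if content_length > MAX_BODY_BYTES:
        raise ValueError("body exceeds maximum allowable size")
    received, start = 0, time.monotonic()
    chunks = []
    while received < content_length:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("client closed mid-body")
        chunks.append(chunk)
        received += len(chunk)
        elapsed = time.monotonic() - start
        if elapsed > 1 and received / elapsed < MIN_BYTES_PER_SEC:
            raise ValueError("transfer rate too low: likely slow-POST attack")
    return b"".join(chunks)
```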

• HTTP GET DDoS

First, IIS web servers, and those having timeout limits for HTTP headers, are usually not susceptible. Second, load balancers, reverse proxies, and Apache modules such as mod_antiloris might repel this kind of cyber attack. Third, measures such as "delayed binding"/"TCP splicing" are feasible alternatives as well (The OWASP Foundation, 2010).

• Low-Orbit Ion Cannon

Hackers utilizing this tool are forced to use anonymizers, because it otherwise passes on their real IP addresses. On this account, blocking all agent nodes associated with an anonymizing service (e.g., Tor) should be considered the first step, and not with regard to LOIC only (david b, 2012). In addition, "mobile LOIC pages can be reused against more than one target, so having a list of malicious referrers might also be beneficial" (Imperva, 2012, p. 12).

• Slowloris

Renowned for its low rate and long period between headers, Slowloris may be countered with load balancers or by switching to a Microsoft-based server platform, as MS IIS under normal circumstances is not vulnerable to this tool (Access, 2012).

• Slow Read

The proposal here is to configure the server to disregard connection requests having an abnormally small window size (Imperva, 2012).

For the sake of a thorough detection-mitigation defense, implementing the other anti-DDoS mechanisms described above, for example progressive challenges (CAPTCHA or JavaScript authentication), can contribute significantly to warding off these attacks and fortifying the overall defense of the targeted entity.

Conclusion

In spite of the long list of detection and mitigation measures against Layer 7 DDoS presented here, there is no universal cure, no magic glue that can stitch up all the security patches and holes in a given system. Besides, every particular case has its own specifics. Nevertheless, making the holes smaller is what these mechanisms are for. In the end, if the road to DDoS success is long and winding, a road to perdition, so to say, the wrongdoer might start to feel discomfort: sweaty palms, disheartenment, and paranoia that he will get stuck in one of the holes, falling prey to his own game.


Dimitar Kostadinov

Dimitar Kostadinov applied for a 6-year Master's program in Bulgarian and European Law at the University of Ruse, and was enrolled in 2002 following high school. He obtained a Master's degree in 2009. From 2008 to 2012, Dimitar worked in data entry and research for the American company Law Seminars International and its Bulgarian-Slovenian business partner DATA LAB. In 2011, he was admitted to the Law and Politics of International Security program at Vrije Universiteit Amsterdam, the Netherlands, graduating in August 2012. Dimitar also holds an LL.M. diploma in Intellectual Property Rights & ICT Law from KU Leuven (Brussels, Belgium). Besides legal studies, he is particularly interested in the Internet of Things, Big Data, privacy & data protection, electronic contracts, electronic business, electronic media, telecoms, and cybercrime. Dimitar attended the 6th Annual Internet of Things European Summit organized by Forum Europe in Brussels.