DDoS, or Distributed Denial of Service, is a cyber-attack in which an attacker tries to push a computer system that provides a service, such as a website, beyond the limits of its performance, generally working on one of its input parameters until it is no longer capable of delivering the service. In practice, such an attack often results in a significant loss of money, caused by the inability of the affected company to profit from its online business. These attacks are usually carried out by sending many request packets against the targeted service, saturating its resources and making the system “unstable,” thereby preventing legitimate requests from reaching their destination. They can be carried out through individual or distributed real sources (direct attack), where the attackers interact directly with the victim system (here identified as the “first-level victim”), or through the use of third-party systems, where the third parties are identified as “second-level victims” and the target service as the “first-level victim.”
The common tactics used to defend against this threat are often limited to the adoption of oversized network bandwidth (relative to estimates of peacetime workload) and the deployment of network devices such as firewalls and/or IPS (Intrusion Prevention Systems). As many companies have already experienced, these practices are often insufficient to ensure a good level of protection, which translates, to the benefit of the attacker, into a more or less prolonged downtime of the targeted service.
However, it is possible to specify a set of “best practices” that can be used to minimize the potential effects of a large-scale DDoS attack against our network, as well as attacks that target application flaws or protocol weaknesses. This document will not focus on how to mount a (D)DoS attack, but on how to defend against one through a series of “best practices” to adopt. To be clear, the single “0-day Nuke Packet” cannot be predicted by this document.
The birth of (D)DoS attacks can be placed at the birth of the Internet itself. Over the years, this type of attack has undergone big changes. Initially, the majority were aimed at saturating network bandwidth and, generally, targets were chosen randomly by attackers (attack for fun). Today, the statistical trend predominantly shows mixed types of attacks conducted at the same time against a single target (such as SYN Flood, Reflected DNS, ICMP Echo Request, and HTTP GET Flood), and sometimes, with a really small volume of traffic, only a few packets of specific application types that mimic legitimate traffic are sent.
The motivations behind them have also changed. Today, the reasons why someone performs (D)DoS attacks are varied. They can be used as a threat in order to extort money from organizations that are not able to cope with them, or as a means of protest for activist groups (see “Anonymous” for more details, and the tendency to use crowd-sourcing to perform DDoS).
In addition, it’s important to consider the evolution of the equipment in the hands of the “bad guys.” In the past, the technical skills required to accomplish a DDoS attack were certainly more demanding, and presupposed a good knowledge of the network and its “weak points”; the supply of home Internet connections was also quite limited. Today, even individuals with minimal technology skills can orchestrate a (D)DoS attack. Currently, a botnet capable of a 10-100 Gbps attack can be rented online for 24 hours for about 300 dollars, while in the past botnets were for the almost exclusive use of those with the knowledge and technological skills necessary to create and manage them.
The botnets’ technical details have also undergone considerable mutation over the years. One of the most widespread types of botnet is based on the IRC protocol. Called the “old school botnet,” it has a big, single point of failure: the IRC server. Once the IRC server is shut down, the attacker loses control of the bots. In 2007, we witnessed the birth of a new type of malware specifically designed to perform DDoS attacks using the P2P protocol. This is the same protocol that many programs use to download music and movies; due to its decentralized architecture, tracking and containment operations are much more difficult than against a centralized network architecture.
Ineffective Tactics of Defense
Generally, more and more organizations are becoming concerned about the increasing threat of (D)DoS attacks, yet only a few of them have adopted effective and efficient countermeasures. Often, the solutions they field cannot effectively combat the phenomenon because they are unable to mitigate the threat before it reaches their network. These solutions are usually limited to the following:
Over-provisioning of bandwidth
Over-provisioning of bandwidth is one of the most commonly adopted anti-DDoS measures. Some companies even request from their ISP a level of network bandwidth 100% higher than their realistic usage projections. However, it is not difficult to find resources for attacks that are capable of saturating even faster connections. Bandwidth-based DDoS attacks today can deliver more than one million packets per second, so even well-provisioned networks can be knocked down. In addition, there are other points to consider: over-provisioning of bandwidth cannot in any way be a remedy for application-layer attacks, and is therefore an ineffective solution that can only limit threats at the network layer.
Adoption of perimeter firewalls
With current technologies, perimeter firewalls are not enough to manage a large-scale attack carried out by botnets or reflectors. Using a firewall to mitigate (D)DoS attacks often results in a CPU utilization spike and the rapid consumption of its memory resources. It’s easy to imagine that, when this happens, the result is the complete unavailability of the service we are trying to protect. Firewalls also lack any ability to notice anomalies in network traffic (ADC – Anomaly Detection Capabilities).
Intrusion Prevention / Detection System
An IDS is often located behind the perimeter firewall. This type of device is designed to inspect every single packet in transit, looking for signatures that might indicate an attack in progress (Intrusion Attempt). Like the IPS, it is not designed to handle large volumes of traffic. Using these devices can mitigate DDoS attacks, but even then it would certainly degrade their performance and, consequently, the availability of the services we are trying to protect. IPS devices are able to block certain types of DDoS attacks on the basis of their signature databases. However, they are limited in the number of simultaneous TCP sessions and in the amount of bandwidth they can handle.
Routing to Black-Hole
Black-hole routing an IP address, or one or more subnets of IP addresses, can help a lot in the mitigation of DDoS attacks, but it may also discard legitimate communications together with malicious ones. This solution therefore often leads to a partial economic loss caused by the inability of some legitimate clients to use the service.
Black-holing is usually done with the help of an ISP. As an example of “mitigation through black-holing,” a customer receiving about 20 Gbps of incoming DDoS traffic decides to black-hole any communication from foreign IP addresses, temporarily losing, in this way, the ability to provide its services outside its national borders.
Relying on an ISP without a clear agreement
Some organizations rely on a kind of “tacit agreement” with their ISP. They are convinced that the ISP itself will provide some sort of “malicious traffic mitigation,” and that therefore the problem is not theirs. In reality, many ISPs can run a very effective DDoS mitigation service, but in order for them to provide it, the parties need to agree on the details: service level agreements (SLA), the network bandwidth capability and limits of containment, the timing and methods of mitigation (reactive or proactive?), and the types of attacks the provider is able to counter, of which the customer should be informed. In the event of a massive attack, the ISP may still step in with an effective solution to mitigate bandwidth-based DDoS attacks, but it is not obliged to do so, and will surely give priority, if needed, to customers with a formal contract for a DDoS mitigation service.
Effective Tactics of Defense
As with many other branches of information security, awareness of what is happening is of paramount importance. At the basic level, in order to effectively mitigate a (D)DoS attack, it is very important to know where and what to watch. It is therefore very important to create and develop a control center that allows you to check for anomalies across your entire network traffic at any time. For this reason, a baseline of normal network activity should also be established, so that the presence of malicious traffic can be spotted much faster. This baseline should capture exactly which protocols are usually used in incoming and outgoing communications (HTTP/S, SMTP, FTP, etc.) and which events occur on specific days or dates (for example, an increase in outgoing traffic for backup activities). It’s also useful to generate statistics about the historical evolution and trends of the network to protect, and to compare them with the current pattern.
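As a minimal sketch of this baseline idea, the following compares current traffic readings against statistics collected in peacetime; the sample values (packets per second) and the three-sigma threshold are illustrative assumptions, not prescriptions:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Compute mean and standard deviation of peacetime traffic samples
    (e.g. packets-per-second readings taken at fixed intervals)."""
    return mean(samples), stdev(samples)

def is_anomalous(observed, baseline_mean, baseline_std, threshold=3.0):
    """Flag a reading that deviates more than `threshold` standard
    deviations from the peacetime baseline."""
    return abs(observed - baseline_mean) > threshold * baseline_std

# Peacetime packets-per-second samples collected during normal operation.
peacetime = [1200, 1150, 1300, 1250, 1180, 1220, 1270, 1210]
m, s = build_baseline(peacetime)

print(is_anomalous(1240, m, s))   # within the normal range
print(is_anomalous(95000, m, s))  # flood-level spike
```

A real control center would keep separate baselines per protocol and per time-of-day, as the text suggests, rather than a single global one.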
Situational awareness also means staying informed about global cyber-threats, so that we can evaluate them in advance and face them in the best way when they arise. Be sure to develop a specific alerting and reporting system, especially for those attacks that are not volume-based. We need to be able to collect detailed data about the traffic in transit and to quickly generate reports and statistics about protocols and traffic summaries. In this regard, it is important to have the cooperation of capable security analysts and researchers with real hands-on experience in the field, who can quickly differentiate threats from legitimate traffic and change the defensive strategies if necessary.
Clear Incident Response Process
In three letters, this means an SOP (Standard Operating Procedure) for the management of security incidents. The development of the SOP should identify the systems vital to our business and, on this basis, develop specific contingency plans, which should then be tested against short-, medium- and long-term downtime. We also cannot overlook the creation of a team able to deal with any incidents or emergencies (Emergency Response Team), plus a clear definition of escalation paths with the contact numbers of anyone who might be impacted by or interested in the anomaly. In summary, the ERT should establish procedures covering:
– Who should be notified during an attack?
– What data needs to be collected during an attack?
– What measures should be adopted to protect the infrastructure or service?
– What is the escalation path for critical decisions?
Finally, we should be aware that if we rely on an ISP for DDoS mitigation, the service may be provided through shared platforms, limiting the effective capacity to filter malicious traffic during simultaneous attacks on other customers, and consequently increasing the chances of a heavy “DoS” impact against the infrastructure we want to protect.
Layers of Protection
The main goal of (D)DoS mitigation is to drop any unwanted traffic while allowing legitimate traffic through. The best way to achieve this is to control traffic at different levels of inspection. This implies some slightly more technical considerations. Because we have to inspect packets for network-, transport- or application-level anomalies during communication, inspection must combine signature analysis with behavior analysis (based on statistics collected in peacetime) and, at the same time, through anti-spoofing and session-validation algorithms, maintain a low delay in the routing of legitimate communications even in the presence of large-scale attacks. In a nutshell, this translates into traffic control at multiple levels of the OSI stack: while the simplest attacks can be mitigated through filters at the network or transport layer, complex attacks require filters and anomaly inspection at the application level of the OSI model. To all this is added, as already mentioned, the ability to quickly apply effective traffic filters and to change mitigation strategies in a short time, which only prepared security personnel with extended field experience can guarantee.
Capability, Scalability and Flexibility
Capability, flexibility and scalability refer, in a nutshell, to the ability of the organization to ensure the proper operation of its infrastructure even under massive attacks. This includes bandwidth as well as the hardware processing power required to process the traffic traveling over the network. These capabilities, unfortunately, are often difficult to obtain for economic or organizational reasons. Regarding “capability,” for example, getting extra bandwidth to cope with attacks (and also to implement mitigation systems) often represents a significant economic investment, which is all the more daunting given that nobody can know in advance whether such an investment will still be enough to combat threats that grow more powerful over time.
It’s also important to have a good knowledge of the weaknesses of our infrastructure, because if we are the victim of a targeted attack, the attackers probably already know them. For example, it is useful to know at which point a network device (like a firewall) stops working under intense network traffic. In all cases, it is vital to make sure that the monitoring systems continue to function and report correct information during all phases of an attack, just as it is vital to create and maintain redundancy for critical services and applications.
Understanding Network-Layer Attacks
The following is not intended to be a complete list of network-level attacks and DDoS methods, but it quickly summarizes the common threats today:
TCP SYN Flood
This is one of a class of attacks that exploit the operating characteristics of stateful network protocols, which consume resources to maintain the state of legitimate communications. When a client attempts to establish a TCP connection to a server, the client first sends a packet with the TCP SYN flag set to “1” (active). The server acknowledges by sending a TCP SYN-ACK message back to the client. The client completes the establishment by responding with an ACK message. This process is called the “three-way handshake.” It’s possible to exploit this mechanism by sending multiple packets with the TCP SYN flag active (SYN Flood) while never sending the final ACK packet that completes the handshake. At this point, the server needs to allocate memory to store the information of all the half-open connections.
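The exhaustion mechanism can be illustrated with a toy model of the server's half-open connection table; the capacity and the spoofed addresses below are illustrative and not taken from any real TCP stack:

```python
class SynBacklog:
    """Toy model of a server's half-open (SYN_RECEIVED) connection table."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.half_open = set()

    def on_syn(self, client):
        """Handle an incoming SYN: allocate a slot, or refuse if full."""
        if len(self.half_open) >= self.capacity:
            return False           # backlog exhausted: new SYNs are dropped
        self.half_open.add(client)
        return True

    def on_ack(self, client):
        """The final ACK of the three-way handshake frees the slot."""
        self.half_open.discard(client)

backlog = SynBacklog(capacity=128)

# A flooder opens half-open connections from spoofed sources and never ACKs.
for i in range(200):
    backlog.on_syn(("10.0.0.%d" % (i % 256), i))

# A legitimate client now cannot even begin its handshake.
print(backlog.on_syn(("192.0.2.10", 40000)))  # False: service denied
```

Real stacks use timeouts and SYN cookies to reclaim slots; this sketch only shows why never-completed handshakes pin memory.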
This attack can easily be carried out by creating half-open connections via spoofing of the source IP address, or by ignoring the SYN-ACK packets received from the server. The consequence is that the server will soon exhaust the resources available to keep track of all these connection requests and will no longer be able to deliver the service correctly.
TTL Expiry Attack
The way routers handle TTL expiry can create a DoS condition against network equipment. When an exception packet is encountered, such as one with an expiring TTL value, a router expends a varying amount of effort to respond appropriately. Under massive attack conditions, where the number of TTL-expiry packets is large, high CPU utilization may result, leading to network instability. For this type of attack, it is recommended that organizations filter incoming packets with low TTL values on untrusted-to-trusted traffic.
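A minimal sketch of the recommended filter, with packets modeled as plain dictionaries; the MIN_TTL threshold is an illustrative assumption (in practice this is implemented as a router or firewall ACL, tuned to the network's diameter):

```python
MIN_TTL = 5  # illustrative threshold; must exceed hops remaining to any host

def allow_packet(packet, trusted_source):
    """Drop untrusted packets whose TTL is about to expire, so routers
    never burn CPU generating ICMP Time Exceeded replies for flood
    traffic. Trusted (internal) traffic is left alone."""
    if not trusted_source and packet["ttl"] < MIN_TTL:
        return False
    return True

print(allow_packet({"ttl": 2}, trusted_source=False))   # dropped
print(allow_packet({"ttl": 64}, trusted_source=False))  # forwarded
```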
Reflection Attacks
These attacks are difficult to trace because the flood packets are sent through intermediate reflection media. The attack involves sending packets to the reflection media with a spoofed source IP, so that their answers to these requests are reflected to the main target. In this case, the speed of the reflection media is very important. This method of carrying out DDoS attacks is now widely used because it has obvious advantages: first, the reflection media considerably increase the scope of the attack by amplifying the packets sent by the original source (sometimes > 8x); second, the reflection media provide anonymity for the original attacker’s network location. The picture below shows one of the most classic and dated reflection attacks, the smurf, which is good for understanding the logic of this threat. (The fraggle is a variant based on the UDP protocol.)
In a smurf attack, an ICMP Echo Request packet with the spoofed source address of the victim is sent to the subnet broadcast address of a network that is used as an amplifier. Today, smurf attacks are no longer a serious problem because prevention methods are widely deployed.
Another example of a reflection attack is TCP amplification. In this case, the attacker sends a TCP SYN packet with the spoofed source address of the victim to an arbitrary TCP server. The server then responds with a SYN|ACK packet that is sent to the victim because of the spoofed source IP. The amplification occurs because the server, never seeing the final ACK packet come back, retransmits the SYN|ACK a certain number of times, assuming the previous one was lost.
To improve the success of this type of attack, the reflectors must not receive an RST packet from the victim in response to their SYN|ACK. It’s easy to imagine, therefore, that this type of attack can be greatly aggravated by the adoption of firewalls that silently drop SYN|ACK packets without sending an RST. In general, the reflection mechanism is based on services that allow some sort of amplification of the original requests. For example, if an attacker can send a request to a proxy that relays requests to an address chosen by the attacker (the victim’s IP), and the proxy is programmed to resubmit those requests multiple times when no answer is received, the proxy can serve very well as an amplifier.
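The resulting amplification can be estimated with simple arithmetic. The packet sizes and retransmit count below are illustrative assumptions, since real values depend on the reflector's TCP stack configuration:

```python
def tcp_reflection_amplification(syn_size=40, synack_size=40, retransmits=4):
    """Bytes delivered to the victim per byte the attacker sends, when the
    reflector retransmits its unanswered SYN|ACK `retransmits` extra times."""
    delivered = synack_size * (1 + retransmits)
    return delivered / syn_size

print(tcp_reflection_amplification())  # 5.0
```

With no retransmissions the factor is 1.0 (pure reflection, no amplification), which is why firewalls that suppress the victim's RST make the attack worse: they keep the retransmission timer running.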
For the reasons mentioned above, the use of reflectors to accomplish DDoS attacks is very common today. One attack, however, is perhaps more widely used than any other because of its ability to guarantee very high amplification: the DNS reflection attack. In this type of attack, the attacker takes advantage of the fact that a small DNS query can generate a very much larger response. The essential condition for such an attack is DNS servers with recursion enabled: servers that provide recursive DNS responses to anyone, referred to as “open resolvers.”
In a DNS amplification attack, the open resolver plays the role of the source of amplification, receiving a small DNS query and returning a much larger DNS response. Combined (again) with spoofing of the source IP, this can mean large volumes of traffic directed towards the victim. The amplification factor here depends on the type of DNS query and on whether the DNS server supports sending large UDP packets. Generally, the attacker will forge a DNS request that generates response packets as large as possible. For example, an attacker could request all DNS records for a particular zone, ensuring a high level of amplification. It’s very interesting to note that the largest DNS responses tend to be those incorporating DNSSEC authentication information. DNSSEC adds security to DNS by giving DNS servers the capacity to validate responses. Its particularity in relation to amplification attacks is that it adds cryptographic signatures to DNS messages, making them grow in size. DNSSEC also requires EDNS0 support (Extension Mechanisms for DNS, RFC 2671); therefore, a DNS server that supports DNSSEC will also support large UDP packets in a DNS response.
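Back-of-the-envelope arithmetic makes the danger concrete. The query/response sizes, bot count, and query rate below are illustrative assumptions, not measurements:

```python
def dns_amplification_factor(query_bytes, response_bytes):
    """Amplification factor: response size over query size."""
    return response_bytes / query_bytes

def victim_traffic_mbps(bots, queries_per_sec, response_bytes):
    """Aggregate reflected traffic hitting the victim, in megabits/s."""
    return bots * queries_per_sec * response_bytes * 8 / 1e6

# A ~60-byte query versus a DNSSEC-signed response carried in a
# large EDNS0 UDP payload.
print(dns_amplification_factor(60, 3000))    # 50.0x amplification
print(victim_traffic_mbps(1000, 100, 3000))  # 2400.0 Mbps at the victim
```

Even a modest botnet issuing small queries can thus reflect multi-gigabit floods through open resolvers.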
Distributed Reflected Attacks
If many attackers (with the use of a botnet, perhaps) can spoof a source address, the victim can be quickly overwhelmed. Distributed reflected DoS attacks follow the same principle as those described above, but they are an exponentially larger threat. Although the techniques involved are just the same, the numbers of machines involved are decidedly different, and for this reason the malicious traffic is potentially much more destructive. This type of attack is also particularly difficult to trace, because the controlling computers are two layers removed from the packets received at the target site. As just mentioned, the attacker usually uses remote-controlled computers that send spoofed packets (with the victim’s IP) to one or more reflection media. Those packets are then sent back to the real victim.
The image above shows a clear example of this type of attack and, as we can see, it can be accomplished through a number of different services and protocols.
Direct UDP Flood
UDP is a very efficient protocol for conducting DDoS attacks, since it has no acknowledgement mechanism. In this attack, the attacker usually sends a large number of UDP packets against the target, spoofing the source IP to keep his network location anonymous and to ensure that the excess ICMP return packets do not come back to him.
Packet Fragmentation
Packet fragmentation can be used in two distinct ways: as an evasion of IDS/IPS detection, and as a DoS mechanism. Fragmentation is often used to exhaust system resources as the target tries to reassemble the packets.
Against these types of threats, solutions become more effective the closer the mitigation is to the source of the attack. For this reason, the solutions offered by ISPs today are quite effective in this regard. However, it is possible to mention a few “on-site” mitigation methods, among which are:
TCP SYN Proxy: A mechanism to verify that a client actually sends the final ACK of the handshake. The service is usually provided through a gateway that resides in front of the server and is responsible for forwarding only legitimate connection requests to it.
Connection Limiting: Too many connection requests can overload the server. It is therefore necessary to limit the number of requests accepted per IP.
Dynamic Filtering: Dynamic filtering is performed by identifying misbehavior and punishing it for a short time (rejecting, for example, subsequent communications), so that the punishment is not prolonged over time.
Anomaly Recognition: By performing anomaly checks on headers, rate and state, we may be able to filter out most attack packets.
Black-Listing: Black-listing is very useful to lower the chance of impact of a DDoS attack. If there are ranges of IPs to which we are not interested in providing a service, it is good to keep them in a static “black-list.” Otherwise, alongside the methods reported above, it’s useful to implement dynamic “black-listing” mechanisms.
Dark Address Filtering: Dark addresses are IP addresses not yet assigned by IANA. Any packet coming from or going to a dark address is a sign of spoofing and must be blocked.
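The connection-limiting, dynamic-filtering, and black-listing ideas above can be sketched together in a few lines. The thresholds, ban duration, and addresses here are illustrative assumptions; a production implementation would live in the network device, not in application code:

```python
import time
from collections import defaultdict

STATIC_BLACKLIST = {"203.0.113.50"}   # ranges we never want to serve
MAX_CONN_PER_IP = 20                  # connection-limiting threshold
BAN_SECONDS = 60                      # short-lived dynamic ban

conn_count = defaultdict(int)         # connection attempts seen per IP
banned_until = {}                     # IP -> timestamp when its ban expires

def accept_connection(ip, now=None):
    """Connection limiting plus short-lived dynamic black-listing:
    an IP exceeding its connection budget is refused for BAN_SECONDS,
    then gets a fresh budget so the punishment is not prolonged."""
    now = time.time() if now is None else now
    if ip in STATIC_BLACKLIST:
        return False
    ban = banned_until.get(ip, 0)
    if ban > now:
        return False                  # still serving its dynamic ban
    if ban:                           # ban just expired: reset the budget
        banned_until.pop(ip)
        conn_count[ip] = 0
    conn_count[ip] += 1
    if conn_count[ip] > MAX_CONN_PER_IP:
        banned_until[ip] = now + BAN_SECONDS
        return False
    return True

# An aggressive client burns through its budget and is temporarily banned,
# while a normal client is unaffected.
for _ in range(25):
    accept_connection("198.51.100.7", now=0)
print(accept_connection("198.51.100.7", now=30))  # False: temporarily banned
print(accept_connection("192.0.2.1", now=30))     # True
```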
Understanding Application-Side Vulnerabilities
As mentioned earlier, (D)DoS attacks have evolved and continue to evolve. Simple, massive floods at the network layer have become much more complex and difficult to detect, touching the application layer of the OSI stack. In complex and targeted attacks, it is not unusual for someone to study and analyze the exposed applications in depth, understand their limits and operating logic, and finally launch an effective attack. Defense against these threats requires specific measures. In this case, it is essential to have good situational awareness. We need to know exactly what a specific application does and how it does it, as well as the operating limits beyond which the application becomes unstable or goes offline. This is useful for customizing the handling of the data stream directed towards it.
An application, furthermore, can be vulnerable to various kinds of attacks, such as buffer overflows, improper input validation, external conditions capable of causing infinite computation cycles (development errors), null pointer dereferences, etc. A good mitigation strategy requires a deep understanding of these issues, because attacks usually aim to exploit these vulnerabilities in addition to those of the protocol or service. These attacks, furthermore, can appear to be legitimate application-layer traffic and are not easily detectable. The ability to defend against them is directly proportional to the knowledge and preparation the security staff can put in place. Obviously, information obtained from network devices such as IPS (Intrusion Prevention System) or DPI (Deep Packet Inspection) can help a lot in determining that an attack is taking place and, depending on its nature, in deciding which mitigation strategies to adopt.
Focusing on the web, the commonly used mitigation techniques are rate limiting, prioritization, load balancing, aggressive session aging, CAPTCHA controls, two-factor authentication, W.A.F. (Web Application Firewall), and so on. Upon adoption of these defenses, the next step would be to individually deny incoming communications from sources for which we have already identified abnormal behavior (when possible). Still, it’s good to keep in mind, from a slightly more generic point of view, that there are many variables able to cause a “DoS” condition. Some of these, for example, are so-called “exception errors” that can crash our service, or “data destruction” through a SQL injection vulnerability (; DROP TABLE articles; —uhm?! this does not look very good for my e-commerce!).
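As an aside on the SQL injection example above, a minimal sketch of why parameter binding neutralizes the DROP TABLE payload; it uses Python's built-in sqlite3 module, and the schema is purely illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
db.execute("INSERT INTO articles VALUES (1, 'widget')")

# A hostile "title" trying to smuggle in a DROP TABLE statement.
user_input = "widget'; DROP TABLE articles; --"

# Parameter binding treats the payload as an opaque string value,
# never as SQL, so the injected statement can never execute.
rows = db.execute("SELECT * FROM articles WHERE title = ?",
                  (user_input,)).fetchall()
print(rows)  # [] - no match found, and the table is still intact

print(db.execute("SELECT COUNT(*) FROM articles").fetchone()[0])  # 1
```

Had the query been built by string concatenation, the same input would have attempted to destroy the table; the parameterized form turns the attack into a harmless lookup.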
Application security must therefore be considered in a generic way (based on a good development model and best practices), and only then should we focus on protocol weaknesses (HTTP/HTTPS, for example) to combat attacks like Slowloris or Slow POST and other application-protocol-based attacks. Finally, there is no single solution for the mitigation of application-side attacks, and there are many variables and considerations to take into account (web server, development technology, databases, OS, service logic, etc.) that cannot be treated exhaustively in a single document. However, incorporating high security standards in the development cycle and improving the process for deploying security patches is in any case the correct route to considerably reduce the probability that a DoS condition occurs.
As mentioned in other sections of this document, the stability of our network, in relation to today's (D)DoS threats, is closely linked to management processes and awareness of what could happen. Although investment in resources and technologies can sometimes seem unnecessary, prolonged downtime of our services (and of our online business) will almost certainly cost more, both economically and in terms of the organization's reputation.