Security+: Technologies and Tools - Load Balancer [DECOMMISSIONED ARTICLE]
NOTE: This article reflects an older version of the Security+ Exam – please see the current Security+ Certification page for the most up-to-date information.
The Security+ syllabus is updated every three years. Each exam is identified by a code consisting of a sequence of letters and numbers; for example, SY0-401 is the most recently retired exam. With each revision, a number of changes are made from the previous exam to the current one. This article covers the changes leading to the current exam, SY0-501, focusing specifically on load balancers: how the topic is covered in the new exam and how that coverage differs from the previous exam.
Exam Changes Overview
Between the two exams, SY0-401 and SY0-501, there is a significant overall change in content. The new exam focuses on attacks, risk management, and hands-on skills using technologies and tools. As a result, the domains have been re-named and re-ordered to reflect cybersecurity trends as determined in the Security+ SY0-501 Job Task Analysis (JTA).
Under the previous exam (SY0-401), load balancers were covered in the Network Security domain, specifically under section 1.1, Implement security configuration parameters on network devices and other technologies, which accounted for 20% of the overall exam. They are now found in the Technologies and Tools domain (22% of the overall exam), under section 2.2, Install and configure network components, both hardware and software-based, to support organizational security.
Compared to the previous exam, the new exam ensures that candidates can explain the learned concepts by translating them to real-life problems. This emphasis represents a 21% increase over the previous exam.
Exam Changes Comparison
On any given day, a large organization with a broad client base may see a tremendous amount of traffic streaming in from around the world. How do such organizations handle massive amounts of traffic without causing a denial of service on their own infrastructure? Enter load balancers. A load balancer splits incoming traffic among multiple servers within the organization so that clients receive the same responses without service downtime. On March 1, 2018, the popular online version control code hosting platform GitHub was hit by the largest distributed denial of service attack to date, measuring an astonishing 1.35 Tbps. This is just one of many DDoS attacks targeting online platforms and organizations, and it is why organizations employ different mechanisms to deal with large volumes of traffic.
The following comparison between the recently retired exam and the new exam covers the changes made regarding load balancers.
The previous exam covered the various methods of distributing load across multiple clusters. Depending on the particular metrics, distribution could be based on:
- Load—For instance, how much traffic is allowed per server within a designated cluster?
- Content—For example, distribution could determine which cluster handles video, images, files, audio, etc.
Candidates were required to show how each of those methods was susceptible to attacks and how those attacks could be prevented. Let's say a couple of servers were vulnerable to the MS17-010 EternalBlue vulnerability. Attackers on the Internet could exploit one server to gain full access to the internal network and thus compromise the entire organization. Or consider an instance where attackers redirect traffic to malicious sites after compromising a load balancer; they could chain this attack into further attacks. Candidates were therefore required to demonstrate the ability to apply system patches, the latest hotfixes, and antivirus updates to secure the servers.
While the previous exam centered on examining load balancers to ensure they were secure, this exam takes things a step further. Candidates are now required to master how to configure load balancers for fault tolerance. This means that intelligent load balancers monitor the web server and database farms they communicate with to ensure they remain online and do not fail under a load they cannot handle. When a server becomes overwhelmed, the load balancer shifts traffic to less loaded servers.
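The fault-tolerance behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer: the server names and the health-tracking methods are hypothetical stand-ins for network health checks a real device would perform.

```python
import random


class FaultTolerantBalancer:
    """Minimal sketch of fault-tolerant traffic shifting.

    Server names and health-check logic are hypothetical; a real
    load balancer would probe actual back ends over the network.
    """

    def __init__(self, servers):
        # Track each back-end server and whether it is currently healthy.
        self.servers = {name: True for name in servers}

    def mark_down(self, name):
        # A failed health check takes the server out of rotation.
        self.servers[name] = False

    def mark_up(self, name):
        # A recovered server rejoins the pool.
        self.servers[name] = True

    def pick_server(self):
        # Route traffic only to servers that passed their last health check.
        healthy = [s for s, ok in self.servers.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy servers available")
        return random.choice(healthy)


lb = FaultTolerantBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")                # web2 fails a health check
assert lb.pick_server() != "web2"   # traffic shifts to the remaining servers
```

The key design point is that routing decisions consult the current health state on every request, so a failing server is bypassed as soon as it is marked down.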
Candidates are examined in the following areas:
- Configurable load—The ability to configure load balancers to distribute load to multiple servers is examined.
- TCP offload—Candidates must be aware of how a load balancer can maintain and reuse TCP connections to the back-end servers, reducing connection-setup overhead and making communication within the network more efficient.
- SSL offload—Candidates must also know that SSL encryption and decryption add significant overhead on the servers. Instead of having each individual server handle encryption and decryption, an intelligent load balancer capable of handling the process can be acquired.
- Caching—Candidates must be aware of the caching capabilities of some load balancers, which can significantly improve response times. For instance, instead of requesting the same information over and over from the servers, some load balancers are capable of caching that information locally.
- QoS—Some load balancers allow for the configuration of quality of service such that some applications/web-servers/databases may get priority over others based on certain criteria. Candidates will be required to know which situations would be ideal for this type of configuration.
- Content Switching—Some load balancers can distribute applications across dedicated sets of servers; for instance, one application might be loaded on ten servers configured to handle its type of traffic, while a different application runs on other servers with capabilities suited to its data.
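Two of the areas above, content switching and caching, can be illustrated together in a short Python sketch. The server pools, URLs, and the stand-in fetch logic are hypothetical; the point is only to show routing by content type and serving repeat requests from a local cache.

```python
class ContentSwitchingBalancer:
    """Sketch of content switching plus a simple response cache.

    Pool names and the fetch step are illustrative placeholders,
    not a real load balancer implementation.
    """

    def __init__(self):
        # Content switching: each content type maps to its own server pool.
        self.pools = {
            "video": ["video1", "video2"],
            "image": ["img1", "img2"],
        }
        self.default_pool = ["web1", "web2"]
        self.next_index = {}   # per-content-type round-robin cursor
        self.cache = {}        # caching: URL -> previously fetched response

    def route(self, url, content_type):
        # Serve from the local cache when possible, avoiding a back-end trip.
        if url in self.cache:
            return "cache", self.cache[url]
        pool = self.pools.get(content_type, self.default_pool)
        i = self.next_index.get(content_type, 0)
        server = pool[i % len(pool)]
        self.next_index[content_type] = i + 1
        response = f"response from {server}"   # stand-in for a real fetch
        self.cache[url] = response
        return server, response


lb = ContentSwitchingBalancer()
print(lb.route("/intro.mp4", "video"))  # first request goes to a video server
print(lb.route("/intro.mp4", "video"))  # repeat request is served from cache
```

A request type outside the configured pools falls through to the default pool, which mirrors how content switches typically handle unmatched traffic.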
Candidates must now be aware of the way traffic can be scheduled to go to different servers that are behind a load balancer. Let’s examine a few:
Round-robin—Each server is selected in turn, so traffic is distributed evenly across the pool. Weighted round-robin allows a server to receive twice as much, or half as much, traffic as the others. Dynamic round-robin monitors the traffic going to the servers so that, if one server is more loaded than the rest, the load balancer uses the other servers first.
Active/Active—All servers are active within the network simultaneously, and any incoming request can be serviced by any of them.
Affinity—Some applications are built to communicate in real time with a single web server and do not understand how to communicate with multiple web servers. In this case, a property of some load balancers called affinity is applied: the load balancer dedicates a particular web server to a particular user or application, assigning that session to the corresponding server until the session expires.
Active/Passive—Here some servers are configured to be active while others are set to be on standby. In this case, if anything were to happen to cause the active servers to fail, the load balancer detects this and the standby servers kick in.
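The scheduling methods above can be sketched in Python. This is an illustrative sketch only; the server names and session identifiers are hypothetical, and real load balancers implement these algorithms in hardware or optimized software.

```python
from itertools import cycle


# Plain round-robin: servers are selected in turn.
def round_robin(servers):
    return cycle(servers)


# Weighted round-robin: a server with weight 2 appears twice per cycle,
# so it receives twice the traffic of a weight-1 server.
def weighted_round_robin(weights):
    expanded = [name for name, w in weights.items() for _ in range(w)]
    return cycle(expanded)


# Affinity (sticky sessions): each client is pinned to one server
# for the life of its session.
class AffinityBalancer:
    def __init__(self, servers):
        self.rotation = cycle(servers)
        self.sessions = {}

    def pick(self, client_id):
        if client_id not in self.sessions:
            # First request: assign a server and remember the mapping.
            self.sessions[client_id] = next(self.rotation)
        return self.sessions[client_id]


rr = round_robin(["web1", "web2"])
print([next(rr) for _ in range(4)])   # ['web1', 'web2', 'web1', 'web2']

wrr = weighted_round_robin({"web1": 2, "web2": 1})
print([next(wrr) for _ in range(3)])  # ['web1', 'web1', 'web2']

aff = AffinityBalancer(["web1", "web2"])
print(aff.pick("alice"), aff.pick("alice"))  # same server both times
```

Dynamic round-robin and active/passive failover would additionally consult live load or health data before each pick, in the same spirit as the fault-tolerance sketch earlier in this article.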
The new revision of the exam takes a practical approach to load balancers, examining the different configurations that enhance security. This time around, the exam goes a step further to consider traffic flow within the network, ensuring that security is emphasized from all angles.