Web server security: Infrastructure components
Cybercriminals understand that your website is not only the face of your organization but often also its weakest link. With just one misconfigured port, malicious spearphishing email or unpatched vulnerability, an attacker can deploy a range of techniques and tools to enter a network and then move undetected through it to find a valuable target. Once a target is found, data can be exfiltrated, modified, deleted or all of the above, depending on the attacker’s motive, all while their movements blend in with legitimate network traffic.
All of this is enabled through web servers, which makes these devices vital not only to communication but also to your organization’s security posture. However, because web servers by their very nature sit near the edge of your network, they are designed to be accessed and pinged, sharing at least basic information about your organization with anyone in the outside world.
Learn Network Security Fundamentals
To continue the Infosec Skills series on web server protection, this article focuses on the infrastructure components that can be deployed to keep attackers at bay, monitor for malicious activity, log session activity or even stop cybercriminals in their tracks. While our review is by no means comprehensive, we will look at some of the most commonly used tools that contribute to web server hardening.
The infrastructure components of web server protection
A firewall is a device configured to protect and isolate an organization’s internal network from external traffic, allowing only specific connections to pass through well-monitored ports according to pre-defined rules. Firewalls can be implemented in either software or hardware form and can control both inbound and outbound traffic flow based on specified criteria, such as IP addresses, time ranges or the type or destination of a network request.
A firewall is often considered the first line of an organization’s cyber defenses. However, firewalls fall short in their ability to perform other functions such as authentication, deep packet inspection and content monitoring.
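The rule-matching behavior described above can be sketched in a few lines. This is an illustrative model only, not how real firewalls are built (those operate on packets in the kernel or on dedicated hardware); the rule fields and addresses below are hypothetical.

```python
import ipaddress

# Hypothetical rule table: first match wins, default-deny otherwise.
RULES = [
    {"action": "allow", "src": "0.0.0.0/0", "dst_port": 443},  # HTTPS from anywhere
    {"action": "allow", "src": "0.0.0.0/0", "dst_port": 80},   # HTTP from anywhere
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 22},  # SSH from internal only
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action for the first matching rule; deny by default."""
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["dst_port"]):
            return rule["action"]
    return "deny"  # implicit default-deny, a common firewall posture

print(evaluate("203.0.113.9", 443))  # allow
print(evaluate("203.0.113.9", 22))   # deny: SSH is blocked from outside
```

Note the default-deny fallthrough: anything not explicitly permitted is rejected, which mirrors the "reject all other incoming connection requests" posture described below.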
Firewalls: Placement and configuration
The placement and configuration of the web server itself is also a key security decision. One school of thought is to “lock down” a web server so that only two services are accessible to the outside world (HTTP on port 80 and HTTPS on port 443) and to place the web server outside of an organization's enterprise network firewall. All other incoming connection requests would then be rejected, while port 80 and port 443 traffic would flow through other security products and firewalls for additional filtering.
Alternatively, a web server can be placed and configured behind an additional firewall to create a demilitarized zone (DMZ) between an organization’s internal local network and the external internet. Within this DMZ, an organization’s external-facing devices and data (including the web server) would be accessible from the internet, while internal services would be further protected by a second layer of firewall filtering.
An advantage of placing a web server outside of the firewall is that the device will receive many inbound connection requests from unknown sources no matter where it sits. If an attacker compromises a web server located outside the firewall, they are still blocked by the enterprise firewall; in other words, the attacker has not gained a foothold within a subnet of your network from which to expand access, as could happen in a DMZ implementation.
Intrusion detection and prevention systems
When it comes to active web server defense, two of the most commonly known tools are intrusion detection systems (IDS) and intrusion prevention systems (IPS). Both can be deployed as either hardware devices or software applications.
An IDS uses signatures to detect and analyze traffic as it flows into and out of a network through the web server, hunting for known threats and for abnormal activity based on the profile of the network. In practice, an IDS works by:
- Scanning files moving through the network against known malware signatures
- Comparing processes and traffic flow against known patterns to flag potentially harmful deviations
- Monitoring user activity and behavior to identify potentially malicious actions
- Monitoring the web server for changes to device or network configurations
If any of the above is detected, the IDS not only initiates a system alert to security professionals but, in some deployments, can also sever the connection of the hosts involved in the activity.
However, an IDS is not a fail-safe solution. Because IDS systems use previously identified and loaded signatures to locate abnormal activity and malicious content, zero-day threats — those that are new or have not been reported on — can pass through the network uninhibited and undetected.
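The signature-matching approach, and its blind spot for zero-days, can be sketched as follows. This is a toy model: the "signature database" is a single hypothetical hash, not real malware intelligence.

```python
import hashlib

# Hypothetical signature database of known-bad SHA-256 hashes.
# (This example entry is the hash of an empty payload, used purely for
# illustration; real IDS signature feeds contain thousands of entries.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan(payload: bytes) -> bool:
    """Return True if the payload matches a known signature."""
    digest = hashlib.sha256(payload).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(scan(b""))            # True: matches the example signature
print(scan(b"new threat"))  # False: a never-before-seen payload slips through
```

The second call illustrates the zero-day problem described above: a payload with no matching signature passes the check unflagged, no matter how malicious it is.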
These drawbacks are why IDS devices are often paired with an IPS, which constantly monitors for ongoing attacks by proactively inspecting your network’s incoming and outgoing traffic for malicious content that should be flagged and filtered out. An IPS drops malicious traffic, blocks known malicious IP addresses and alerts security professionals based on predefined rules, behavioral analysis and signatures set up in its associated database.
As with other security features for web servers, regular log reviews should be conducted to make sure rules and alerts are kept current for an organization’s network use as well as to help to identify and resolve potential false positives.
Web application firewalls
In addition to network firewalls, organizations can also deploy web application firewalls (WAFs) to help secure their web-facing applications. WAFs add another layer to your security profile by filtering traffic between your web applications and the internet and by continuously monitoring and logging activity between the two. To perform these functions, WAFs use a set of predefined rules and policies that can be easily updated and changed as needs evolve.
A WAF cannot protect against all types of attacks. But when used with other security tools, a WAF can help to mitigate the impact of SQL injection, cross-site request forgery, cross-site scripting and file inclusion attacks — for example, by evaluating the commands that these attacks utilize against “normal” activity. Similarly, a WAF can help to blunt the impact of a Distributed Denial of Service (DDoS) attack by filtering out and rate-limiting traffic that could otherwise overwhelm the application and degrade its function.
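To make the rule-based filtering concrete, here is a deliberately simplified sketch of WAF-style request inspection. The patterns below are crude illustrations; production WAFs use far richer rule sets (for example, the OWASP ModSecurity Core Rule Set).

```python
import re

# Simplified, illustrative attack patterns -- not production-grade rules.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # crude SQL injection pattern
    re.compile(r"(?i)<script\b"),              # crude cross-site scripting pattern
    re.compile(r"\.\./"),                      # path traversal / file inclusion
]

def inspect(param: str) -> str:
    """Return 'block' if any signature matches the request parameter."""
    for pattern in SIGNATURES:
        if pattern.search(param):
            return "block"
    return "pass"

print(inspect("id=42"))                                   # pass
print(inspect("id=1 UNION SELECT password FROM users"))   # block
```

As with an IDS, this kind of pattern matching only catches what the rules anticipate, which is why a WAF is one layer among several rather than a complete defense.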
WAFs can be implemented at one of three different levels:
- Network-based: WAFs at this level are usually hardware-based, requiring their own equipment, storage and maintenance. However, network-based WAFs can monitor multiple applications with a large capacity and minimal latency, helping to ensure security without degrading business operations.
- Host-based: A host-based WAF is integrated into the application’s software or the server running it. This type of WAF is often less expensive to deploy and operate and also allows for more customization as policies need to be written for only the target service. However, host-based WAFs utilize the same local application server resources and can introduce more complexity during application maintenance and implementation.
- Cloud-based: As with other cloud services, a cloud-based WAF can be an affordable and less-intensive option. Because cloud-based WAFs can be deployed with just a DNS-based redirection of traffic, implementation is very simple and ongoing maintenance is handled with monthly or annual service agreements. Once in place, the WAF can be continuously patched and the cloud provider can assist with modifications due to application-level changes. While many organizations enjoy this “hands-off” approach, others may not like having to work through an external party to make changes or triage a problem.
Load balancers
While not typically thought of as part of an enterprise security model, load balancers can also contribute to web server protection. Load balancing is the process of evenly distributing incoming network traffic across a pool of servers — web or application — so that no single server is overburdened.
Load balancers essentially serve as a “traffic cop” in front of your servers, reliably and quickly handling hundreds, thousands or even millions of concurrent network events for content, images or application calls. Throughout this process, load balancers produce metrics about the health, responsiveness and performance of individual devices, alerting security professionals to potential problems that range from low computing resources to DDoS attacks.
Without a load balancer, individual servers can easily become overworked, degraded or knocked offline, shifting even more work onto the remaining web servers and straining them as well. With a load balancer in place, new servers can be added to the pool and requests are automatically reallocated across the available resources.
Load balancers can be implemented with one of several different methodologies, depending on the type and volume of network requests:
- Round robin: A simple technique in which identical servers provide the same services and a DNS server moves through the server IP addresses sequentially
- Least connection: Unlike round robin, which simply follows a fixed order, this method takes current server load into consideration when new requests arrive. The server with the fewest active connections (and therefore the most available capacity) is given the next connection.
- Least response time: This technique weighs current server response time against current server load to allocate requests so that computing resources are used evenly over time.
- IP hash: When a source IP commonly connects to a web server, the system can hash the requesting IP and the destination IP together to generate a unique key. Each time a connection is requested, the source IP then connects to the same server, allowing previous sessions to continue or certain content to be accessed.
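Two of the methods above are simple enough to sketch directly. The server names are hypothetical, and a real load balancer would of course track live connections and health rather than a static list.

```python
import hashlib
import itertools

SERVERS = ["web-1", "web-2", "web-3"]  # hypothetical server pool

# Round robin: cycle through the pool in a fixed order.
_rr = itertools.cycle(SERVERS)

def round_robin() -> str:
    return next(_rr)

# IP hash: hash source and destination IPs into a key so the same client
# consistently lands on the same server (useful for session persistence).
def ip_hash(src_ip: str, dst_ip: str) -> str:
    key = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).hexdigest()
    return SERVERS[int(key, 16) % len(SERVERS)]

print([round_robin() for _ in range(4)])  # ['web-1', 'web-2', 'web-3', 'web-1']
# The same client/destination pair always maps to the same server:
assert ip_hash("203.0.113.9", "198.51.100.5") == ip_hash("203.0.113.9", "198.51.100.5")
```

The contrast is the point: round robin ignores who is asking, while IP hash trades perfectly even distribution for per-client stickiness.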
Ultimately, load balancers add many benefits to web server management that can indirectly strengthen security controls, including mitigating errors or attacks that cause downtime, facilitating redundancy within a server pool and adding the flexibility to use computing resources efficiently.
File integrity monitoring
While IDS and IPS devices look for changes in network traffic or known signs of malicious content, organizations can also look toward file integrity monitoring (FIM) solutions to validate the reliability and integrity of their systems.
FIM is based on the fact that IT systems today store and analyze data in a file-based architecture: system and application data, logs and operating system binaries are all organized in files. Whether by accident or with malicious intent, changes to organizational data and modifications to operating system and application binaries can have huge negative effects on business operations.
In either case, when a change to these critical files is detected, whether made by an unknown user, outside an approved time window or in an unusual way (depending on the configuration of the system), FIM technology helps to make sure these events are tracked and, when needed, security professionals are alerted.
When paired with other network monitoring solutions, an FIM helps to monitor for changes to systems regardless of where the user is. In practice, the FIM is implemented either within the kernel of the device’s operating system, so that every file (hash) change is checked and logged in real time, or at an enterprise level, where files and their attributes are periodically checked against a predetermined baseline.
Under both approaches, the FIM will alert security professionals to changes or deletions affecting monitored files and directories. The role that an FIM can play in an organization is especially important as its network infrastructure grows, which equates to a bigger attack surface and a larger “area” in which malicious changes can be made to configuration files, databases, logs and other applications.
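The baseline-comparison approach described above can be sketched minimally. A real FIM tool also tracks permissions, owners and timestamps and runs continuously; this sketch only hashes file contents on demand.

```python
import hashlib
from pathlib import Path

def baseline(paths) -> dict:
    """Record a SHA-256 hash for each monitored file."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def compare(old: dict, paths) -> list:
    """Re-hash the files and report deletions and modifications."""
    alerts, current = [], {}
    for p in map(str, paths):
        try:
            current[p] = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        except FileNotFoundError:
            alerts.append(f"DELETED: {p}")
    for p, digest in old.items():
        if p in current and current[p] != digest:
            alerts.append(f"MODIFIED: {p}")
    return alerts
```

A typical use: take a baseline of configuration files after a known-good deployment, then run `compare` on a schedule and alert on any non-empty result.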
Content delivery networks
Content delivery networks (CDNs) are a little-known component of the backbone of the internet, yet we interact with them many times every day without even knowing it. CDNs are transparent by design, sitting between consumers and the servers owned by content providers.
Originally created to help reduce the negative effects of latency (the delay between a client-side request and the delivery of content by the server) from one of the main causes, physical distance, CDNs effectively shorten the distance between a user and a web server. To do so, a CDN stores a cached version of a website’s content in multiple locations around the world, improving the speed and performance of a website for those visitors closest to it.
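The caching behavior at the heart of a CDN edge node can be sketched as a small time-to-live (TTL) cache. The TTL value and origin function here are illustrative assumptions, not how any particular CDN works.

```python
import time

CACHE: dict = {}  # path -> (content, time cached)
TTL = 60          # hypothetical freshness window, in seconds

def fetch_origin(path: str) -> str:
    """Stand-in for a (slow, distant) request to the origin web server."""
    return f"<content of {path} from origin>"

def edge_get(path: str) -> str:
    """Serve a cached copy when fresh; otherwise fetch and cache it."""
    entry = CACHE.get(path)
    if entry and time.time() - entry[1] < TTL:
        return entry[0]  # cache hit: no round trip to the origin
    content = fetch_origin(path)
    CACHE[path] = (content, time.time())
    return content
```

Because repeat requests within the TTL never reach the origin, the edge node both shortens the distance to the user and shields the origin server from request floods, which is also why CDNs help against DDoS traffic.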
In addition to improving the experience of visitors to an organization’s website, CDNs can also help to provide additional security features. Because they sit outside of your own core network infrastructure, CDNs can, for example:
- Help to balance unusual levels or types of web server requests
- Serve as an additional load balancer
- Prevent the flooding effects of a DDoS attack
- Stop certain types of known malicious traffic from reaching your true web or application server
Cloud computing
Cloud computing gives organizations another option when it comes to managing their network resources, data, infrastructure, services or all of the above. Depending on the services provided, an organization outsources the maintenance, security, management and provisioning of these services to a cloud provider, hosted at another geographic location. Additionally, these resources can be either privately managed (a private cloud) or shared among several customers (a public cloud) and can be easily scaled based on customer needs or payment model.
The benefit of a cloud model comes from the cloud provider’s ability to take over the ongoing maintenance, patching, management, triage and operation of services on behalf of the customer, leaving the customer with more time and resources to focus on other operations. As security events arise, attack methods evolve and web traffic changes, the cloud provider is expected to adapt to match them, steps that many businesses do not have the human or financial resources to handle.
However, it is important for organizations to remember that their cloud network is just like their own local network; it just happens to be managed by another team in another place. Because the same types of hardware, software, security features, staff and access controls apply, organizations still need to have equal dedication and awareness of how their services are secured and managed. It is, after all, their data.
Conclusion: Bringing it all together
This article in the web server protection series has covered the major infrastructure components that can be used to help harden your organization’s network security at what is potentially its most vulnerable point. However, no single method, or even a combination of methods, can guarantee that your web server and the online presence it enables will be defended against a determined attacker, now or in the future.
But by understanding how all the different web server protection components work together, including their individual strengths and drawbacks, organizations can take the needed steps to reduce the chances that they will fall victim to a cyberattack. Or, at the very least, frustrate an attacker long enough that they look for a more vulnerable target.
Learn Web Server Protection