Security+ Study Guide: Domain 1.0 – Network Security
For many years, IT and Security were not usually on speaking terms: IT's primary responsibility was simply to get everything working, and if the result wasn't up to best practices or completely secure, that wasn't necessarily their problem. Now, however, Security is a vital part of every single aspect of IT, and as a result there are requirements to show that administrators, technicians and managers know this material. CompTIA's Security+ certification is an advanced IT certification, but an entry-level Security certification.
As a result, it covers an enormous amount of information, mixing technical concepts with security ideologies. This makes it a great stepping stone for staff looking to specialize in security, and a requirement for those that need to acquire or maintain DoD 8570 compliance. It is recommended that anyone starting out in IT go through the standard CompTIA ladder before beginning this certification: A+, Network+, Security+. Security+ assumes that users already know the information in the previous two certifications, and test takers are required to hit the ground running.
To that end, the following can be used as a Study Guide for the CompTIA Security+ SY0-401 exam, also known as Security+ 2014. This does not mean that the material in this Study Guide is not beneficial to those without other CompTIA certifications; however, to get the most out of it you will want some background familiarity already, whether that is on-the-job or in the classroom.
Let’s get started with our first domain: Network Security
1.1 Implement security configuration parameters on network devices and other technologies
Many home users use a software firewall on their PCs, due to cost concerns. While software firewalls can be very effective, they do slow down the performance of a system compared to a dedicated appliance- whether that appliance is physical, virtual or a repurposed server. Most small to medium organizations will use either a dedicated appliance or a re-purposed PC for their firewall, while larger organizations can vary wildly depending on their requirements.
Many home users use their router as an all-in-one device: DHCP server, switch, wireless access point, and firewall in addition to routing functions. While there is no reason why all of these functions cannot exist in the same device, as the requirements of the network become higher, these functions will be divided up onto separate dedicated devices.
Switches are the backbone of any network. While there is a strong push to go to full wireless access in many environments, the bandwidth and security of Wi-Fi cannot yet match that of a standard switched network.
• Load Balancers
When trying to keep up with extremely high demand to access a resource such as Amazon.com, having a single web server, no matter how large, is not enough to deal with rush demands and malicious attacks. Distributing that load to multiple servers, in multiple clusters, across multiple sites around the world helps to keep the lifetime of the servers more manageable, and the access times more reasonable for the users.
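The simplest distribution strategy a load balancer can use is round-robin: each incoming request goes to the next server in the pool. A minimal Python sketch of the idea (the server names are hypothetical; real balancers also track health checks, sessions, and weights):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand incoming requests to each server in the pool in turn."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        return next(self._pool)

lb = RoundRobinBalancer(["web-01", "web-02", "web-03"])
# Six requests arrive: each server receives every third one
assignments = [lb.next_server() for _ in range(6)]
print(assignments)
```

Spreading requests this way keeps any single server from absorbing the full rush, which is exactly the lifetime and access-time benefit described above.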
While it can be tempting to use a 'free web proxy' as opposed to paying for the service or rolling your own, because these systems are not known and trusted, using them can be an even bigger risk than your standard access method. The reason is that all of your traffic passes through an untrusted point that could be listening to both sides of the conversation: you transmitting credentials to the web server, and the web server responding with the data you asked for- a dangerous combination.
Note: TOR (The Onion Router) is not a traditional one-stop proxy server; rather, TOR runs requests through a series of jumps from network to network around the world before reaching their destination, helping to anonymize the user. It is not the best solution for most people, as the time required to load websites and other resources increases significantly when using TOR. Using the network for tasks such as peer-to-peer downloading is also frowned upon, and TOR exit nodes have been compromised in the past- potentially revealing the traffic passing through them- so use with caution.
• Web security gateways
A default gateway is required in any TCP/IP network that needs to reach other networks; a web security gateway, however, is slightly different. A web security gateway still acts as a default gateway, but also incorporates certain functions of a firewall, a proxy server, and a threat analyzer.
• VPN concentrators
Note: The actual device known as a VPN concentrator is the backend server or appliance that allows clients to connect to the office. Users run a VPN client on their PCs to connect back to the office and gain access to resources as if they were on the local network. They also gain an encrypted tunnel for traffic routed through the VPN, so in locations such as hotel Wi-Fi hotspots this can increase security dramatically.
• NIDS and NIPS
IDS (Intrusion Detection Systems) and IPS (Intrusion Prevention Systems) both give information about what is going on in your network at a higher level than anti-virus and anti-malware can provide. This is critical when you may be dealing with custom malware that signature-based defenses have not seen before. If you can identify subtle variations in user activity, or communications between servers that don't normally talk to each other, that very well could be the only warning you receive before your organization ends up on the 11 o'clock news. The large difference between the two is that IDS is purely passive listening, while IPS can take action on its own without user intervention. Both IDS and IPS come in two classes: anomaly-based detection, which compares activity to a baseline and flags the unusual, and signature-based detection. Signature-based detection can be more accurate for what it recognizes, but that's as far as it goes.
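A toy version of anomaly-based detection can make the baseline idea concrete: flag anything more than a few standard deviations from normal. The traffic numbers below are invented, and real systems use far richer models, but the principle is the same:

```python
import statistics

def is_anomalous(baseline, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the baseline mean (a toy anomaly-detection rule)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observation - mean) > threshold * stdev

# Baseline: typical requests-per-minute seen from a server (hypothetical)
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_anomalous(baseline, 101))  # ordinary traffic -> False
print(is_anomalous(baseline, 500))  # sudden spike -> True
```

A signature-based system, by contrast, would only flag the spike if it matched a pattern it had been given in advance.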
• Protocol analyzers
In large organizations, a protocol analyzer can mean the difference between business as usual, and vital information walking out the door- and more importantly, being able to show evidence of how, when, and where the information started moving.
• Spam filter
Spam is unwanted and/or dangerous email. Email filtering allows for either individual users or organizations to identify potential threats and block them before they can get to users.
• UTM security appliances
UTM’s can be viewed as a one-stop shop for network security for small to medium businesses (SMB), and can give them a substantial security posture for a fraction of the cost of higher-grade vendors. Many times however, these devices may come with limits that prevent them from scaling up for larger businesses- such as hard caps on the number of users that they can support.
• Web application firewall vs. network firewall
While specific types of traffic may be allowed to pass through a standard firewall because they are required for regular web access, those same types may prove dangerous when dealing with particular types of web apps. Therefore, an additional firewall would be necessary to prevent unintended access and keep malicious users from gaining information beyond what is allowed.
• Application aware device
There are actually three different kinds of filtering based on applications. The first is the traditional style used by software firewalls: whitelisting applications. For example, the standard Windows Firewall asks whether the user will allow a specific application through; if allowed, any traffic originating from that application is permitted. The second kind is more granular, allowing specific aspects of an application through while blocking others- for example, allowing Microsoft Word users access to Word's online help system while blocking everything else. This can take a long time to set up, however, and may not be much more effective than the first type if the program is regularly updated. The third kind is a method of Deep Packet Inspection (DPI) known as Deep Content Inspection (DCI). While it is far more effective than simply giving a specific application carte blanche permissions, some security professionals and privacy advocates believe it goes too far in the opposite direction- potentially giving Internet Service Providers (ISPs) and governments access to information that could be considered personal and protected.
1.2 Given a scenario, use secure network administration principles.
• Rule-based management
A person or group is granted access to what they need to perform their tasks and nothing more. This is done to protect both the organization and the user.
• Firewall rules
Firewalls have two types of rules: explicit and implicit. An explicit rule is laid out expressly to allow or deny access to a particular resource- for example, allowing access to amazon.com for a particular IP range in your network. An implicit rule is a generic rule that is either inherited via other means or acts as a final 'catch-all'- for example, after all other rules are applied, a 'deny all' blocks everything that doesn't need access.
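The explicit-rules-then-implicit-deny pattern can be sketched as a first-match rule engine in a few lines of Python. The rules and addresses below are hypothetical, and real firewalls match on far richer criteria (interfaces, protocols, connection state), but the evaluation order is the key idea:

```python
# Each rule is (action, predicate). Rules are checked top-down;
# the first match wins, and unmatched traffic is implicitly denied.
RULES = [
    ("allow", lambda pkt: pkt["dst_port"] in (80, 443)),        # explicit allow: web traffic
    ("deny",  lambda pkt: pkt["src_ip"].startswith("10.9.")),   # explicit deny: a blocked range
]

def evaluate(packet):
    for action, matches in RULES:
        if matches(packet):
            return action
    return "deny"  # implicit deny: anything not expressly allowed is blocked

print(evaluate({"src_ip": "10.1.1.5", "dst_port": 443}))  # allow (explicit)
print(evaluate({"src_ip": "10.1.1.5", "dst_port": 25}))   # deny (implicit)
```

Note that rule order matters: because the web-traffic allow comes first, even a source in the blocked range can reach ports 80 and 443, which is exactly the kind of interaction that makes real rule sets tricky to audit.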
• VLAN management
VLANs are virtual networks within your overarching network. This allows for restrictions in traffic within the system, reduces the amount of traffic going across the entire network all the time, and can simplify network management considerably.
• Secure router configuration
Default settings on a router are extremely dangerous- potentially allowing access from anywhere on the planet. Locking down the router so that it can be managed only from the local network or through a console port is a great start.
• Access Control Lists
Access control lists are the basis for nearly all types of file-level access. Whether it is granting access to individuals or groups, they are still assigned access via some form of list.
• Port Security
Port Security allows only specific ports – specific kinds of traffic- to be open at any given time. For example, on a web server you might only want ports 80 and 443 to be open while blocking everything else.
802.1X is a method by which clients gain access to a network by authenticating against a RADIUS or similar authentication server.
• Flood guards
A basic defense against DDoS attacks, flood guards attempt to prevent traffic that could potentially overwhelm the network. Please note however that to be effective, this needs to be adjusted manually to meet the needs of your network- it doesn’t come out of the box with automatic settings.
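One common mechanism behind flood guards is a token bucket: traffic spends tokens, tokens refill at a fixed rate, and a burst beyond the bucket's capacity gets dropped. This is a minimal sketch- the capacity and refill rate here are the kind of settings you would tune manually to fit your network:

```python
class TokenBucket:
    """Token-bucket rate limiter: packets beyond the configured
    burst capacity and refill rate are dropped."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at bucket capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1)
# A burst of 5 packets all arriving at t=0: only the first 3 get through
results = [bucket.allow(0.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Time is passed in explicitly here to keep the sketch deterministic; a production limiter would read a monotonic clock instead.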
• Loop protection
A loop can form in a network when the path to a particular resource becomes confused. Node A is trying to send traffic to Node C through Node B- the shortest path it knows of at the moment- so the traffic path would be A-B-C. However, if the connection between B and C breaks, B will try to reroute the incoming traffic back through A to reach C directly, making the path B-A-C. A still doesn't know about the break, though, so it keeps trying to push the data back to B to get to C, and so on.
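The A-B-C scenario above can be simulated directly. The sketch below uses hypothetical forwarding tables frozen mid-failure (A still points at B, while B has rerouted traffic for C back through A), with a TTL-style hop limit standing in for the protection that keeps packets from circulating forever:

```python
# Next-hop tables after the B-C link breaks: A -> B and B -> A for
# traffic destined to C, which forms a forwarding loop.
FORWARDING = {"A": {"C": "B"}, "B": {"C": "A"}}

def route(src, dst, hop_limit=8):
    """Follow next-hop entries toward dst; the hop limit drops
    packets that would otherwise bounce around a loop forever."""
    path, node = [src], src
    while node != dst:
        if hop_limit == 0:
            return path, "dropped: hop limit exceeded (loop suspected)"
        node = FORWARDING[node][dst]
        path.append(node)
        hop_limit -= 1
    return path, "delivered"

path, status = route("A", "C")
print(path, status)  # the packet bounces A-B-A-B... until the hop limit drops it
```

Real loop protection on switches (such as Spanning Tree Protocol) prevents the loop from forming at all rather than just discarding the looping traffic, but the hop limit shows why unprotected loops are so destructive.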
• Implicit deny
Implicit deny is a firewall rule applied after all others have been. This is a ‘catch-all’ rule, designed to block all other traffic that has not been expressly allowed.
• Network separation
When there are networks that carry different levels of information- such as a guest network and an internal network, for example- they must be kept separate at all times. This can take many forms, from logical separation such as VLANs to complete physical separation on dedicated hardware.
• Log analysis
Logs are generated by nearly everything in a network, recording all kinds of good and bad activity- but they are useless unless they are gone through. While at first this sounds unmanageable at larger scales, you do not need to go through every single line by hand- there are automated tools designed to scan logs for specific purposes and report the results in a human-readable format.
• Unified Threat Management
UTMs were covered under 'UTM security appliances' in objective 1.1: a one-stop shop for network security for small to medium businesses (SMB), offering a substantial security posture for a fraction of the cost of higher-grade vendors, but often with hard limits- such as caps on the number of supported users- that prevent scaling up to larger organizations.
1.3 Explain network design elements and components.
The DMZ is a zone between the web and your internal network- a place for outward-facing servers that does not require opening up your entire network.
The basic idea of a subnet mask is to quickly tell systems on a network where the network identifier stops and the client space starts. For example, in a typical home network the address space usually looks like 192.168.1.x. The network identifier takes up the first three octets, leaving the fourth octet free for client addresses to be issued. This gives us two critical values. The first is the prefix length: three octets of 8 bits each make a 24-bit network portion, leaving the final 8 bits free for clients to use. The second is the subnet mask itself, usually shown in dotted decimal form as 255.255.255.0- where each 255 marks network identifier space, and the 0 marks client space.
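Python's standard `ipaddress` module can do this arithmetic for you. The sketch below uses the 192.168.1.x home network from the example above:

```python
import ipaddress

# A typical home network: 24-bit network portion, 8 bits left for hosts
net = ipaddress.ip_network("192.168.1.0/24")
print(net.prefixlen)      # 24  (the 24-bit network portion)
print(net.netmask)        # 255.255.255.0  (dotted-decimal subnet mask)
print(net.num_addresses)  # 256 addresses in the final octet's client space
print(ipaddress.ip_address("192.168.1.42") in net)  # True
```

Note that of the 256 addresses, two are reserved by convention (the network address .0 and the broadcast address .255), leaving 254 usable host addresses.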
VLANs (Virtual Local Area Networks) are virtual networks within your overarching network. This allows for restrictions in traffic within the system, reduces the amount of traffic going across the entire network all the time, and can simplify network management considerably. For example, you can create one VLAN for sales, one for engineering, one for guests and so on.
Network Address Translation- originally created to avoid using up large numbers of IPv4 addresses for internal networks, it allows for the assignment of addresses in specific reserved pools but still allow communication outside of the local network.
• Remote Access
Remote Access extends the user access beyond the physical location of the network, usually via a combination of software and hardware such as a VPN tunnel or similar technology.
Telephony is a term most often used in the context of VoIP traffic and related technology, but it can also be applied to anything that transmits voice.
Network Access Control- a catchall term for many different types of technologies- boils down to the idea of filtering what is actually allowed to connect to the network. One of the easiest ways to think about this is sticky ports. In a typical organization, open network jacks can be everywhere, so it can be tempting for a visitor to pop down at an empty desk and plug in. With sticky ports enabled on the network switch, however, unless the device is authorized on that particular port, the port will shut down immediately.
Modern servers are so powerful that giving every single separate process in a typical organization its own physical box would not only leave a lot of equipment idle most of the time, but also add an enormous expense in electricity and cooling. Virtualization reduces this by using fewer physical servers, each capable of running many virtual machines- individual running instances of operating systems- at once.
• Cloud Computing
Taking the idea of virtualization a step further, Cloud Computing allows organizations with enormous amounts of processing power, storage space, and bandwidth to sell that to organizations that don’t necessarily want the hassle of managing pieces of their infrastructure locally. This can take many different forms:
o Platform as a Service
A service that allows end users to develop software for the web without having to worry about many of the lower-level settings required to manage servers.
o Software as a Service
A step further down than Platform as a Service, this allows organizations to have most of their software stored, installed, and managed offsite. They access this software with a remote access session such as RDP or a web browser.
o Infrastructure as a Service
The bottom level of cloud computing, this allows full control to the remote organization, while the cloud providers maintain the hardware at their own facility. This can include virtual servers, offsite storage, and other networking-related functions.
A private cloud is a cloud structure locked behind the organization's firewall. This provides the benefits of cloud structure, but only to users already connected to the network.
A public cloud is the default state of cloud servers: the servers are visible to the web at large. This does not mean they are open to all users, however, as they can still be locked behind credentials and other authentication measures.
In some cases it is feasible to create a hybrid cloud that brings together services stored both inside the organization and outside into one seamless entity. This can potentially cause serious security concerns, but if done properly can yield significant savings.
If an individual organization needs the resources of cloud computing, but does not have the finances or need for all of the capabilities, a community cloud can be established. This allows several organizations to share the costs and benefits, while addressing common security concerns.
• Layered security / Defense in depth
Defense in Depth is best described as 'buying time'- allowing lightly defended areas to fall while draining attacker resources and gaining time to reinforce more significant areas. To properly test such a system, an attacker must be allowed to gain access in at least some scenarios; if a test or audit only ever tries to break in and fails, it remains unknown how much damage an attacker could do when they finally do break in.
1.4 Given a scenario, implement common protocols and services.
Internet Protocol Security: one of the foundations of VPN tunneling. This allows data to be encrypted over an unsecured channel, such as the Web, and transmitted safely from end to end.
Simple Network Management Protocol: one of the best tools available for monitoring network-attached devices. This allows one-stop-shop monitoring of the status of printers, switches, access points, etc. without having to log in to a dozen different portals. SNMP operates on UDP ports 161 (queries) and 162 (traps); when run over TLS or DTLS, it uses ports 10161 and 10162.
Secure Shell: an encrypted method of remotely connecting to devices for administration and for tunneling insecure protocols across the web. This operates over TCP port 22.
Domain Name System: a network’s phone book. DNS allows for the use of human-friendly names such as MYSERVER or www.amazon.com, without having to put in a potentially constantly changing IP address. DNS normally uses UDP port 53.
Transport Layer Security: the next generation of the Secure Socket Layer. Designed to prove that the computer on the other end of a connection is the one that it is supposed to be, TLS and SSL are commonly used to create encrypted connections from point to point. TLS and SSL protected traffic normally operates on ports that are different from their standard counterparts. For example: unprotected HTTP traffic operates on port 80, while protected HTTPS traffic operates on port 443.
Transmission Control Protocol and the Internet Protocol: the backbone of the modern network. TCP/IP is actually a suite of protocols, designed to work together to route traffic from source to destination. It does this by breaking down the data to be transmitted into small chunks, numbering them, and sending them over the wire to be reassembled on the other side. Because of this numbering scheme, TCP is able to re-transmit any data that is lost on its way over. The TCP/IP networking model has four distinct layers: The Link Layer (creates a connection over wired or wireless means), The Internet Layer (assigns an address to each node, and allows routing across multiple networks), The Transport Layer (allows data to be transmitted from node to node), and The Application Layer (User-visible operations)
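The reliable, reassembled delivery described above can be seen with a few lines of standard-library socket code. This is a minimal sketch- a throwaway echo server on the loopback interface- with the operating system's TCP stack handling the chunking, numbering, and retransmission underneath:

```python
import socket
import threading

# A tiny echo server on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

def echo_once():
    """Accept one connection and echo back whatever arrives."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()

# The client side: TCP guarantees the bytes arrive intact and in order
with socket.create_connection((host, port)) as client:
    client.sendall(b"hello, network")
    reply = client.recv(1024)
print(reply)  # b'hello, network'
server.close()
```

Everything the paragraph describes- segmentation, sequence numbering, retransmission of lost data- happens invisibly between `sendall` and `recv`, which is precisely why TCP is the backbone protocol for applications that cannot tolerate data loss.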
File Transfer Protocol over SSL: an SSL variant of the popular FTP protocol. FTP is considered highly vulnerable without some form of protection, since it broadcasts its authentication and activity in clear text. Therefore, anyone listening on the wire can very easily use that data for their own purposes. FTPS operates over ports 989 and 990.
HyperText Transfer Protocol over SSL: an SSL variant of the incredibly popular HTTP protocol. HTTP is great for transmitting data that does not need to be protected, but if you are logging onto something like a banking website- you want to know for sure that nobody is going to be able to get your credentials and use them against you. HTTPS operates over port 443.
Secure Copy: a method of transferring files while staying within the protection of the SSH protocol. Traffic will remain within the created SSH connection on port 22.
Internet Control Message Protocol: a method to see if a remote device is responding. The very popular ping command uses ICMP packets to see if the targeted system is available. While firewalls and even individual systems can choose to block ping, it is still a useful tool in the early stages of troubleshooting. ICMP does not use ports; it is IP protocol number 1.
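Instead of port numbers, ICMP messages carry a type, a code, and a 16-bit internet checksum. The sketch below builds a hypothetical Echo Request header in pure Python and computes the RFC 1071 checksum over it; the field values (identifier 1, sequence 1, payload `b"ping"`) are illustrative:

```python
import struct

def internet_checksum(data):
    """RFC 1071 ones'-complement checksum, as used by ICMP (and IP/TCP/UDP)."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Echo Request header: type 8, code 0, checksum 0 (placeholder), id 1, seq 1
header = struct.pack("!BBHHH", 8, 0, 0, 1, 1)
packet = header + b"ping"
checksum = internet_checksum(packet)
print(hex(checksum))
# A receiver recomputing over the packet with the checksum filled in gets 0
filled = struct.pack("!BBHHH", 8, 0, checksum, 1, 1) + b"ping"
print(internet_checksum(filled))  # 0
```

Actually sending such a packet requires a raw socket (and usually root privileges), which is why ping is typically a privileged or specially-flagged binary.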
Legacy IP addresses in dotted-decimal format- four octets, each 0-255, such as 192.168.1.1- with a maximum of roughly 4.3 billion (2^32) public IP addresses. This sounds like a lot until you start to think about how many networked devices the average person has associated with themselves at any given time.
New-type IP scheme, presented in the 1111:2222:3333:4444:5555:6666:7777:AAAA format. Unlike IPv4, this addressing scheme is in Hexadecimal. Combined with the larger addressing style, this potentially can have up to 3.4 x 10^38 addresses.
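The standard `ipaddress` module also handles IPv6, including the shorthand notation that drops leading zeros and collapses runs of zero groups. The address below comes from 2001:db8::/32, the range reserved for documentation examples:

```python
import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1  (zero runs collapsed with ::)
print(addr.exploded)    # full eight-group hexadecimal form
# Both spellings are the same address
print(ipaddress.ip_address("2001:db8::1") == addr)  # True
```

Being comfortable converting between the compressed and exploded forms is worth practicing, since exam questions and device configurations freely mix the two.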
Internet Small Computer System Interface, a protocol used by Storage Area Networks to make servers think they have very large local hard disks. iSCSI uses TCP ports 860 and 3260.
Fibre Channel is another type of connection for Storage Area Networks; it requires dedicated connections instead of using existing network connections. Note that while the name suggests it operates over fiber connections exclusively, this is not always the case.
Fibre Channel over Ethernet allows the Fibre Channel protocol to be used over standard network connections.
FTP (File Transfer Protocol) is a legacy method of transferring large files. It operates over TCP port 21 for control (and port 20 for data in active mode) and broadcasts in the clear.
SFTP (SSH File Transfer Protocol) is a modern method of transferring large files. It operates over port 22, using an encrypted SSH tunnel to transfer files.
Trivial File Transfer Protocol was primarily used for transferring files onto dedicated devices such as switches. Operating over UDP port 69, it has largely been superseded by SSH-based transfers such as SCP and SFTP.
Telnet is a program normally used for testing connections and remote administration. Operating by default on port 23, this has been primarily replaced by SSH.
HyperText Transfer Protocol, the primary protocol used for accessing Web Sites. Operating on Port 80, like FTP it transmits data in clear text and is slowly being replaced by HTTPS.
Network Basic Input Output System, it was a precursor to DNS. The primary way that users interact with it today is its naming convention, which must be 15 characters or less. NetBIOS operates on ports 137, 138 and 139.
FTP (File Transfer Protocol) - TCP ports 20/21
SSH (Secure Shell) - TCP port 22
SMTP (Simple Mail Transfer Protocol) - TCP port 25
DNS (Domain Name System) - UDP/TCP port 53
HTTP (HyperText Transfer Protocol) - TCP port 80
POP3 (Post Office Protocol v3) - TCP port 110
NetBIOS Session Service - TCP port 139
IMAP (Internet Message Access Protocol) - TCP port 143
HTTPS (HyperText Transfer Protocol over SSL) - TCP port 443
RDP (Remote Desktop Protocol) - TCP port 3389
• OSI relevance
While most networks operate on the TCP/IP model, OSI is the primary theoretical model used for dividing up the functions required of systems, services, protocols and connections. OSI contains 7 layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. TCP/IP on the other hand has four layers, each of which corresponds with counterparts in the OSI model:
Link Layer: Physical, Data Link
Internet Layer: Network
Transport Layer: Transport, Session (in part)
Application Layer: Session (in part), Presentation, Application
1.5 Given a scenario, troubleshoot security issues related to wireless networking.
Wi-Fi Protected Access, successor to the broken WEP (Wired Equivalent Privacy) and succeeded by WPA2 (Wi-Fi Protected Access v2), was the first standard to give reasonable security when implemented correctly. However, both WPA and WPA2 can be compromised through a known vulnerability in WPS (Wi-Fi Protected Setup). With WPS disabled, both WPA and WPA2 are reasonable for use in wireless networks.
Extensible Authentication Protocol is a framework for use when authenticating via Wi-Fi. This can be carried in many different forms, including the very popular RADIUS.
Protected Extensible Authentication Protocol combines EAP inside of a TLS tunnel for added security.
Lightweight Extensible Authentication Protocol was developed by Cisco and built around WEP. It has since been replaced by EAP and PEAP.
• MAC filter
Media Access Control filtering allows the creation of a blacklist or, more effectively, a whitelist of what is and is not allowed to connect to the network. MAC addresses are intended to be unique to individual devices, but they can be spoofed with readily available tools, so MAC filtering should be treated as an enhancement to existing security measures rather than a defense on its own.
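Because the same MAC address can be written several ways (colons, hyphens, mixed case), a whitelist check should normalize addresses before comparing. A minimal Python sketch, with hypothetical addresses:

```python
def normalize_mac(mac):
    """Canonicalize a MAC so '00-1A-2B-3C-4D-5E' and
    '00:1a:2b:3c:4d:5e' compare as the same device."""
    return mac.replace("-", ":").lower()

# Hypothetical whitelist of devices allowed on the network
ALLOWED = {normalize_mac(m) for m in ["00:1A:2B:3C:4D:5E", "AA-BB-CC-DD-EE-FF"]}

def is_allowed(mac):
    return normalize_mac(mac) in ALLOWED

print(is_allowed("00-1a-2b-3c-4d-5e"))  # True: same device, different notation
print(is_allowed("11:22:33:44:55:66"))  # False: not on the whitelist
```

Keep in mind the spoofing caveat above: a check like this only verifies what a device claims its MAC address is.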
• Disable SSID broadcast
Disabling the Service Set Identification broadcast of a Wireless Access point can allow only users that know where it is and what its name is to connect. This is not 100% effective on its own, since users with appropriate software can still detect that the network exists by checking the frequencies in use in a given area. Like MAC filtering, it is recommended to be used in conjunction with other security measures.
Temporal Key Integrity Protocol was a stopgap measure to secure Wi-Fi after WEP was shown to be insufficient. TKIP has been superseded and is no longer believed to be secure.
Counter Mode Cipher Block Chaining Message Authentication Code Protocol is the encryption protocol introduced with 802.11i; it was implemented in part with WPA and fully in WPA2. 802.11i was designed to replace the compromised RC4 cipher with the Advanced Encryption Standard (AES), a global standard for encryption.
• Antenna Placement
Wi-Fi Antenna placement can affect signal quality in a wide variety of ways depending on the number of antennas in use, the number of users in a given area, and the materials used in the construction of the building they are occupying. It is recommended to try to place antennas in areas with easy line-of-sight in order to allow for the best possible signal.
• Power level controls
Power levels in general for Wi-Fi equal signal strength- the higher the power usage, the greater the usable range of the wireless signal. This needs to be controlled effectively however, as you only want to cover your authorized area. Being able to sit in your car and watch Netflix outside of work is nice, but it also means that the person next to you can be trying to break into the network. Like most security measures, it is vital to give what is needed to do the job effectively and no more.
• Captive portals
A captive portal blocks access to all other traffic until the user authenticates. When the user opens up a web browser, they will be automatically redirected to a login page that will require either entering credentials, a payment method, or just accepting the Access Point’s use policy.
• Antenna types
The type of antenna in use determines where the Wi-Fi signal will be focused. Most homes and organizations use ‘Omni-Directional’ antennas, which allow for complete coverage within a given area, while ‘Directional’ antennas allow for much greater range but in a specific direction.
• Site surveys
Planning a Wi-Fi network can be a difficult process since every building creates its own challenges. For example, if you just look at the blueprints for a building, you could easily place Access Points in areas that seem like they would best serve the organization. When you are inside the building however, the material that it is made out of could easily block signals, there could be machines and appliances blocking radio frequencies, and other issues. With this in mind, performing a walkthrough of the building and examining the interference levels – a site survey- can better give an idea of where access points would best be placed.
• VPN (over open wireless)
A Virtual Private Network can be very useful for accessing data located remotely. There is another benefit, however, since it can route all traffic through its encrypted tunnel. This means that in areas with open wireless access, such as a coffee shop or hotel, the connection can be used if necessary with relative security. That said, it is still best to avoid attaching to an open access point when possible.