Security+: Risk Management Best Practices (SY0-401)
Please note: this article is based on information about the previous version of the Security+ exam (SY0-401), which expired in May of 2018. For updated information, please see our up-to-date Security+ listing.
Cybersecurity professionals deal with a constantly changing threat landscape. Information security knowledge becomes obsolete quickly, and professionals need regular interdisciplinary training to keep pace with the industry. It is therefore important to recognize the need for a skilled workforce ready to cope with the management challenges of cybersecurity.
The Computing Technology Industry Association (CompTIA) is a recognized leader in training cybersecurity professionals with up-to-date industry knowledge. The Security+ certification (SY0-401) offered by CompTIA is an industry-recognized credential for building cybersecurity risk management know-how.
Security+ (SY0-401) and Career Prospects
Security+ certification is a career boost for information security professionals. The U.S. Department of Defense (DoD) recognizes the Security+ certification as the baseline qualification for Information Assurance Technical (IAT) level 2 and Information Assurance Management (IAM) level 1 in its DoD Directive 8570. The Security+ certification is valuable in improving the income of cybersecurity professionals, particularly those working in network security and administration. The average annual salary of Security+ holders ranges from $42,128 to $95,829. Candidates with five years of experience earn approximately $66,887 annually. The DoD recruitment policy and these salary figures both reveal the value of the Security+ certification.
The Security+ certification covers six domains of expertise. They are 1) network security, 2) compliance and operational security, 3) threats and vulnerabilities, 4) application, data and host security, 5) access control and identity management, and finally 6) cryptography.
Business Continuity in a Nutshell
As one of the exam domains of Security+, business continuity can be summarized as the planning and reactions of an institution when it encounters a disruptive event, whether triggered by internal or external factors. Examples are power outages, natural catastrophes (hurricanes, floods, tsunamis, earthquakes, typhoons, etc.), or a vengeful employee wiping out valuable data. All pose severe challenges to the business routine of an institution. What solutions should the institution carry out to ensure that its operations and services remain uninterrupted? In what ways should the risk assessment be done? What actions should be included in the disaster recovery plan? These questions and others must be answered by institutions that want to achieve business continuity. In today's Internet-based economy, a simple power disruption can paralyze a considerable number of industries. In some scenarios, physical assets can be damaged, causing long-lasting impact on an institution's services.
Business continuity hence requires an anticipatory perspective to improve the resilience of the business. Careful planning is required to ensure that the business routine is not affected. It should provide an outline for businesses to identify their key products, services, activities, and external threats. If a disruptive event takes place, the business will be ready to carry on its activities and recover from the event, regardless of its size and cause.
Business Continuity Plan (BCP)
In order to address the aforementioned concerns in an increasingly connected world, a well thought-out and comprehensive business continuity plan (BCP) is indispensable to ensure the continuity of an institution's services and operations. The security officer can draft a successful BCP around a three-phase structure: 1) plan and business context, 2) BCP outline and implementation, and 3) testing and maintenance.

The first phase can adopt a risk assessment approach to examine the background of the institution and identify critical assets in case of a disruptive incident. As a fundamental step, the security officer should outline the business environment and nature so that the relevant personnel and resources can be assigned in the planning, development, and execution phases. This phase also allows the security officer to conduct an exhaustive risk analysis of the institution's internal and external factors, namely its geographical location and climatic conditions, the scope of its business activities, and employee clearance. The budget of the plan should not be neglected: there is no point in allocating a significant budget to recover non-critical assets, and the recovery cost of certain assets can exceed their reacquisition cost.

Once the threat landscape of the institution is defined, the security officer can move on to the second phase, the BCP outline and implementation. This part is the heart of the BCP. Procedures such as backup and recovery strategies, emergency response steps, action plan implementation, and preparation of recovery steps, to name a few, are to be described in detail. The BCP outline and implementation should correspond to the first phase of the BCP.

Finally, the BCP has to be tested, verified, and updated regularly to ensure it always matches the business scope and context of the institution.
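The budgeting point above, that recovering an asset is only worthwhile when recovery costs less than simply buying a replacement, can be sketched as a small decision helper. This is an illustrative Python sketch; the asset names and cost figures are hypothetical, not real valuations.

```python
# Recovery-budget sanity check: flag assets whose estimated recovery
# cost exceeds simple reacquisition. All names and figures are
# hypothetical illustrations.

def recovery_decisions(assets):
    """Return a recommendation per asset: 'recover' or 'reacquire'."""
    decisions = {}
    for name, costs in assets.items():
        if costs["recovery_cost"] > costs["reacquisition_cost"]:
            decisions[name] = "reacquire"
        else:
            decisions[name] = "recover"
    return decisions

inventory = {
    "customer_database": {"recovery_cost": 20_000, "reacquisition_cost": 500_000},
    "office_printers":   {"recovery_cost": 4_000,  "reacquisition_cost": 1_500},
}

print(recovery_decisions(inventory))
# {'customer_database': 'recover', 'office_printers': 'reacquire'}
```

The same comparison can be run during the first BCP phase, once the risk analysis has priced each critical asset.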
The risk circumstances can evolve fast in cyberspace. There is no point in following a BCP drafted 10 years ago for a disruptive event of today.
Business Impact Analysis (BIA)
To further elaborate the BCP, it is also imperative to take business impact analysis (BIA) into account. BIA is often confused with BCP, and the two terms are sometimes treated as synonymous. However, BIA is an additional, yet essential, part of a well-developed BCP. BIA takes an after-the-fact perspective to simulate the impact of a security event on the institution: it focuses on the consequences of an undesirable event and helps anticipate the resulting loss and damage. Most of the time, it plays an initial role in paving the path for the risk assessment phase. Before going into the details of the BCP, it is a prerequisite to conduct a BIA, followed by a risk assessment. This step lays the foundation of the BCP and, more importantly, sows the seeds for the recovery actions and procedures.
The core of a BIA is contingency planning. Imagine this scenario: an earthquake destroys the headquarters building of a major global media organization. Its offices, computers, data center, and networks are all buried in ruins. On the one hand, it is indispensable to have a redundancy plan, meaning an identical (or multiple) set of this infrastructure in an alternative location. With a ready replacement, staff can operate immediately, and the redundancy strategy would contribute greatly to the immediate recovery of the organization's services, minimizing downtime as much as possible. On the other hand, it is reasonable practice to prepare "beyond the worst" by having additional resources and personnel ready for unanticipated impact. In summary, BIA helps contextualize the BCP, and its findings can be reflected directly in the redundancy and contingency planning at the core of the BCP.
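The value of redundancy in the scenario above can be made concrete with a minimal BIA-style loss estimate, where projected loss grows with hours of downtime. The revenue figure and downtime durations below are illustrative assumptions, not data from any real incident.

```python
# Minimal BIA-style impact estimate: direct financial loss scales with
# hours of downtime. All figures are illustrative assumptions.

def downtime_loss(hours_down, revenue_per_hour, recovery_cost_per_hour=0.0):
    """Estimate the direct financial impact of an outage."""
    return hours_down * (revenue_per_hour + recovery_cost_per_hour)

# A redundant site cutting downtime from 72 hours to 2 hours:
loss_no_redundancy = downtime_loss(72, revenue_per_hour=10_000)
loss_with_redundancy = downtime_loss(2, revenue_per_hour=10_000)

print(loss_no_redundancy - loss_with_redundancy)  # savings: 700000
```

Comparing that saving against the annual cost of maintaining the redundant infrastructure is exactly the kind of trade-off a BIA feeds into the BCP.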
BCP and BIA provide many essential and background concepts to integrate business continuity into the management level of an institution. As discussed, the interconnected nature of business activities today makes information technology hardware highly strategic in the BCP. Therefore, understanding the technical aspects of preparing these critical hardware elements is necessary to deliver a successful planning initiative.
Fault tolerance is the first and foremost term to learn when dealing with critical hardware failure. It refers to the ability of a computer or network system to continue its operations and services without interruption even when one or several of its components fail as a result of a security or catastrophic event. Beyond hardware, fault tolerance also applies to errors and issues that occur in software and the logic layer. This is particularly relevant in today's business environment, when many institutions operate primarily in software-based and virtual environments; the impact of a physical disaster affecting the hardware might be no worse than a cyberattack disrupting the software. Fault-tolerant technologies are usually embedded in various parts of the hardware, namely the motherboard, processor, or network cabling, to name a few. They automatically detect failures so as to maximize uptime and reduce downtime of the system service.

Fault tolerance is the foundation for a series of concrete measures. For example, a redundant array of independent disks (RAID) stores data redundantly across multiple hard disks to prevent it from becoming inaccessible when a single disk fails. RAID uses disk mirroring or striping to distribute data across the disks in the array. There are standard industry levels ranging from RAID 0 to RAID 6, nested combinations of two levels (such as RAID 10), and non-standard variants such as RAID 7, Adaptive RAID, and RAID S to cater to the needs of an institution. Another example is clustering (distributed computing) and load balancing. These methods improve the efficiency of a task by dividing it among multiple computers, devices, or servers. In other words, they can also serve as a precaution or redundancy plan, because the failure of one piece of equipment cannot paralyze the entire chain of computing devices, and the cost of replacing or restoring the particular failing item is limited. This echoes the fault tolerance perspective of minimizing the impact of a disruptive event.
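The load-balancing idea described above can be sketched in a few lines: requests rotate across a pool of servers, and a failed server is simply skipped, so one outage does not interrupt service. This is a simplified illustration (real load balancers use health checks, weights, and connection draining); the server names are hypothetical.

```python
# Sketch of fault-tolerant round-robin load balancing: requests are
# spread across servers, and a server marked as failed is skipped so
# a single outage does not interrupt service. Names are hypothetical.

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)
        self._i = 0

    def mark_down(self, server):
        """Record that a server has failed (e.g., hardware fault)."""
        self.healthy.discard(server)

    def next_server(self):
        """Return the next healthy server, skipping failed ones."""
        for _ in range(len(self.servers)):
            server = self.servers[self._i % len(self.servers)]
            self._i += 1
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
lb.mark_down("node-b")                       # simulate a hardware failure
print([lb.next_server() for _ in range(4)])  # ['node-a', 'node-c', 'node-a', 'node-c']
```

The key property is that losing node-b degrades capacity but never availability, which is the fault tolerance goal in miniature.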
In addition, regardless of how comprehensive its fault-tolerant technologies are, an institution should not neglect to acquire a secondary or backup power supply in preparation for a disruptive event. After all, a stable power supply underlies every recovery option: none of the above-mentioned measures would work without electricity. A backup power supply can make a decisive difference in the effectiveness of the BCP.
Disaster Recovery Plan (DRP)
A disaster recovery plan (DRP) can either be folded into a BCP or developed independently. The major difference between the two plans is that the BCP prepares the institution prior to a disruptive event, while the DRP guides recovery from such an event: the former is precautionary, whereas the latter is reactionary. The DRP specifies the procedures, steps, and execution methods corresponding to simulated scenarios outlined in the BCP; for instance, losing a major building that contains all the critical IT infrastructure and devices. The DRP for such a scenario must define which functions and operations critical to the institution are to be restored, as well as the time needed in the recovery process.

Furthermore, since the DRP provides post-event recovery actions, it has to include thoughtful backup plans. In this backup section, institutions have to ask themselves questions like: what data and digital assets should have a backup copy; how often should backups be made; and, most important, where (physically) should the backups be stored? The criticality and priority of the data, as well as the cost of conducting regular backups, should be taken into account.

One particular point about delivering a successful DRP is evaluating the cold, warm, or hot nature of the backup sites. Suppose the usual operating site is deprived of normal functioning for a certain period of time; the institution has to continue and recover its operations at another site. The terms cold, warm, and hot refer to the level of investment the institution is willing to commit. A cold site requires the least preparation: it can be a minimally configured space in which the institution rebuilds its capacities. A warm site is a reasonably prepared place where critical equipment is installed and sufficiently updated data sets are available for the institution to restore normal operations at an acceptable pace.
A hot site is a mirrored backup location where the institution can restart operations immediately, with everything kept up to date and maintained. The difference between these three levels of backup sites is cost. Depending on the value of the business and its assets, institutions can select the appropriate option in their DRP.
It is worth mentioning that the institution should be clear about its main objectives in the recovery process in case of a disruptive event. For example, it is not realistic to set a goal such as 100% recovery the day after a disruptive event strikes a smart vehicle production base, even if the institution has a hot backup site. Institutions should set progressive objectives, meaningful to their own business, to work toward in recovering from the unfortunate event. Like the BCP, the DRP has to be tested, verified, and updated in line with the evolution of business activities and the threat environment.
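The cold/warm/hot trade-off described above is essentially a choice of the cheapest site tier that still meets the institution's recovery time objective. The sketch below illustrates that selection; the recovery times and relative costs are illustrative assumptions, not industry benchmarks.

```python
# Sketch of choosing the cheapest backup-site tier that meets a
# recovery time objective (RTO). Times and costs are illustrative
# assumptions, not industry figures.

SITE_TIERS = [
    # (tier, approx. recovery time in hours, relative annual cost)
    ("cold", 168, 1),   # empty space; rebuild everything from scratch
    ("warm", 24, 5),    # equipment installed, data reasonably current
    ("hot", 1, 20),     # mirrored site, near-immediate switchover
]

def cheapest_site_for_rto(rto_hours):
    """Pick the lowest-cost tier whose recovery time meets the RTO."""
    candidates = [t for t in SITE_TIERS if t[1] <= rto_hours]
    if not candidates:
        raise ValueError("no tier meets this RTO")
    return min(candidates, key=lambda t: t[2])[0]

print(cheapest_site_for_rto(48))  # 'warm'
print(cheapest_site_for_rto(2))   # 'hot'
```

Framing the DRP objectives this way makes the cost conversation with management explicit: tightening the RTO from 48 hours to 2 hours changes the answer from a warm site to a hot one, at a multiple of the cost.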
The six domains covered by the Security+ certification go beyond the expertise of a technically trained professional. The complexity of today's threat landscape in cyberspace demands that professionals adopt a holistic view in preparing their institution for the worst. The business continuity perspective undoubtedly fills a gap in the skill set of technical experts when managing the challenges of a disruptive event. The three modules, BCP, BIA, and DRP, help institutions outline and evaluate their options in an undesirable situation. Candidates for the Security+ certification can greatly enrich their knowledge by recognizing the importance of these business continuity practices.