Logging and Monitoring: What You Need to Know for the CISSP
Information technology drives productivity and growth in almost every industry today. A single computing device can hold more confidential information than thousands of hardcopy documents. Institutions and individuals increasingly rely on computers, smartphones, and servers in their professional and daily lives. Modern telecommunication technology allows these devices to exchange information instantly over the Internet, with terabytes of data traveling across millions of computing devices every second around the globe. This speed and convenience rest on a complex architecture of networks, servers, and devices, so a piece of data passes through many media before reaching its destination. Along this journey, thousands of cyber-attacks target the vulnerabilities of the various communication channels. Moreover, the information stored on hardware is often more profitable to steal than the hardware itself, and intercepted or stolen data can be exploited in many ways: identity theft, social engineering, and log forging/injection are popular examples. Protecting institutions and individuals from these threats is a pressing challenge for cyber-security professionals.
The universal adoption of computing devices across industries generates a massive quantity of data, and the data generation process itself creates an equally vast number of data log files, also known as event logs. Monitoring and analyzing log files is as important as managing all the other audio, video, and text files. Information security professionals may neglect log file monitoring for two reasons: a lack of resources and a lack of expertise. Cyber-security is a multidisciplinary field that demands solid competence across several domains, yet many information technology professionals have specialized in one specific area since the beginning of their careers. It is therefore not uncommon for a network security engineer to be unfamiliar with database management. As the challenges in cyberspace grow more interconnected, security professionals have to enrich their experience and expand their skills into other security branches. The Certified Information Systems Security Professional (CISSP) certification helps security professionals acquire and master the necessary competencies. Logging and monitoring is one of the key topics among the eight domains covered by the CISSP. Understanding the various approaches to managing and monitoring log files is an indispensable step toward becoming a well-rounded security professional.
An Overview of Log Files
A log file, also known as an event log, is an automatic record of the operations a computing device and its user perform, such as file creation/modification times, user access, and configuration changes, to name a few. Log files contain critical information for organizations. Popular examples include authentication logs, audit logs, system logs, and intrusion detection system (IDS) and intrusion prevention system (IPS) logs. They serve as a detailed record of security events that can be used to reconstruct the sequence of a network or system intrusion: the intrusion time, the compromised servers, and the attack pattern can all be traced and analyzed through these files. Just as significantly, log files matter for compliance. Cyber-security guidelines and requirements are now a high priority for many nations, and in many industries adhering to them is a prerequisite; doing so also helps organizations establish their internal cyber-security management frameworks. In the event of a security incident, log files serve as a primary source of evidence for the investigation. Effective monitoring can additionally surface previously unknown system or application bugs. The significance of log files should therefore not be underestimated.
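To make this concrete, consider how an investigator would pull timeline fields out of a single authentication log entry. The log line, hostnames, and field layout below are hypothetical examples in the common syslog style, sketched here for illustration:

```python
import re

# A hypothetical sshd authentication log line in syslog format.
line = "Jan 12 03:14:07 web01 sshd[4321]: Failed password for root from 203.0.113.5 port 52814 ssh2"

# Extract the fields a timeline reconstruction relies on:
# timestamp, host, process, and the event message itself.
pattern = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s"
    r"(?P<host>\S+)\s"
    r"(?P<process>\w+)\[(?P<pid>\d+)\]:\s"
    r"(?P<message>.*)$"
)

match = pattern.match(line)
if match:
    event = match.groupdict()
    print(event["host"])     # web01
    print(event["process"])  # sshd
```

Even this minimal parse shows how a flat text record becomes structured evidence: intrusion time, affected server, and the action taken.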
There are several key aspects of log management. The first is the sheer quantity of log files. Because they are generated automatically during the operations of software applications and computer systems, the volume can be enormous; web servers, computing devices, and applications all produce them. It is important to define log management policies adapted to the different sources and types of log files. In addition, since log files are usually the foundation of monitoring and analysis, malicious attackers can launch log forging and injection attacks to mislead the log administrator. The extensive quantity of log files also raises the question of the data lifecycle: allocating appropriate resources to collect, store, and archive log files poses a series of management questions for organizations. Managing such diverse sources and high volumes of data can be a demanding task.
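Log forging typically works by smuggling newline characters into user-controlled input so that one request appears as several independent log entries. A minimal defensive sketch (the forged username below is an illustrative assumption) is to neutralize those characters before writing the entry:

```python
def sanitize_for_log(value: str) -> str:
    """Neutralize characters an attacker could use to forge extra log lines.

    Carriage returns and newlines in user-controlled input are the classic
    vehicle for log injection: they let a single request write what looks
    like several independent, legitimate log entries.
    """
    return value.replace("\r", "\\r").replace("\n", "\\n")

# A forged username attempting to inject a fake "admin login OK" entry.
malicious = "alice\nJan 12 03:14:08 web01 app: admin login OK"
print("login failed for user: " + sanitize_for_log(malicious))
# The injected content now stays on a single, visibly escaped log line.
```

The same idea generalizes: treat anything that ends up in a log as untrusted input, so that the records analysts later rely on have not been shaped by the attacker.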
A successful log monitoring strategy demands both human intervention and machine automation. On the one hand, the security administrator plays a crucial role in setting up rules and policies for the security event management (SEM) system, as well as internal workflows and user access privileges. Automated responses and alerts can be created for anomalies detected on firewalls, routers, IDSs, and IPSs, allowing the SEM to considerably alleviate the workload of security officers. On the other hand, because most log files are routine records of system and software operations, a well-configured SEM can categorize and filter out the majority of conventional events, letting security officers concentrate on suspicious and irregular entries. The following perspectives offer further guidelines on managing log files:
A successful log management scheme begins with evaluating the operations of the institution. It is imperative to define the criticality of each system: which aspects of the operation, such as payment, communication, or the customer and supplier databases, are of utmost, secondary, or least importance. With this established, the security officer can set conditions that facilitate follow-up actions like incident response, data extraction, and analysis. A second underlying principle in log review is user access privilege. For example, the security officer can monitor system irregularities against clearance principles such as need-to-know and the principle of least privilege (PoLP). Having reviewed system criticality and user access privileges, the security officer can decide many parameters of log review, such as the review interval and frequency, user authorization and authentication, log removal records, and log audits, to name a few. If products from multiple vendors are involved in the operation, the security officer can also unify the log file format to ensure the interoperability of data extraction, combination, and analytics tasks.
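Unifying formats across vendors usually means mapping each vendor's field names onto one shared schema before analysis. The two vendor record shapes and the unified field names below are illustrative assumptions, not any real product's format:

```python
import json

# Two hypothetical vendor formats describing the same kind of event.
vendor_a = {"ts": "2024-01-12T03:14:07Z", "src": "203.0.113.5", "msg": "denied"}
vendor_b = {"time": "2024-01-12T03:14:09Z", "client_ip": "198.51.100.7", "event": "denied"}

def normalize(record: dict) -> dict:
    """Map differing vendor field names onto one shared schema."""
    return {
        "timestamp": record.get("ts") or record.get("time"),
        "source_ip": record.get("src") or record.get("client_ip"),
        "message": record.get("msg") or record.get("event"),
    }

unified = [normalize(r) for r in (vendor_a, vendor_b)]
print(json.dumps(unified, indent=2))
```

Once every source speaks the same schema, extraction, combination, and analytics tools can operate on the whole log population rather than on one vendor's silo at a time.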
Monitoring logs is a detection process rather than a prevention process. Establishing a monitoring mechanism ensures an institution's ability to withstand and recover from system intrusions. Hence, institutions should develop an exhaustive understanding of their operating systems, networks, computing devices, and personnel in order to optimize log monitoring.
The different aspects of log management reflect a complex data life cycle: log files pass through generation, collection, examination, storage, archiving, and deletion, and the purpose of log analysis differs in each phase. For instance, one major consideration in collecting and keeping log files is satisfying industry compliance requirements, security policies, or internal regulations. These policies usually require periodic log audits to examine risks to the institution, notably intrusions, violations, system dysfunctions, and abuses of user privilege. Log auditors can work with computer forensic experts to analyze suspicious log files showing signs of unauthorized access, data removal, or irregular logging times.
As discussed, the quantity of log files is vast. Creating a structured framework to prioritize and select those worth analyzing in depth can conserve a great deal of resources. A centralized SEM is therefore necessary to coordinate log management, particularly for a large institution with multiple log monitoring systems that demand adequate storage space. In addition, detecting anomalies is a sophisticated task: it requires assembling and matching user authorizations across the different networks and computing devices of the institution. Beyond the sheer quantity of log files, the security officer should also pay attention to the transfer process when sending data to the centralized SEM; over unencrypted channels, the delivery process is exposed to log forging and injection attacks. Institutions may also need to analyze log files to understand user behavior, both external and internal, in order to develop management strategies.
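Protecting that transfer usually means wrapping the forwarder's channel in TLS. The sketch below shows one way to harden a log-forwarding client's TLS settings using Python's standard `ssl` module; the SEM hostname and the use of port 6514 (syslog over TLS) are illustrative assumptions:

```python
import ssl

# Harden a hypothetical log forwarder's channel to a central SEM.
context = ssl.create_default_context()             # system CA trust store
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
context.check_hostname = True                      # verify the SEM's identity
context.verify_mode = ssl.CERT_REQUIRED            # reject unauthenticated peers

print(context.verify_mode == ssl.CERT_REQUIRED)    # True

# A real forwarder would then wrap its TCP socket, e.g.:
# with socket.create_connection(("sem.example.internal", 6514)) as sock:
#     with context.wrap_socket(sock, server_hostname="sem.example.internal") as tls:
#         tls.sendall(log_line.encode())
```

With certificate verification and a modern TLS floor enforced, an attacker on the network path can no longer quietly rewrite or inject log records in transit.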
Continuous and Egress Log Monitoring
Another primary concern in log management is the monitoring process itself. Although the great majority of log files generated during software and system operations are routine documentation, underestimating the remaining minority of potentially harmful entries can cause a great deal of damage; such negligence might compromise the entire system. As the threat landscape evolves quickly, new attack techniques and cyber-camouflage can exploit newly discovered vulnerabilities before the security officer notices, and an institution's log monitoring approach can become obsolete fast. Continuous monitoring offers a paradigm that helps institutions evolve their log management policies in step with the latest known attacks. The underlying approach comprises four steps: discover, analyze, tune, and report. The security officer should be able to update the log monitoring policies through these steps, tuning and enhancing the entire log management strategy each time a problem is identified in the system.
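The four-step cycle can be sketched as a simple loop. Everything below is a toy model, with a hypothetical event feed and a single severity-based rule standing in for a real SEM configuration:

```python
# A minimal sketch of the discover -> analyze -> tune -> report cycle.

def discover(feed):
    """Collect the events newly generated since the last cycle."""
    return list(feed)

def analyze(events, rules):
    """Flag any event that matches a current detection rule."""
    return [e for e in events if any(rule(e) for rule in rules)]

def tune(rules, findings):
    """Adjust the rule set when findings reveal a gap (a no-op placeholder here)."""
    return rules

def report(findings):
    """Summarize the cycle for the security officer."""
    return f"{len(findings)} suspicious event(s) this cycle"

rules = [lambda e: e.get("severity", 0) >= 7]     # hypothetical severity rule
feed = [{"severity": 3}, {"severity": 9}]         # hypothetical event feed

events = discover(feed)
findings = analyze(events, rules)
rules = tune(rules, findings)
print(report(findings))  # 1 suspicious event(s) this cycle
```

The point of the loop is that `tune` feeds what `analyze` and `report` reveal back into the rule set, so the monitoring policy evolves with each identified problem instead of staying static.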
Apart from continuously monitoring log files, egress filtering is an equally important practice in log management. Egress filtering means controlling outbound data: the institution's firewalls and routers act as gatekeepers for both incoming external and outgoing internal traffic. Egress filtering generally serves two elementary purposes. First, it can prevent internal networking devices from visiting untrustworthy web content or sending malware on behalf of the organization; the firewalls and routers stop the operation to prevent deliberate or accidental insider risk. Second, egress monitoring can be configured to impose controls on internal computers, limiting the ability of selected users to perform actions such as visiting external websites or sending sensitive information to external networks. In doing so, the log administrator restricts access privileges to a narrower user pool and thus makes the most of the institution's monitoring resources.
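A toy model makes the decision logic concrete: outbound traffic is permitted only to an approved destination list, and every denial is itself logged for review. The destination names and the allowlist policy are illustrative assumptions:

```python
# A toy egress-filtering decision: allow outbound traffic only to approved
# destinations, and log each denial so it can be reviewed later.

ALLOWED_DESTINATIONS = {"updates.example.com", "partner-api.example.net"}

def egress_decision(user: str, destination: str) -> str:
    if destination in ALLOWED_DESTINATIONS:
        return "ALLOW"
    # Denied attempts are exactly the events worth monitoring closely:
    # they may indicate malware beaconing out or deliberate data exfiltration.
    print(f"egress-deny user={user} dest={destination}")
    return "DENY"

print(egress_decision("alice", "updates.example.com"))   # ALLOW
print(egress_decision("bob", "paste-site.example.org"))  # DENY
```

In practice this logic lives in firewall or proxy rules rather than application code, but the shape is the same: a narrow allowlist plus a log trail of everything that was refused.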
Log Protection through Security Information and Event Management (SIEM), Intrusion Detection System (IDS), and Intrusion Prevention System (IPS)
The information contained in a log file might seem elementary at first glance. However, because log files often serve as the primary source of analysis during a security event, their integrity, authenticity, and confidentiality take precedence over all other follow-up actions. As suggested, a successful log management strategy involves both human intervention and machine automation, and defining, as well as constantly adapting and updating, the collection and priority rules for log management tools is a vital responsibility of the security officer. One efficient tool is SIEM, a combination of SEM and security information management (SIM) that helps ensure log files respect these three principles. The former gathers log files into centralized storage with real-time collection; the latter permits a holistic analysis of the collected logs to identify attack patterns and automate appropriate responses and reports in line with relevant compliance requirements and regulations. SIEM is a powerful tool: it converges log data collected from IDSs and IPSs about intrusion attempts and irregular authentication and authorization patterns. Since many IDSs and IPSs are now integrated with firewalls, the log files generated there are of high value for the SIEM and the security officer in anticipating threats and updating the institution's security monitoring policies.
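The value of converging sources lies in cross-source correlation: an IP address that appears in an IDS alert *and* in failed logins deserves more attention than either signal alone. The sketch below is a hypothetical correlation rule of the kind a SIEM automates; the event records and field names are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical normalized events from two sources feeding a SIEM.
ids_alerts = [{"source_ip": "203.0.113.5", "signature": "port-scan"}]
auth_logs = [
    {"source_ip": "203.0.113.5", "result": "failure"},
    {"source_ip": "198.51.100.7", "result": "failure"},
]

def correlate(ids_alerts, auth_logs):
    """Flag IPs that appear in both an IDS alert and a failed login:
    a simple cross-source correlation rule."""
    by_ip = defaultdict(set)
    for alert in ids_alerts:
        by_ip[alert["source_ip"]].add("ids")
    for entry in auth_logs:
        if entry["result"] == "failure":
            by_ip[entry["source_ip"]].add("auth")
    return sorted(ip for ip, sources in by_ip.items() if len(sources) == 2)

print(correlate(ids_alerts, auth_logs))  # ['203.0.113.5']
```

Neither a single port scan nor a single failed login is alarming on its own; the combination, surfaced automatically, is what lets the security officer prioritize.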
SIEM, IDS, and IPS technologies, together with a seasoned team of cyber-security professionals, can be a great asset for institutions. The main commitment institutions have to weigh is the deployment cost: these technologies can be expensive to set up and require a highly specialized workforce to configure, operate, and maintain. Nonetheless, as the threat landscape in cyberspace intensifies rapidly, with threats such as advanced persistent threats (APTs), more and more institutions are adopting SIEM, IDS, and IPS to assist and strengthen the competencies of their security officers.
Log files are the automatic documentation of software applications and computing systems. Because log management requires a rigorous investment of human and machine resources, the significance of log monitoring is easily misjudged, which can in turn affect compliance, incident response, and forensics during security issues. In the CISSP logging and monitoring domain, candidates are required to review the basics of log files, understand their lifecycle and management approaches, and use practical tools to build a comprehensive security scheme for institutions. Despite the importance of managing and monitoring log files, CISSP candidates should also keep in mind the interdisciplinary nature of the other seven training domains.