Avoid Disaster with Monitoring and Logging
If you have ever been at the epicenter of a security breach involving online web apps and services, you know just how important your system logs can be as you try to piece together exactly what went wrong. All too often, we find in the aftermath of a catastrophe that basic logs were never maintained, allowing valuable clues to slip away. Let’s look at some basics and see how you, too, can cover yourself in the event of a breach or security failure.
The OWASP Top 10 list for 2017 found that insufficient logging and monitoring was a rising cause for concern among security professionals. Many attacks on web apps, and the security breaches that follow them, can be prevented altogether if log files and security-sensitive data are properly analyzed. To determine whether a specific application is vulnerable, several areas need to be investigated: attack vectors need to be identified, security weaknesses must be evaluated, and potential impacts should be planned for and, where possible, mitigated.
Threat Agents and Attack Vectors
A potential web application breach can be detected and dealt with if logging and monitoring are in place, so that system administrators and product developers can keep a watchful eye out for anything out of the ordinary. Nearly every major web app breach is made possible, or made far worse, by insufficient logging and monitoring: attackers probe applications and systems to determine whether log files are being monitored and whether suspicious activity draws a response.
A highly effective method of testing your current monitoring and logging systems is to examine your logs after a penetration-testing exercise. For this to be successful, you need to know exactly what the tester did, and when. This lets you cross-reference those timestamps against your logs and monitoring solutions; anything that was missed or identified incorrectly in your logs tells you which areas need to be refined and improved.
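As a sketch of this cross-referencing step, the snippet below filters log entries down to the tester's reported time window. It assumes nothing about your real log pipeline; the entries, field names, and timestamps are illustrative stand-ins for whatever your centralized log store actually holds.

```python
from datetime import datetime, timezone

# Illustrative log entries; in practice these would be parsed from
# your centralized log store.
log_entries = [
    {"ts": "2017-06-01T09:58:12Z", "event": "login_ok", "user": "alice"},
    {"ts": "2017-06-01T10:03:45Z", "event": "login_failed", "user": "admin"},
    {"ts": "2017-06-01T10:04:02Z", "event": "input_validation_failed", "user": "admin"},
    {"ts": "2017-06-01T11:30:00Z", "event": "login_ok", "user": "bob"},
]

def parse_ts(value):
    """Parse an ISO-8601 UTC timestamp such as 2017-06-01T10:03:45Z."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def events_in_window(entries, start, end):
    """Return the log entries whose timestamp falls inside the pentest window."""
    return [e for e in entries if start <= parse_ts(e["ts"]) <= end]

# The tester reports probing between 10:00 and 10:30 UTC. Any probe
# activity that does NOT show up in this slice points at a logging gap.
window_start = parse_ts("2017-06-01T10:00:00Z")
window_end = parse_ts("2017-06-01T10:30:00Z")
suspicious = events_in_window(log_entries, window_start, window_end)
```

Comparing `suspicious` against the tester's own activity report shows at a glance which probes your logging captured and which slipped through.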
As previously stated, most successful attacks begin with a series of vulnerability probes and scans. If such activity is not detected and halted early, a successful attack is all but guaranteed to be the end result. IBM reported that in 2016 it took an average of 191 days for such attack-staging activity to be detected.
Application and System Vulnerability
An application is deemed potentially vulnerable if logging, auditing, detection, and monitoring are insufficient, or if incidents draw no effective response. Areas where a lack of logging is dangerous to your environment include:
- Auditable events, such as user logins, failed login attempts, high-value transactions, and system-level changes, that are not logged.
- Warnings and errors that are not logged, or that produce no meaningful output. Ambiguous, unclear log messages are no help to system administrators and developers.
- Important logs of critical applications and APIs that are overlooked when auditing and system logging are set up, meaning entire systems can run without ever being monitored, leaving critical IT systems wide open to security breaches.
- Logs stored only locally, which gives intruders an opportunity to edit them and remove any evidence of their activity in your environment.
- Alerts and responses that are not implemented correctly, or at all.
- Applications that have no detection, logging, or alerting built into them, meaning that attacks go unnoticed by system administrators.
It is also worth noting that logging and alerts visible to users or attackers expose you to data leakage, so your reporting, logging, and alerting systems need to be kept private and secure.
Here is a brief outline of some preventive steps that can be taken to ensure that the data that is stored and processed by the application is kept safe.
- Ensure that all logins, access control failures, and server-side input validation failures are logged in a readable, meaningful way. This will help administrators or users identify any suspicious, malicious, or unusual accounts. Retain these records long enough to allow delayed forensic analysis if a review becomes necessary.
- Logs must be written in an organized, human-readable format that is contextually relevant to the people who review and analyze them. They must be centralized and accessible only to the users or staff members authorized to see them.
- High-value transactions, such as financial or legal activity, must have an audit trail with data integrity measures that alert you to any tampering, deletion, or modification of the data. An example of this is an append-only series of tables within a database.
- Monitoring and alerting systems must be managed properly, so that suspicious activity is detected and met with the appropriate level of response within an acceptable timeframe.
- Establish or adopt an incident response and recovery plan, such as NIST SP 800-61 Rev. 2.
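To make the first two points concrete, here is a minimal sketch of structured security logging using Python's standard `logging` module. The one-JSON-object-per-line output is both human-readable and easy for a centralized log collector to consume; the `security` logger name and the field names are illustrative choices, not a standard.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line: machine-parseable, still readable."""
    def format(self, record):
        payload = {
            "time": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "event": record.getMessage(),
        }
        # Attach structured context passed via logging's `extra` argument.
        for key in ("user", "source_ip"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

def build_security_logger(stream):
    """Build a dedicated security logger. `stream` stands in for a
    centralized sink (a syslog socket, a SIEM forwarder, etc.)."""
    logger = logging.getLogger("security")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    return logger

# Example: record a failed login with structured context.
security_log = build_security_logger(sys.stdout)
security_log.warning("failed_login",
                     extra={"user": "admin", "source_ip": "203.0.113.5"})
```

Because each line is a self-contained JSON object with consistent keys, failed logins and access control failures can be filtered, counted, and alerted on without fragile text parsing.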
Examples of Attack Scenarios
OWASP has outlined some attack scenarios that reinforce how important it is for companies to institute proper monitoring, logging and threat response plans when dealing with cybersecurity.
An open-source project’s forum software, run by a small team of developers, is hacked by exploiting a flaw in the forum software. Through this flaw, the attackers maliciously wipe out the internal source code repository containing the next release of the project, as well as all of the forum content accrued over the years. Although some source code can be recovered, the lack of monitoring, logging, and alerting turns the incident into a catastrophic failure: the project is abandoned and development work ceases.
An attacker uses scanning tools to find accounts protected by a simple, common password, and is able to take over every account within the organization that uses that password. For all other accounts, the scan leaves behind only a single failed login attempt, so the damage is easy to miss. Because there is no active monitoring or alerting in place, the network administrators never learn that the scan succeeded, leaving the attacker free to return at a later time to infiltrate the system further.
A major U.S.-based retailer has an internal malware-analysis sandbox that analyzes email attachments. The sandbox correctly identifies the threats, but nobody actively monitors its alerts. The breach is eventually uncovered when an external bank flags fraudulent card transactions, casting the retailer in a bad light publicly: it had the necessary tools and systems in place, but the perception is that there was no will to act on them.
Insufficient logging and monitoring can destroy a thriving business, software project, or government department. It is for this reason that proper tools to collect, analyze, and respond to such threats are of the utmost importance to proactive cybersecurity professionals. The Infosec Institute offers a wide variety of courses that will help you and your team grapple with the ever-changing landscape of cybersecurity, and will ensure that you are able to design and implement systems that empower your team to track, detect, and react to malicious and suspicious activity in your environment.
Sources
- OWASP Proactive Controls: Implement Logging and Intrusion Detection
- OWASP Application Security Verification Standard: V8 Logging and Monitoring
- OWASP Testing Guide: Testing for Detailed Error Code
- OWASP Cheat Sheet: Logging