The OWASP Top 10 list of vulnerabilities has long been a source of data that information security professionals trust when making critical security decisions. There are many places where you can read about these vulnerabilities, especially from OWASP itself.
We thought we would take a slightly different approach and talk about some of the best practices that you can institute when applying the OWASP recommendations. This means a brief description of each problem, followed by some ideas on how to implement a solution using OWASP's own best practice recommendations. Let's get started.
1. Injection
Injection vulnerabilities cover issues and flaws involving SQL, NoSQL, operating system (OS) commands and even the Lightweight Directory Access Protocol (LDAP). An injection vulnerability occurs when an attacker sends malicious data into an application or interpreter as part of seemingly legitimate or harmless user input. Although the data appears legitimate to the application, the command that is executed has unintended and negative consequences for the system.
Luckily, there is a lot that can be done to prevent these kinds of attacks. The main best practice is to institute data validation within your application. This means that user input fields must be tightened up: only the characters that are absolutely necessary should be allowed. This prevents an attacker from slipping a command or query into a text input, because any disallowed character will be validated and rejected, based on your design.
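As a rough sketch of this kind of allow-list validation, the snippet below accepts only the characters a simple username field genuinely needs. The field, pattern and length limits are illustrative assumptions, not a universal rule:

```python
import re

# Allow-list validation: accept only characters the field genuinely needs.
# This pattern is an illustrative assumption for a simple username field.
USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,30}")

def is_valid_username(value: str) -> bool:
    return USERNAME_PATTERN.fullmatch(value) is not None

print(is_valid_username("alice_01"))              # legitimate input passes
print(is_valid_username("alice'; DROP TABLE--"))  # injection attempt is rejected
```

Validation like this complements, but does not replace, defenses at the database layer such as parameterized queries.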
Another important place to strengthen your defenses against injection is your patching. Applications, services and operating systems receive security updates and vulnerability patches on a regular basis, and applying them promptly is one of the best and easiest ways to ensure that your applications and systems are resistant to injection attacks.
2. Broken authentication
Any time an app that needs user authentication is designed, there is a risk of broken authentication being exploited. Session management is often implemented incorrectly, which adds to this risk. When this happens, it is possible for an attacker to access passwords, session tokens and keys.
If authentication is broken or not properly implemented, user impersonation becomes possible, granting an attacker access to sensitive parts of the system.
There are many things that can be done to prevent broken authentication. Multi-factor authentication is one of the most effective methods of preventing unauthorized access via broken authentication exploits. Product owners can also minimize their exposure by ensuring that their applications don’t ship with default credentials and passwords.
It's also important to protect users' password choices by enforcing minimum password requirements such as length and complexity, along with reuse restrictions and rotation.
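A minimal sketch of such a policy check might look like the following. The specific thresholds are assumptions chosen for illustration; align yours with current guidance:

```python
import string

def check_password_policy(password: str, previous_passwords: set) -> list:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < 12:  # illustrative minimum length
        problems.append("must be at least 12 characters long")
    if not any(c.isdigit() for c in password):
        problems.append("must contain at least one digit")
    if not any(c in string.punctuation for c in password):
        problems.append("must contain at least one symbol")
    if password in previous_passwords:
        problems.append("must not reuse a previous password")
    return problems

print(check_password_policy("hunter2", {"hunter2"}))          # three violations
print(check_password_policy("correct-horse-battery-9!", set()))  # []
```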
Other steps include hardening the username and password recovery flows and API pathways against account enumeration attacks by returning the same message for all outcomes. All login failure events need to be logged, and if credential stuffing or brute-force attempts are detected, system administrators need to be alerted. Authentication must be performed server side, which prevents attackers from breaking it on a local system. Session IDs must not be exposed in the URL, as this allows attackers to use session keys as an attack vector.
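The identical-message idea can be sketched as follows. The in-memory user store, fixed salt and plain SHA-256 hashing here are stand-ins for whatever your stack actually uses:

```python
import hashlib
import hmac

_SALT = b"demo-salt"  # hypothetical fixed salt, for illustration only
USERS = {"alice": hashlib.sha256(_SALT + b"correct horse").hexdigest()}

def login(username: str, password: str) -> str:
    stored = USERS.get(username)
    supplied = hashlib.sha256(_SALT + password.encode()).hexdigest()
    # One generic message for both "unknown user" and "wrong password",
    # so the response cannot be used to enumerate valid accounts.
    if stored is None or not hmac.compare_digest(stored, supplied):
        return "Invalid username or password"
    return "Welcome"

print(login("alice", "wrong password"))
print(login("mallory", "anything"))  # identical response for an unknown user
```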
3. Sensitive data exposure
Sensitive data includes financial information, medical data and personally identifiable information (PII). If an attacker steals or modifies this data, it opens the door to fraud and other crimes such as identity theft and impersonation. Without additional safety mechanisms, this sensitive data is at risk.
Encryption, both at rest and in transit, has to be introduced so that the data is kept safe regardless of whether it is moving through the system or sitting in storage.
There is a lot that can be done to minimize the exposure of sensitive data. The first step is data classification. You need to identify what data is processed, stored or transmitted by the application or systems that are running. From there, you need to be able to easily identify which data is deemed sensitive according to the rules and legal requirements in your jurisdiction.
Once you have classified your data according to these standards, you will have to apply controls. Another potential solution is to not store sensitive data in the first place. If the data has to be stored, then controls such as tokenization and truncation, as required by standards like PCI DSS, must be implemented. Data at rest has to be encrypted for added security.
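Truncation, for example, can be as simple as masking all but the last four digits of a card number. This is only a sketch of the idea; real PCI DSS compliance involves far more than a masking function:

```python
def truncate_pan(pan: str) -> str:
    """Mask a primary account number so only the last four digits remain."""
    return "*" * (len(pan) - 4) + pan[-4:]

print(truncate_pan("4111111111111111"))  # ************1111
```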
Basic steps such as keeping algorithms and protocols up to date are very important, as is proper key management for encryption.
For added protection, all data in transit must be encrypted with secure protocols such as TLS, along with Perfect Forward Secrecy (PFS) ciphers. Enforcing encryption through HSTS is also an option. Disable the caching of sensitive data and store passwords only as salted hashes. Once all of this is in place, verify all of your security measures independently to ensure proper implementation.
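For the salted-hash recommendation, a minimal sketch using the standard library's PBKDF2 might look like this. The iteration count below is illustrative; tune it to current guidance and your hardware:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; set per current guidance for your hardware

def hash_password(password: str):
    """Hash a password with a per-user random salt using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret passphrase")
print(verify_password("s3cret passphrase", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))        # False
```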
4. XML external entities (XXE)
Older XML processors evaluate external entity references inside the XML documents themselves, and badly configured XML processors exhibit the same behavior. This is a problem because external entities can be exploited to reveal internal file contents via a URI handler. Attackers can also exploit internal file shares, perform internal and remote port scanning, achieve remote code execution and launch denial-of-service (DoS) attacks.
The best way to prevent this vulnerability from being exploited is to disable document type definitions (DTDs) entirely, which also prevents DoS attacks from being leveraged. If you can't disable DTDs outright, then external entities and external document type declarations have to be disabled in each parser that you expect to access the system.
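In Python's standard library, for instance, the SAX parser can be told not to load external general entities. This is one parser-specific way to close the XXE vector; other parsers have their own switches:

```python
import xml.sax
from xml.sax.handler import feature_external_ges

# Configure the parser so it will not fetch external general entities,
# closing the classic XXE file-disclosure vector for this parser.
parser = xml.sax.make_parser()
parser.setFeature(feature_external_ges, False)

print(parser.getFeature(feature_external_ges))  # False
```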
5. Broken access control
Often, the restrictions that curb what actions authenticated users are authorized to perform are not enforced. If this is the case, an attacker can execute commands that they should not have access to.
This is accomplished either by elevating the access rights of a standard user or simply by executing commands from another user's account. When this is achieved, an attacker is able to access other users' accounts, files and sensitive data; copy, edit and delete other users' data; alter access rights; and cause damage to systems in general.
The primary defense against this vulnerability is to deny access by default. This means that any action a user can perform needs to be locked down and inaccessible unless access has been explicitly granted.
Access control mechanisms must be implemented once and reused across the application, and CORS usage should be minimized. Model access controls need to enforce record ownership, rather than allowing users to create, read, update or delete any record.
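Record ownership enforcement can be sketched as a deny-by-default lookup. The in-memory store and names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    owner: str
    body: str

# Hypothetical in-memory record store.
RECORDS = {1: Record(1, "alice", "alice's private note")}

def get_record(record_id: int, current_user: str) -> Record:
    record = RECORDS.get(record_id)
    # Deny by default: the record must exist AND belong to the caller.
    if record is None or record.owner != current_user:
        raise PermissionError("access denied")
    return record

print(get_record(1, "alice").body)  # the owner can read their own record
```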
Domain models need to enforce unique business limit requirements, and web server directory listing must be disabled. Also ensure that file metadata and backup files are not present inside web roots.
Logging access control failures and detecting repeated failures are important, as they allow proactive controls to be implemented before a warning becomes a bigger issue or security event. Automated attack tools can be blunted by instituting rate limits on APIs and controller access, making automated attacks much less effective. Finally, JWTs need to be invalidated on the server when a logout occurs.
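The rate-limiting point can be illustrated with a simple fixed-window counter. This is a sketch only; production systems usually use a shared store and more robust algorithms:

```python
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self._state = defaultdict(lambda: [0.0, 0])  # client -> [window_start, count]

    def allow(self, client: str, now: float) -> bool:
        start, count = self._state[client]
        if now - start >= self.window:   # new window: reset the counter
            self._state[client] = [now, 1]
            return True
        if count < self.limit:           # still within this window's budget
            self._state[client][1] = count + 1
            return True
        return False                     # over budget: reject the request

limiter = FixedWindowLimiter(limit=3, window=60.0)
print([limiter.allow("10.0.0.5", now=t) for t in (0.0, 1.0, 2.0, 3.0)])
# [True, True, True, False]
```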
6. Security misconfiguration
Security misconfiguration is possibly one of the most commonly seen issues that opens systems up to attacks. Easily avoidable issues such as insecure default configurations, incomplete or ad hoc configurations and open cloud storage solutions are often contributing factors in a security breach. Other common issues include misconfigured HTTP headers and verbose errors containing sensitive and private data.
In order to ensure that security steps are properly instituted, all systems must be securely configured, patched and upgraded.
Creating a repeatable hardening process that is fast and easy to deploy is essential when deploying a new environment. Uniformity is important, which means that dev, QA and production environments need to be configured identically, but with different credentials used for each of them.
The process needs to be automated to allow for effortless deployments and system rollouts. The platforms that are used must be equipped with only the features that are absolutely necessary, as this minimizes the attack surface that can be leveraged by an attacker.
Remove unused services, frameworks and features to avoid providing further attack surface to an attacker. Administrators need to stay up to date with all patch releases, security notes and updates. This all forms part of the patch management process.
A segmented application architecture is one of the strongest defenses against security misconfiguration. If you are responsible for client systems, then you need to ensure that security directives are sent out regularly.
Finally, automated processes must be implemented that verify the effectiveness of each environment's configuration. This accelerates the process and allows for testing in a standardized and uniform manner.
We have taken a look at six of the top OWASP vulnerabilities as they currently stand. For each of the vulnerabilities listed, we also looked at some of the ways to mitigate and prevent the dangers each one presents.
These best practices offer a practical guide to follow when assessing your own status against the OWASP vulnerabilities currently affecting systems globally. By following these simple steps, you too can harden your systems and prevent potential issues within your own network.