Part of doing business in today’s increasingly cyber world is dealing with the security vulnerabilities and bugs that come up along the way. Many organizations first learn about a vulnerability or bug by receiving a report from a security researcher. If you have not received one of these reports yet, or if your organization has a less-than-desirable security profile, you may not know exactly how to handle one.
This article will explore how to handle security vulnerability and bug reports by detailing what to do when you first get the report, how to verify the report’s findings, how to respond to the report and best practices for what to do after you receive the report.
Before moving on to any tips or best practices, the first thing to learn is to relax when you first receive a report. Do not take it as an indicator that you are necessarily doing anything wrong, but rather that (depending on the report) something has simply surfaced. The important thing at this point is reacting appropriately, which will be explored below.
Security vulnerability disclosure policy
The security vulnerability disclosure policy is the guideline an organization uses to establish who receives reports, who verifies them and who holds other responsibilities with regard to vulnerabilities. One of the most important pieces of information it contains is who security vulnerabilities and bugs should be reported to. The fact of the matter is that it can be hard to determine who to contact at an organization, let alone to get hold of that individual once you have identified them.
Organizations have been publishing their security vulnerability disclosure policies online, and the top item such a policy normally lists is the email address to which security vulnerability warnings, bug warnings and reports should be sent. The policy should also state how findings should be reported to the organization, who will verify and/or investigate the report’s findings, next steps and any other important information regarding how security vulnerability reports will be responded to.
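One common way to publish the contact point described above is a `security.txt` file (RFC 9116), served from the `/.well-known/` path of the organization’s website. The sketch below shows the two required fields plus a few optional ones; all values here are placeholders, not real addresses or URLs.

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00.000Z
Policy: https://example.com/security-disclosure-policy
Preferred-Languages: en
```

`Contact` and `Expires` are the two fields RFC 9116 requires; `Policy` can point at the full disclosure policy discussed in this section.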
Quick and efficient access to the right person can be the determining factor in whether an organization responds in time to a vulnerability or bug. One such example occurred in the city of Florence, Alabama. Information security journalist Brian Krebs (of KrebsOnSecurity) alerted the city in late May 2020 that its IT systems had been breached.
After getting a phone-call runaround, Krebs eventually reached the systems administrator of the Florence Police Department, who thanked him for the warning. However, the attackers struck on June 5, and the city’s data was held for ransom by DoppelPaymer, a particularly aggressive strain of ransomware. Twelve days after being alerted, the city fell victim to an attack that a simple, clearly stated security vulnerability disclosure policy, identifying who to report security vulnerabilities and bugs to, might have helped prevent.
Who should verify the findings of the report?
While every organization is different, the findings of the report should be investigated by a trusted information security professional. This may be a member of the security team, the Chief Information Officer, a trusted system administrator or anyone else the organization trusts with this process. This individual should be named in the security vulnerability disclosure policy; if they are not yet known when the policy is written, the policy can instead reference the department they belong to. Next steps in the investigation should be included in the policy, as well as a time frame for root cause identification and resolution.
Who should respond to the security vulnerability report?
This question is a little harder to answer because it really comes down to what is contained in the report. If the report contains a false positive, or identifies a vulnerability that has already been resolved (the reporter was beaten to the punch by another security researcher), the response can be little more than an acknowledgement to the reporter that the situation is being investigated.
If the vulnerability or bug is more complex, or will take rounds of testing and regression testing, responding to the report will potentially be a longer process. The key here is to update the reporter (or anyone else the reporter designates) on a regular basis throughout.
Below is a list of best practices that will help you improve how your organization verifies and responds to security vulnerability reports:
- Create a security vulnerability disclosure policy. It should contain:
- Who to contact if a security vulnerability is found
- The time frame, including response time, that the organization has established
- A little about the investigation process including who will verify or investigate the contents of the report
- Next steps if possible
- Any other important processes or information related to security vulnerability verification and response
- Treat vulnerabilities like defects, performing all testing and regression testing that is reasonable and necessary
- Establish a triage process for security vulnerability reports
- Establish a standard of prioritization. You can accomplish this by designating a codification system for report items and then working on the items in priority order
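The triage and prioritization practices above can be sketched in code. The severity codes, weights and `VULN-…` identifiers below are illustrative assumptions, not an established standard; a real scheme might instead use CVSS scores.

```python
# Hypothetical triage sketch: codify each incoming report, score it,
# and work the queue in priority order. All weights are assumptions.
from dataclasses import dataclass

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

@dataclass
class Report:
    report_id: str        # made-up codification, e.g. "VULN-001"
    severity: str         # one of the SEVERITY_WEIGHT keys
    exploit_public: bool  # a publicly known exploit raises urgency

    def priority(self) -> int:
        # Weight the severity code, then bump reports whose
        # vulnerability already has a public exploit.
        return SEVERITY_WEIGHT[self.severity] * 2 + (3 if self.exploit_public else 0)

def triage(reports: list[Report]) -> list[Report]:
    """Return the reports ordered highest priority first."""
    return sorted(reports, key=lambda r: r.priority(), reverse=True)

queue = triage([
    Report("VULN-001", "low", False),
    Report("VULN-002", "critical", False),
    Report("VULN-003", "medium", True),
])
print([r.report_id for r in queue])  # prints ['VULN-002', 'VULN-003', 'VULN-001']
```

The point of the codification is consistency: two reviewers assigning the same codes to a report should arrive at the same place in the queue.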
Security vulnerability and bug warnings and reports may be alarming at first, but they are at the heart of information security in the real world. Sometimes, an organization is unaware of a vulnerability until it is reported. The important thing is that the organization has an established security vulnerability disclosure policy in place and follows the best practices listed above when verifying and responding to the security vulnerability report or warning.
- Florence, Ala. Hit by Ransomware 12 Days After Being Alerted by KrebsOnSecurity, KrebsOnSecurity
- So You Just Received a Vulnerability Report. Now What?, Charles Hooper Blog
- How to handle security vulnerability reports, CIO
- Getting a vulnerability report, CertNZ