Will CVSS v3 change everything? Understanding the new glossary
The Common Vulnerability Scoring System (CVSS) enables organizations to use a common language when assessing vulnerability threats. Since its initial release in 2005, CVSS has been adopted by many organizations.
Today, CVSS scores are used by most major vulnerability databases, including government-run databases like the National Vulnerability Database (NVD). However, the original CVSS is not the same as the one used today. This article reviews the most significant changes introduced in the latest version of CVSS.
What is CVSS?
The Common Vulnerability Scoring System (CVSS) is a standardized framework for rating known vulnerabilities in software components. The project was launched by the US National Infrastructure Advisory Council (NIAC) in 2003, and the framework has been maintained by the Forum of Incident Response and Security Teams (FIRST) since 2005.
This framework is designed to help security professionals quantify the severity of a given vulnerability, prioritize remediation work and establish a common language for describing threats. CVSS scores are used by most major vulnerability databases and information sources as a way to help security professionals filter threats.
What’s new in CVSS 3
CVSS 3.1 is the most recently released version of the framework. Released in 2019, it is a minor update to version 3.0 that clarifies and improves the standard without adding new metrics. This is part of an ongoing effort to make CVSS more accessible and easier to use.
For example, the 3.0 specification document was accompanied by a user guide and a document of worked examples, resources that help ensure the standard is applied correctly. Earlier versions did not include these companions.
Despite changes to make the standard more user-friendly, understanding how to implement CVSS v3 can still be a challenge. There are several changes you should be aware of: changes to metrics and changes to formulas.
Changes to metrics
In the base metrics, user interaction (UI) and privileges required (PR) metrics were added. PR replaces version 2’s authentication metric, while UI captures a condition previously folded into access complexity. These changes highlight vulnerabilities that require additional circumstances or access to be exploited, meaning vulnerabilities with less potential impact.
A scope (S) metric was also added to highlight vulnerabilities that can be used as part of a larger attack. The attack vector (AV) metric gained a new Physical (P) value for vulnerabilities that require physical access. Small changes were also made to the scales of metrics to allow greater flexibility of definition.
For environmental metrics, the version 2 measures were replaced with modified base metrics and security requirements. These allow security teams to tailor scores to their specific environments and configurations.
Changes to formulas
There are several formula changes that you need to be aware of to calculate a consistent CVSS score. In general, these changes have been made to make scoring clearer and to remove any ambiguities. These include:
- Formula restructuring: Formula variables were renamed for clarity. Restructuring also fixed an issue in the environmental metrics in which raising certain metric values lowered the overall score
- Redefinition of the round up function: Now called Roundup, this function addresses small variations that can occur due to differences in floating point math functions in various languages and platforms
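The 3.1 specification defines Roundup with integer-based pseudocode precisely so that implementations in different languages agree; a direct Python transcription of that pseudocode might look like:

```python
import math


def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: the smallest number, specified to one
    decimal place, that is equal to or higher than the input.
    Working in scaled integers avoids floating-point drift."""
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (math.floor(int_input / 10000) + 1) / 10.0


print(roundup(4.02))  # → 4.1
print(roundup(4.0))   # → 4.0
```

A naive `ceil(value * 10) / 10` can give a different answer when the input sits a hair above a decimal boundary due to floating-point representation, which is exactly the inconsistency the redefined function eliminates.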
How CVSS scoring works
Using CVSS, scores are determined by a combination of base score and temporal score.
You can then modify this score to match your specific systems via environmental metrics. Scores are scaled from zero to 10, with 10 representing the highest severity.
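Version 3 also pairs the numeric range with a qualitative severity scale (None, Low, Medium, High, Critical). A minimal lookup using the cutoffs from the v3 specification might read:

```python
def severity(score: float) -> str:
    """Map a CVSS v3 score (0.0-10.0) to the qualitative
    severity rating scale defined in the v3 specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"


print(severity(9.8))  # → Critical
```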
While only the base metrics are required to generate a score, it is recommended that temporal metrics be included for general scores. Additionally, you should use environmental metrics when planning your own security efforts.
The base score is designed to reflect the static qualities of vulnerabilities that do not vary according to environment or time. Base score is determined by a combination of the following subscores.
The exploitability subscore rates how easily a vulnerability can be exploited by attackers. It is based on a combination of metrics defining attack vectors (AV), attack complexity (AC), privileges required (PR) and user interaction required (UI).
All of these metrics except UI are scales ranging from high barriers to low, with lower barriers producing higher scores. UI is a binary metric: user interaction either is or is not required.
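The v3.1 specification assigns a numeric weight to each metric value and multiplies them by a constant. As a sketch, using the published weights for the unchanged-scope case (PR weights differ slightly when scope is changed):

```python
# Metric weights from the CVSS v3.1 specification (unchanged scope).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction


def exploitability(av: str, ac: str, pr: str, ui: str) -> float:
    """Exploitability subscore = 8.22 x AV x AC x PR x UI."""
    return 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]


# A network-reachable, low-complexity flaw needing no privileges
# or user interaction -- the worst case:
print(round(exploitability("N", "L", "N", "N"), 2))  # → 3.89
```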
The impact subscore rates the security impact of exploitation, such as attacker access or privileges gained, measuring the change from the pre-exploit to the post-exploit state. It is based on a combination of metrics defining confidentiality (C), integrity (I) and availability (A), each rated none, low or high.
The scope subscore captures whether a vulnerability in one component can affect resources beyond that component’s security scope, meaning resources governed by a different security authority or set of access controls than the affected component.
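Putting these pieces together, the v3.1 base-score equations combine the impact and exploitability subscores, with scope switching between two impact formulas. A self-contained sketch using the constants from the specification (the exploitability value passed in is assumed precomputed):

```python
import math

# Confidentiality/integrity/availability weights from the v3.1 spec:
# None 0.0, Low 0.22, High 0.56.
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}


def _roundup(v: float) -> float:
    # The spec's Roundup: smallest one-decimal value >= input.
    n = round(v * 100000)
    return n / 100000.0 if n % 10000 == 0 else (math.floor(n / 10000) + 1) / 10.0


def base_score(c: str, i: str, a: str,
               exploitability: float, scope_changed: bool = False) -> float:
    """Combine the impact and exploitability subscores into a base
    score, following the CVSS v3.1 base-score equations."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    if impact <= 0:
        return 0.0
    raw = impact + exploitability
    if scope_changed:
        raw *= 1.08
    return _roundup(min(raw, 10.0))


# High C/I/A impact plus the maximum exploitability subscore
# (8.22 * 0.85 * 0.77 * 0.85 * 0.85) yields the familiar 9.8:
print(base_score("H", "H", "H", 3.887042775))  # → 9.8
```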
The temporal metrics are largely unchanged from CVSS 2. These metrics measure the current state of exploit code or techniques, the confidence in the report of the vulnerability and the existence of workarounds or patches. The metrics are exploit code maturity (formerly exploitability), remediation level and report confidence.
Changes in environmental metrics are perhaps the most significant improvement in CVSS. These metrics enable you to customize scores according to their relevance in your environment, using modified base metrics along with security requirement metrics for confidentiality (CR), integrity (IR) and availability (AR). They replace the version 2 environmental metrics:
- Collateral damage potential (CDP): What additional damage may occur, including loss of life, loss of physical assets, asset damage or theft and economic losses
- Target distribution (TD): The proportion of systems in the environment that are vulnerable
Challenges with CVSS
Although CVSS 3.1 has greatly improved the scoring from version 2, there are still some remaining challenges. These include:
- Scoring requires detailed knowledge: To develop a score, you need detailed knowledge of exploit factors and countermeasures. However, this knowledge is often not required to actually exploit a vulnerability, leaving a disconnect between scoring and real-world effect
- Lack of guidance for IoT: Although the specs offer expanded guidance for vulnerabilities in libraries, the scoring doesn’t work well for statically compiled binaries, such as those used in Internet of Things (IoT) devices
- No cross-vulnerability measures: CVSS scoring accounts for a single vulnerability and how it can affect systems. It is not well equipped to account for chained or compounded vulnerabilities and relies on analysts to make the connections between them without guidance
- Lack of score updates: CVSS scores are static. Once a score is created, it is often not updated to reflect current conditions. This is fine if all parties remediate vulnerabilities immediately but doesn’t account for vulnerability risks down the line
CVSS v3 is a great improvement on previous versions. For one, it simplifies official documentation — enabling users to better understand the framework’s concepts. The addition of the examples document is especially helpful, because it helps clarify how to use the concepts. The changes to metrics can help better classify vulnerabilities relating to users and privileges, and formula changes can help avoid vague calculations. These are all excellent modifications that can help improve how vulnerabilities are scored.