[ Beneath the Waves ]

YFVS Sidebar 1: Shortcomings of Existing Systems

Ben Lincoln


When I started looking for existing vulnerability scoring systems, I was surprised to find only two in widespread use: CVSS version 2.0 (by far the most common), and Microsoft's DREAD. Some sources referred to one or two others, but I was never able to find any details, and I assume they refer to very old systems that are no longer used.


DREAD

I'll cover DREAD first because it's easy.

In its favour, this Microsoft-developed scoring system has a cool, ominous name and only five scoring elements, so it should be very quick and easy to use, right?

The sky shall swallow them, and the land shall burn
[   ]

© Copyright 2011 Gary Goddard Entertainment


Unfortunately, no — because everything about the scores is completely subjective. The MSDN article linked above includes an example table intended to make scoring slightly more objective, but the original method is to rate each element from 1-10, and what constitutes a "1" versus a "10" is left entirely to the people doing the scoring. This ambiguity and the potential for disagreement over scores are why DREAD has been "dead" for almost a decade as of this writing. Unfortunately, not everyone has gotten the news, and a lot of documentation still refers to it.
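Since DREAD never specified a formula beyond "rate each element and combine them", the commonly-cited convention is to simply average the five 1-10 ratings. A minimal sketch of that convention (the averaging approach is the usual one, not a formal standard):

```python
# Illustrative sketch of the classic DREAD calculation: each of the five
# elements is rated 1-10 by the assessors, and the overall risk is the
# average of the five ratings. The averaging convention is the one
# commonly cited in documentation, not a formally standardized formula.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Return the average of the five 1-10 DREAD element ratings."""
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 1 and 10")
    return sum(ratings) / len(ratings)

# Two assessors rating the same vulnerability can legitimately disagree
# on every element -- which is exactly the subjectivity problem above.
print(dread_score(8, 10, 7, 9, 10))  # one assessor's view
print(dread_score(5, 6, 4, 5, 6))    # another, equally defensible view
```

The arithmetic is trivial; the problem is that nothing in the method constrains what any individual rating should be.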

CVSS 2.0

Numerous other authors have written at length about the faults of CVSS 2.0. The ones I referred to most frequently when trying to build a better scoring system were The CVSSv2 Shortcomings, Faults, and Failures Formulation (Carsten Eiram, Risk Based Security and Brian Martin, Open Security Foundation), and CVSS for Penetration Test Results Part I and Part II ("Darkstructures" and Tim Maletic, Trustwave® SpiderLabs®).

My top personal gripes about CVSS 2.0 are:

  1. The complete reliance on the "CIA Triangle" (confidentiality/integrity/availability) may seem superficially useful to people who have studied information security theory, but in my opinion it means that virtually all real-world vulnerabilities have to be shoehorned in instead of being neatly categorized and rated.
  2. The C/I/A impact values are the foundation for the rest of the score, but in too many cases the poor granularity of the system means the result will be "partial" for one or more of the three, and the unsurprising consequence is that most CVSS 2.0 scores cluster around 5 (out of 10) instead of making use of the whole scale.
  3. Because vulnerabilities are generally scoped to the host system in CVSS 2.0, the likelihood that the impact score(s) will be "partial" is increased many times over.
  4. Too many of the scoring criteria are partially or completely subjective.
  5. The scoring guidelines explicitly state that vulnerabilities should only be scored in isolation from each other. This is foolish. If your system stores the root and Postgres SA passwords in a file that is world-readable locally (CVSS 2.0 score: 6.8 — and that's being generous), has a local file-inclusion vulnerability in an anonymously-accessible web application (CVSS 2.0 score: 4.3), and exposes the Postgres port to the network (CVSS 2.0 score: 5.0, if I bend the rules and label a "network-accessible service" a "partial" confidentiality impact), I just took it over even if remote SSH access is not permitted. Scored as the single combined compromise it actually enables, that chain rates a 9.3 using the CVSS 2.0 method, but the documentation explicitly says not to do this.
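The scores above can be checked against the CVSS 2.0 base equation itself. Below is a minimal implementation of that equation (the coefficients and metric values are from the official specification); the metric vectors chosen for each finding are my guesses at how each one would plausibly be rated, not values stated in the original article:

```python
# CVSS 2.0 base-score equation, per the official specification.
# The four example vectors are assumptions about how each finding in the
# text might be rated; other analysts could reasonably rate them differently.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
Au = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}    # C/I/A impact values

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * Au[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# World-readable credentials file (local, assuming single authentication): 6.8
print(base_score("L", "L", "S", "C", "C", "C"))
# Anonymous LFI in the web application: 4.3
print(base_score("N", "M", "N", "P", "N", "N"))
# Network-reachable Postgres port, "partial" confidentiality: 5.0
print(base_score("N", "L", "N", "P", "N", "N"))
# The chained takeover, scored as a single vulnerability: 9.3
print(base_score("N", "M", "N", "C", "C", "C"))
```

Note how the first three vectors, despite describing very different findings, all land in the middle of the scale, while the complete compromise they add up to is only reachable by breaking the "score in isolation" rule.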

CVSS 3.0

CVSS 3.0 looks like it will have numerous improvements compared to version 2.0. So let's get started on using it, yes? Too bad it's been in development for literally 29 months as of this writing. The organization responsible for it makes every effort to prevent the draft version from being used in the meantime.
