YFVS Sidebar 1: Shortcomings of Existing Systems
When I started looking for existing vulnerability scoring systems, I was surprised to only find two in widespread use: CVSS version 2.0 (by far the most common), and Microsoft's DREAD. Some sources referred to one or two others, but I was never able to find any details, and assume they refer to very old systems that are no longer used.
I'll cover DREAD first because it's easy.
In its favour, this Microsoft-developed scoring system has a cool, ominous name and only five scoring elements, so it should be very quick and easy to use, right?
Unfortunately, no, because everything about the scores is completely subjective. That MSDN article linked above includes an example table that makes scoring slightly more objective, but the original method is to rate each element from 1-10, and what constitutes a "1" versus a "10" is up to the people doing the scoring. This sort of ambiguity, and the disagreements over scores it invites, is why DREAD has been "dead" for almost a decade as of this writing. Unfortunately, not everyone has gotten the news, and a lot of documentation still refers to it.
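The mechanics can be sketched in a few lines. This is a sketch under the commonly described formulation, where each of the five DREAD categories is rated 1-10 and the overall score is their average; the ratings below are made up, purely to show how two assessors can land the same bug almost anywhere on the scale:

```python
# Sketch of DREAD scoring as commonly described: five categories,
# each rated 1-10, averaged into one overall score.
DREAD_CATEGORIES = (
    "Damage",           # how bad would a successful attack be?
    "Reproducibility",  # how reliably can the attack be repeated?
    "Exploitability",   # how much work is it to launch the attack?
    "Affected users",   # how many users are impacted?
    "Discoverability",  # how easy is the vulnerability to find?
)

def dread_score(ratings: dict) -> float:
    """Average the five 1-10 category ratings into one score."""
    for name in DREAD_CATEGORIES:
        if not 1 <= ratings[name] <= 10:
            raise ValueError(f"{name} rating must be between 1 and 10")
    return sum(ratings[name] for name in DREAD_CATEGORIES) / len(DREAD_CATEGORIES)

# Hypothetical ratings for the SAME bug from two assessors; nothing in
# DREAD pins down what a "3" versus an "8" means in any category.
optimist = dread_score({"Damage": 3, "Reproducibility": 4, "Exploitability": 3,
                        "Affected users": 2, "Discoverability": 3})
pessimist = dread_score({"Damage": 8, "Reproducibility": 9, "Exploitability": 8,
                         "Affected users": 7, "Discoverability": 8})
print(optimist, pessimist)  # 3.0 8.0 — same bug, wildly different scores
```

The point of the example is the last two calls: with no anchored definitions for the rating values, the spread between assessors is limited only by their temperaments.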
Numerous other authors have written at length about the faults of CVSS 2.0. The ones I referred to most frequently when trying to build a better scoring system were The CVSSv2 Shortcomings, Faults, and Failures Formulation (Carsten Eiram, Risk Based Security and Brian Martin, Open Security Foundation), and CVSS for Penetration Test Results Part I and Part II ("Darkstructures" and Tim Maletic, Trustwave® SpiderLabs®).
My top personal gripes about CVSS 2.0 are:
CVSS 3.0 looks like it will bring numerous improvements over version 2.0. So let's start using it, yes? Too bad it's been in development for literally 29 months as of this writing, and the organization responsible for it makes every effort to prevent the draft version from being used in the meantime.