More on CVSS
Eric Rescorla takes note of my CVSS post, and comments that he’s not sure he likes some technical aspects of the system (emphasis added):
CVSS does have a formula which gives you a complete ordering but the paper doesn’t contain any real explanation for where that formula comes from. The weighting factors are pretty obviously anchor points (.25, .333, .5) so I’m guessing they were chosen by hand rather than by some kind of regression model. It’s not clear, at least to me, why one would want this particular formula and weighting factors rather than some other ad hoc aggregation function or just someone’s subjective assessment.
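For concreteness, here is a rough sketch of how the CVSS base-score formula combines those hand-chosen weights. This is an illustrative reconstruction, not authoritative: the function and table names are mine, and the specific values (including the .25/.333/.5 impact-bias anchors Rescorla points to) are taken from my reading of the CVSS v1 draft, so check the spec before relying on them.

```python
# Illustrative sketch of a CVSS-v1-style base score. All names are mine;
# the weights follow the published CVSS v1 tables as I understand them.

ACCESS_VECTOR = {"local": 0.7, "remote": 1.0}
ACCESS_COMPLEXITY = {"high": 0.8, "low": 1.0}
AUTHENTICATION = {"required": 0.6, "not_required": 1.0}
IMPACT = {"none": 0.0, "partial": 0.7, "complete": 1.0}

# The impact-bias weights are the hand-picked anchor points (.25, .333, .5)
# Rescorla notes -- there's no regression behind them.
IMPACT_BIAS = {
    "normal": (0.333, 0.333, 0.333),
    "confidentiality": (0.5, 0.25, 0.25),
    "integrity": (0.25, 0.5, 0.25),
    "availability": (0.25, 0.25, 0.5),
}

def base_score(vector, complexity, auth, conf, integ, avail, bias="normal"):
    """Multiply the exploitability factors by a weighted sum of impacts."""
    wc, wi, wa = IMPACT_BIAS[bias]
    impact = IMPACT[conf] * wc + IMPACT[integ] * wi + IMPACT[avail] * wa
    score = (10 * ACCESS_VECTOR[vector] * ACCESS_COMPLEXITY[complexity]
             * AUTHENTICATION[auth] * impact)
    return round(score, 1)

# A remotely exploitable, confidentiality-only vulnerability:
print(base_score("remote", "low", "not_required",
                 "complete", "none", "none", bias="confidentiality"))
```

Whatever one thinks of the particular constants, the structure is simple: multiply exploitability factors, weight the impacts, scale to 0-10. That simplicity is part of what makes the output repeatable.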
I haven’t studied the CVSS scoring system closely enough to decide whether I like it or not. But I do think it’s a big win over subjective assessment, and it offers an interesting replacement for the CERT Metric. Some numerical analysis of the threat is useful if you want a process for deciding if or when to patch. (Doing this really well, as Eric says, “requires some pretty serious econometrics.” But establishing a repeatable process can use simpler math and still provide value, and you can’t have a repeatable process with subjective vulnerability assessment.)
When I was thinking a lot about patch management, and the risk tradeoffs, an objective number would have been great. We wanted to measure patch risk versus threat severity, and tie that tradeoff to the fiscal costs of interruptions as well as system MTTR and its variability.
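To make that tradeoff concrete, here is a toy expected-cost comparison. Every number below is hypothetical, invented purely to show the shape of the calculation; real figures would come from your own outage costs, MTTR data, and threat estimates.

```python
# Toy patch-vs-wait tradeoff with entirely hypothetical numbers.
# Expected cost of patching now = P(patch breaks system) * cost of outage.
# Expected cost of waiting     = P(exploit in the window) * cost of incident.

patch_failure_prob = 0.02       # hypothetical: chance the patch causes an interruption
outage_cost = 50_000            # hypothetical: fiscal cost of that interruption (incl. MTTR)
exploit_prob_per_week = 0.05    # hypothetical: weekly chance of exploitation if unpatched
incident_cost = 400_000         # hypothetical: cost of a successful attack

expected_cost_patch = patch_failure_prob * outage_cost
expected_cost_wait = exploit_prob_per_week * incident_cost

print(f"patch now: {expected_cost_patch:,.0f}  wait a week: {expected_cost_wait:,.0f}")
patch_now = expected_cost_patch < expected_cost_wait
```

The point is not the arithmetic, which is trivial, but that the comparison is impossible without an objective severity number on the threat side.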
So I’m willing to accept that CVSS may be wrong in its particulars, and still believe it will be useful until it’s replaced with something better.
(Interestingly, Ashish Arora, Ramayya Krishnan, Anand Nandkumar, Rahul Telang and Yubao Yang had a paper at last year’s Economics of Information Security, Impact of Vulnerability Disclosure and Patch Availability – An Empirical Analysis (PDF only), in which they dissect the CERT metric.)