Lurene Grenier has a post up on the Google/Microsoft vulnerability disclosure topic. I commented on the SourceFire blog (I couldn’t get the password reminder from ZDNet, and frankly I’m kind of surprised I already had an account, so I didn’t post there), but thought it was worth discussing my comments here a bit, because I think we can see a difference between evidence-based risk management, or New School Security, and expert opinion. I’m not trying to rip on Lurene here, far from it. But disclosure is such a contentious topic for our industry that I think we should look to back up all the logical assertions we make.
For example, Lurene says:
“when a vulnerability becomes public it is no longer as useful for serious attackers”
I have to ask: do we have a data set to support this claim? What Lurene is saying makes sense, right? Bad guys like to use special toys for various reasons, not the least of which is our inability to Prevent or Detect them. But to really test this hypothesis, we’d actually need a rational scale for describing threat capability, a frequency component for particular vulnerability/exploit use within that population, and then a comparison of that frequency against a data set describing known 0day use in data breaches.
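If we ever had such a data set, the test itself would be straightforward. Here is a minimal sketch: a two-proportion z-test asking whether 0day use differs between attacker-capability tiers. Every count below is invented for illustration; the tier labels and numbers are assumptions, not real breach data.

```python
import math

# Hypothetical, illustrative counts -- NOT real breach data.
# Breaches where a 0day was used, split by an assumed attacker-capability tier.
zero_day = {"high_capability": 12, "low_capability": 1}
total = {"high_capability": 200, "low_capability": 800}

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: is the 0day-use rate different between groups?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(zero_day["high_capability"], total["high_capability"],
                     zero_day["low_capability"], total["low_capability"])
print(f"z = {z:.2f}")  # |z| > 1.96 would reject "equal rates" at ~95% confidence
```

With these made-up counts the difference is easily significant, but that tells us nothing until someone supplies the real counts — which is exactly the point.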
Again from the SourceFire Blog:
“The companies with high-value data that are regularly attacked are able to proactively protect themselves.”
My 7th grade science teacher would toss this out on its ear as a hypothesis. And I’m not just picking on Lurene here; we (the security industry) do this all the time, making statements without enough definition around loaded terms like “high-value data”, “regularly attacked”, and “proactive protection”. As I say in my comments, my experience (small sample size warning) is that there isn’t necessarily a correlation between “high-value data” (where high-value data is financial, medical, trade secret, or government/defense data) and the ability/willingness to create “proactive protection”.
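For what it’s worth, that correlation claim is also testable. A minimal sketch, assuming we could score organizations on both dimensions — the organization names, the 1–5 ordinal scores, and the scale itself are all made up here:

```python
# Hypothetical ordinal scores (1-5) for a handful of organizations.
# Names and values are invented for illustration only.
orgs = {
    "bank_a":     {"data_value": 5, "protection": 4},
    "hospital_b": {"data_value": 5, "protection": 2},
    "defense_c":  {"data_value": 5, "protection": 5},
    "retailer_d": {"data_value": 3, "protection": 2},
    "startup_e":  {"data_value": 2, "protection": 3},
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([o["data_value"] for o in orgs.values()],
            [o["protection"] for o in orgs.values()])
print(f"r = {r:.2f}")
```

With these invented scores r comes out moderate at best, which is only meant to show that “high-value data implies proactive protection” is a claim you can actually measure, once someone defines the terms.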
Even more frustrating is when she says “these companies patch within some time frame”. I’m not faulting Lurene here, just really lamenting that there isn’t a public data store where I can look up “some time frame” and compare it against data on the “uptick” of attacks.
YOU DOWN WITH APT? (YEAH YOU KNOW ME)
“The loss due to a 20+ company exploit spree such as “Aurora” is significantly greater than the monetary loss due to low-end compromises which can be cleaned with off the shelf anti-virus tools.”
Reviewing the data sets I have at my disposal, I’m seeing:
1.) I don’t have a good estimate for hard (or soft) costs for “Aurora”, though I suppose I would accept a “high” qualitative label.
2.) Data supporting the claim that breaches of significant value are predominantly caused by tools that cannot be “cleaned with off the shelf anti-virus tools.” Rather, I’m seeing data that supports the notion that, for a significant portion of data breaches, the effort to prevent them could have been classified as “simple and cheap” (source: VZ DBIR).
Finally, I’ll add my own editorial point here, just so Lurene can rip me back 🙂
I think I would have difficulty asserting that we should *only* care about “large corporations, government, and military targets with the goals of industrial espionage and military superiority.” Off the top of my head, I can think of hundreds of millions of records exposed by data breaches that came from organizations we might say are in the “SMB market”.
BOTTOM LINE: Defending the faith might be a lot easier if there were data to support the defense.