On Uncertain Security
One of the reasons I like climate studies is that the world of the climate scientist is not dissimilar to ours. Their data is fraught with uncertainty, it has gaps, and it might be kind of important (regardless of your stance on anthropogenic global warming, I think we can all agree that when the climate changes, crazy things can happen).
Recently, the mainstream press has begun to pick up on this and to try to explain what science is doing. One such example is this Times (UK) story called
Scientists Need The Guts To Say, “I Don’t Know”
In it, the author (David Spiegelhalter – Professor of the Public Understanding of Risk at the University of Cambridge) discusses uncertainty in backward- and forward-looking predictions. Yes, it’s worth noting that the science of prediction applies to all three states of time: past, present, and future.
As a security professional, I always encourage the representation of uncertainty. Depending on the audience, I’ll represent uncertainty technically, or at a high level with words like “back of the napkin,” “very rough,” “a lot of unknowns,” “fairly certain,” “pretty good idea…” I’ve found that, as long as they are properly qualified, demonstrations of risk with high degrees of uncertainty are not unuseful.
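For the technical end of that spectrum, here’s a minimal sketch of what carrying a rough estimate as a range, rather than a point, might look like. Every number in it is invented for illustration – a “back of the napkin” low/likely/high guess:

```python
import random

# A "back of the napkin" annual loss estimate, carried as a range rather
# than a single point. low/likely/high are made-up numbers for illustration.
low, likely, high = 50_000, 200_000, 1_000_000  # USD, assumed

# Monte Carlo over a triangular distribution built from the rough guess.
sims = sorted(random.triangular(low, high, likely) for _ in range(100_000))

# Report a 90% interval instead of a false-precision point estimate.
p05, p95 = sims[int(0.05 * len(sims))], sims[int(0.95 * len(sims))]
print(f"annual loss, 90% interval: ${p05:,.0f} to ${p95:,.0f}")
```

The point isn’t the distribution choice; it’s that the output is an interval you can qualify out loud instead of a single number that pretends to certainty.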
HEY, YOU GOT YOUR VISIBILITY INTO MY UNCERTAINTY!!! AND YOU GOT YOUR UNCERTAINTY IN MY VISIBILITY!!!
They really *are* two great tastes that taste great together….
One of the great reasons for the IT risk management/security team to communicate uncertainty (especially to others with money) is that if you say “here’s what we think, but we’re not sure,” you can then tell the business owner “and if you give me $funding, we can decrease that uncertainty by gaining visibility into $whatever.” If they decline, they’re accepting both the risk and the probability that you’re wrong. But if they’re uncomfortable with the uncertainty, you now have a pretty good qualitative way of knowing that their tolerance for this level of risk is low, and you might even be able to skip right past the “buy more visibility” step and move straight into “of course, we can just spend $Y and take care of the whole thing: visibility, risk reduction, and all…”
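To make the “$funding buys down uncertainty” point concrete, here’s a hedged sketch. The Beta-Binomial model and every count below are assumptions of mine, not anything from the article: the estimated incident rate barely moves, but the interval around it tightens dramatically once you can actually see more of the environment.

```python
from math import sqrt

def rough_90_interval(alpha: float, beta: float) -> tuple[float, float]:
    """Approximate 90% interval for a Beta(alpha, beta): mean +/- 1.64 sd."""
    mean = alpha / (alpha + beta)
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    sd = sqrt(var)
    return max(0.0, mean - 1.64 * sd), min(1.0, mean + 1.64 * sd)

# Before $funding: visibility into only 10 hosts; 2 had incidents.
print("before:", rough_90_interval(1 + 2, 1 + 8))    # wide interval

# After $funding: monitoring on 500 hosts; 100 had incidents. Roughly the
# same ~20% rate, but the uncertainty band is several times narrower.
print("after: ", rough_90_interval(1 + 100, 1 + 400))
```

Same story, in code: declining the spend means living with the wide interval.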
Similarly, if you, the security manager, keep getting risk analyses back that have significant uncertainty in them, you know these are areas where you really don’t have much control. They may represent reasons or opportunities to strengthen policies, processes, capabilities (w00t, everybody goes to training in Cancun!), and so forth.
So while uncertainty is the enemy of accuracy, it can also be your friend.
One last note, having to do with uncertainty: in the article, the author uses the Taleb definition of “Black Swan”. Again, calling a rare event a “Black Swan” is a misnomer. Rarity in frequency is only one aspect of what the concept of the Black Swan represents. A much better definition of a Black Swan is “an occurrence which is not representable at all given our prior distributions.” Certainly, even before Prof. Spiegelhalter corrected the model for double-yolked eggs, the occurrence of six in one box was not a true Black Swan. We could have run MCMC sims until our computers melted into hot lumps of toxic waste, and various occurrences of double-yolked eggs would have been represented.
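A quick back-of-the-envelope in the same spirit. The 1-in-1000 per-egg rate is a commonly quoted figure, and the size-sorting correction is a toy number of mine, not Spiegelhalter’s:

```python
# Marginal chance any one egg is a double-yolker (commonly quoted figure).
p_double = 1 / 1000

# Naive model: six independent eggs. Astronomically rare, but still a
# well-defined, nonzero probability -- representable, hence no Black Swan.
p_box_naive = p_double ** 6
print(f"independent-egg model: {p_box_naive:.0e}")      # ~1e-18

# Corrected model: eggs are graded by weight, and double-yolkers are
# outsized, so boxes cluster them. Assume (toy number) a 10% conditional
# chance that each remaining egg is also a double-yolker.
p_given_one = 0.10
p_box_corrected = p_double * p_given_one ** 5
print(f"size-sorted model:     {p_box_corrected:.0e}")  # ~1e-8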