Author: Russell

Near misses are valuable signals of future losses. If we ignore them in our cost metrics, we risk making very poor decisions. This example shows that there is a qualitative difference between “ground truth data” (in this case, historical cash flow for data breach events) and overall security metrics, which need to reflect our estimates about the future, a.k.a. risk.
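To make the point concrete, here is a minimal sketch of how near misses change an annualized loss estimate. All of the figures (incident counts, escalation probability, average loss) are hypothetical, chosen only for illustration:

```python
# Hypothetical figures, for illustration only.
actual_breaches = 1        # breaches with realized losses in the period
near_misses = 9            # incidents that nearly became breaches
p_escalate = 0.3           # assumed chance a similar near miss becomes a breach
avg_loss = 250_000.0       # assumed average loss per breach (USD)

# Annualized loss estimate ignoring near misses...
freq_naive = actual_breaches
ale_naive = freq_naive * avg_loss          # 250,000

# ...versus treating near misses as frequency evidence.
freq_informed = actual_breaches + near_misses * p_escalate
ale_informed = freq_informed * avg_loss    # 925,000
```

Under these assumptions, a cost metric built only on realized cash flow understates the forward-looking risk estimate by almost a factor of four.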

Read More The Cost of a Near-Miss Data Breach

Politics and power can manipulate the “ground truth data” we depend upon. Case in point: the VP residence image on Google Earth is still blurred, even though VP Dick Cheney has been out of office for almost a year. Could similar things happen in InfoSec data if it were more visible and public? You bet.

Read More VP's residence is still blurred on Google Earth (political influence on data and its long shadow)

The SANS Top Cyber Security Risks report has received a lot of positive publicity. I applaud the effort and goals of the study, and it may have some useful conclusions. We should have more of this. Unfortunately, the report has some major problems. The main conclusions may be valid, but the supporting analysis is either confusing or weak. It would also be good if this study could be extended by adding data from other vendors and service providers.

Read More Making Sense of the SANS "Top Cyber Security Risks" Report

We can learn from bad visualization examples by correcting them. This example is from the newly released SANS “Top Cyber Security Risks” report. Their first graphic has a simple message, but due to various misleading visual cues, it’s confusing. A simplified graphic works much better, but they probably don’t need a graphic at all — a bulleted list works just as well. Moral of this story: don’t simply hand your graphics to a designer with the instructions to “make this pretty”. Yes, the resulting graphic may be pretty, but it may lose its essential meaning or it might just be more confusing than enlightening. Someone has to take responsibility for picking the right visualization metaphor and structures.

Read More Visualization Friday – Improving a Bad Graphic

An “InfoSec risk scorecard” attempts to include all the factors that drive information security risk – threats, vulnerabilities, controls, mitigations, assets, etc. But for the sake of simplicity, InfoSec risk scorecards don’t include any probabilistic models, causal models, or the like. A scorecard can only roughly approximate risk under simplifying assumptions. This leaves the designer open to all sorts of problems. Here are 12 tips that can help you navigate these difficulties. It’s harder than it looks.

Read More 12 Tips for Designing an InfoSec Risk Scorecard (it's harder than it looks)

We need more cross-disciplinary research and collaboration in InfoSec. We can start on a small scale, with people in our professional network. One fertile area of research and collaboration is applying the latest research in non-standard logic and formal reasoning (a.k.a. AI) to InfoSec risk management problems. The problem is that most of that research reads like Sanskrit unless you are a specialist. Rather than simply post links to academic papers and ask you to read them, let’s use these papers as a vehicle to start a dialog with an academic friend, or a friend-of-friends. Maybe there are some breakthrough ideas in here. Maybe not. Either way, you will have an interesting experience in cross-discipline collaboration on a small scale.

Read More This Friday is “Take an Academic Friend to Work Day”

Luther Martin, blogger with Voltage Security, has advised caution about using risk management methods for information security, saying it’s “too complicated and subtle” and may lead decision-makers astray. To back up his point, he uses the example of the Two Envelopes Problem in Bayesian (subjectivist) probability, which can lead to paradoxes. He then poses an analogous problem in information security, claiming that probabilistic analysis would show that new security investments are unjustified. However, Luther made some mistakes in formulating the InfoSec problem, and thus the lessons from the Two Envelopes Problem don’t apply. Either way, a reframing into a “possible worlds” analysis resolves the paradoxes and accurately evaluates the decision alternatives for both problems. Conclusion: risk management for InfoSec is complicated and subtle, but that only means it should be done with care and with the appropriate tools, methods, and frameworks. Unsolved research problems remain, but the Two Envelopes Problem and similar are not among them.
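For readers unfamiliar with the Two Envelopes Problem: the naive “switching” argument says the other envelope is worth (2x + x/2)/2 = 1.25x, so you should always switch — a paradox, since the same argument applies after switching. A quick simulation (a sketch under the standard setup: one envelope holds some amount, the other holds double, and you pick one at random) shows that switching in fact gains nothing in expectation:

```python
import random

def simulate_two_envelopes(trials=100_000, seed=0):
    """Compare the average payoff of keeping vs. always switching."""
    rng = random.Random(seed)
    keep_total = switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1, 100)          # smaller amount, drawn arbitrarily
        envelopes = [x, 2 * x]           # one envelope holds double the other
        pick = rng.randrange(2)          # choose an envelope at random
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate_two_envelopes()
```

Both averages converge to 1.5 times the expected smaller amount; the apparent 1.25x advantage comes from conditioning on the observed amount without a proper prior, which is exactly the kind of subtlety a “possible worlds” framing makes explicit.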

Read More Is risk management too complicated and subtle for InfoSec?

The National Cyber Leap Year (NCLY) report coming out in a few weeks might lead to more US government research funding for security metrics in coming years. But that depends on whether the report is compelling to the Feds and Congress. Given the flawed process leading up to the Summit, I have my doubts. Clearly, this NCLY process is not a good model for public-private collaboration going forward.

Read More National Cyber Leap Year: Without a Good Running Start, There Might Be No Leap