Author: Russell

By looking for evidence first, the Brits do it right

Looking for evidence of effectiveness

As it happens, both the US government and the UK government are leading “cyber security standards framework” initiatives right now. The US is using a consensus process to “incorporate existing consensus-based standards to the fullest extent possible”, including “cybersecurity standards, guidelines, frameworks, and best practices” and “conformity assessment programs”. In contrast, the UK is asking for evidence that any proposed standard or practice is beneficial or even “best”.

The Brits are doing it right. I hope the US follows their lead.

Indicators of Impact — Ground Truth for Breach Impact Estimation

Ice bag might be a good ‘Indicator of Impact’ for a night of excess.

One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly.  What if we had solid evidence to use in breach impact estimation?  This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.

The idea is premised on the view that breach impact is best measured by the costs or resources associated with response, recovery, and restoration actions taken by the affected stakeholders. These activities can include both routine incident response and rarer activities. (See our paper for more.) This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p. 23 of our paper):

An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.

Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise”, which are basically technical traces associated with a breach event. There’s a community supporting an open exchange format — OpenIoC. The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security. In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.

From Appendix B, here are a few examples:

  • Was there a forensic investigation, above and beyond what your organization would normally do?
  • Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
  • Was any significant business process or function disrupted for a significant amount of time?
  • Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
  • Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?

The list goes on for three pages in Appendix B, but we fully expect it to grow much longer as we gain experience and other people start participating.  For example, there will be indicators that only apply to certain industries or organization types.  In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.

As signals, the Indicators of Impact are not perfect, nor do they individually provide sufficient evidence.  However, they have the great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events.  In other words, they provide the ‘ground truth’ that has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and combined with appropriate inference and reasoning methods (e.g. see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.

There are also applications beyond breach impact estimation.  For example, they could be used in resilience planning and preparation.  They could also be used as part of information sharing in critical infrastructure to provide context for other information regarding threats, attacks, etc. (See this video of a Shmoocon session for a great panel discussion of the challenges and opportunities of information sharing.)

Fairly soon, it would be good to define a lightweight standard format for Indicators of Impact, possibly as an extension to VERIS.  I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework.  There’s a public meeting on April 3rd, and I might fly out for it.  Either way, I will submit a response to the NIST RFI.
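To make the format idea concrete, here is a minimal sketch (in Python) of what a lightweight Indicator of Impact record might look like. The field names — indicator_id, question, observed, source, confidence, notes — are purely illustrative assumptions on my part; this is not a proposed standard and not an actual VERIS extension.

from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class IndicatorOfImpact:
    """Hypothetical record for one Indicator of Impact observation.

    Field names are illustrative guesses only -- not a proposed standard
    and not an actual VERIS extension.
    """
    indicator_id: str             # e.g. "executive-escalation"
    question: str                 # the Appendix B-style question being answered
    observed: Optional[bool]      # True / False / None (unknown)
    source: str                   # who or what reported the observation
    confidence: float = 0.5       # subjective confidence in the observation, 0..1
    notes: str = ""

record = IndicatorOfImpact(
    indicator_id="executive-escalation",
    question="Was this incident escalated to the executive level (VP or above)?",
    observed=True,
    source="incident post-mortem report",
    confidence=0.9,
    notes="CFO approved emergency spending for external forensics.",
)

# Serialized as JSON, records like this could travel alongside other
# incident-sharing payloads.
print(json.dumps(asdict(record), indent=2))

The exact fields and vocabulary would, of course, need to come out of community discussion rather than a sketch like this.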

Your thoughts and comments?

New paper: "How Bad Is It? — A Branching Activity Model for Breach Impact Estimation"

Adam just posted a question about CEO “willingness to pay” (WTP) to avoid bad publicity regarding a breach event.  As it happens, we just submitted a paper to the Workshop on the Economics of Information Security (WEIS) that proposes a breach impact estimation method that might apply to Adam’s question.  We use the WTP approach in a specific way, by posing this question to all affected stakeholders:

“Ex ante, how much would you be willing to spend on response and recovery for a breach of a particular type? Through what specific activities and processes?”

We hope this approach can bridge theoretical research, empirical research, and professional practice.  We also hope that the method can be used in public disclosures.

Paper: How Bad is it? – A Branching Activity Model to Estimate the Impact of Information Security Breaches

Infographic from the example in the paper
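For readers who want a feel for the arithmetic, here is a toy sketch of the kind of probability-weighted calculation a branching activity model suggests. This is not the model as specified in the paper; the activity names, probabilities, and costs below are invented purely for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    """One response/recovery activity in a toy branching model.

    'probability' is the chance the activity is triggered given that its
    parent occurs; 'cost' is the ex ante spend if it does occur.
    Every number below is invented purely for illustration.
    """
    name: str
    probability: float                          # conditional on the parent branch
    cost: float                                 # expected spend if triggered
    children: List["Activity"] = field(default_factory=list)

def expected_impact(activity: Activity, parent_prob: float = 1.0) -> float:
    """Probability-weighted cost of an activity and everything branching from it."""
    prob = parent_prob * activity.probability
    total = prob * activity.cost
    for child in activity.children:
        total += expected_impact(child, prob)
    return total

# A tiny, made-up breach scenario.
breach = Activity("incident response", probability=1.0, cost=50_000, children=[
    Activity("external forensic investigation", probability=0.6, cost=120_000),
    Activity("customer notification", probability=0.4, cost=80_000, children=[
        Activity("credit monitoring for affected customers", probability=0.7, cost=200_000),
    ]),
])

print(f"Expected impact: ${expected_impact(breach):,.0f}")  # -> $210,000

Roughly speaking, answers to the WTP question above are what would supply the cost estimates for each activity branch in a model like this.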

In the next few months we will be applying this to half a dozen historical breach episodes to see how it works out.  This model will also probably find its way into my dissertation as “substrate”.  The dissertation focus is on social learning and institutional innovation.

Comments and feedback are most welcome.

Fixes to Wysopal’s Application Security Debt Metric

In two recent blog posts (here and here), Chris Wysopal (CTO of Veracode) proposed a metric called “Application Security Debt”.  I like the general idea, but I have found some problems in his method.  In this post, I suggest corrections that will make the metric both more credible and more accurate, at least for half of the formula.  The second half is harder to get right and needs more thought.

Continue reading

Would a CISO benefit from an MBA education?

If a CISO is expected to be an executive officer (especially for a large, complex, technology- or information-centered organization), then he/she will need MBA-level knowledge and skills. An MBA is one path to getting those skills, at least if you are thoughtful and selective about the school you choose. Other paths are available, so it’s not just about the MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then an MBA education wouldn’t be of much value.

Continue reading

Another critique of Ponemon's method for estimating 'cost of data breach'

I have fundamental objections to Ponemon’s methods used to estimate ‘indirect costs’ due to lost customers (‘abnormal churn’) and the cost of replacing them (‘customer acquisition costs’). These include sloppy use of terminology, mixing accounting and economic costs, and omitting the most serious cost categories.

Continue reading

Dashboards are Dumb

The visual metaphor of a dashboard is a dumb idea for management-oriented information security metrics. It doesn’t fit the use cases and therefore doesn’t support effective user action based on the information. Dashboards work when the user has proportional controllers or switches that correspond to each of the ‘meters’ and the user can observe the effect of using those controllers and switches in real time by observing the ‘meters’. Dashboards don’t work when there is a loose or ambiguous connection between the information conveyed in the ‘meters’ and the actions that users might take. Other visual metaphors should work better.

Continue reading
