Bicycling & Risk

While everyone else is talking about APT, I want to talk about risk thinking versus outcome thinking.

I have a lot of colleagues whom I respect who like to think about risk in some fascinating ways. For example, there’s the Risk Hose and the SIRA folks. I’m inspired by “To Encourage Biking, Cities Lose the Helmets”:

In the United States the notion that bike helmets promote health and safety by preventing head injuries is taken as pretty near God’s truth. Un-helmeted cyclists are regarded as irresponsible, like people who smoke. Cities are aggressive in helmet promotion.

But many European health experts have taken a very different view: Yes, there are studies that show that if you fall off a bicycle at a certain speed and hit your head, a helmet can reduce your risk of serious head injury. But such falls off bikes are rare — exceedingly so in mature urban cycling systems.

On the other hand, many researchers say, if you force or pressure people to wear helmets, you discourage them from riding bicycles. That means more obesity, heart disease and diabetes. And — Catch-22 — a result is fewer ordinary cyclists on the road, which makes it harder to develop a safe bicycling network. The safest biking cities are places like Amsterdam and Copenhagen, where middle-aged commuters are mainstay riders and the fraction of adults in helmets is minuscule.

“Pushing helmets really kills cycling and bike-sharing in particular because it promotes a sense of danger that just isn’t justified.”

Given that we don’t have statistics on infosec analogs to head injuries or obesity, I’m curious: where can we make the best infosec analogy to bicycling and helmets? Where are our outcomes potentially worse because we focus on every little risk?

My favorite example is password change policies, which absorb substantial amounts of everyone’s time without evidence that they improve our outcomes.
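To put rough numbers behind that claim, here’s a back-of-envelope sketch. Every figure in it (headcount, rotation interval, minutes per change, loaded rate) is an assumption for illustration, not data:

```python
# Back-of-envelope cost of forced password rotation.
# All numbers below are illustrative assumptions.

employees = 10_000
changes_per_year = 4        # assuming a 90-day expiry policy
minutes_per_change = 5      # pick, type, update devices, handle lockouts
loaded_rate = 60            # assumed fully loaded cost, $/hour

hours_per_year = employees * changes_per_year * minutes_per_change / 60
print(f"{hours_per_year:,.0f} hours/year ≈ ${hours_per_year * loaded_rate:,.0f}")
```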

What’s yours?

MD5s, IPs and Ultra

So I was listening to the Shmoocon presentation on information sharing, and there was a great deal of discussion of how sharing too much information could reveal to an attacker that they’d been detected. I’ve discussed this problem a bit in “The High Price of the Silence of Cyberwar,” but wanted to talk more about it. What struck me is that the audience seemed to be thinking that an MD5 of a bit of malware was equivalent to revealing the Ultra intelligence taken from Enigma decrypts.

Now perhaps that’s because I’m re-reading Neal Stephenson’s Cryptonomicon, where one of the subplots follows the exploits of Unit 2702, dedicated to ensuring that use of Ultra is explainable in other ways.

But really, it was pretty shocking to hear people nominally dedicated to the protection of systems actively working to deny themselves information that might help them detect an intrusion faster and more effectively.

For an example of how that might work, read “Protecting People on Facebook.” First, let me give kudos to Facebook for revealing an attack they didn’t have to reveal. Second, Facebook says “we flagged a suspicious domain in our corporate DNS logs.” What is a suspicious domain? It may or may not be one not seen before. More likely, it’s one that some other organization has flagged as malicious. When organizations reveal the IPs or domain names of command and control servers, it gives everyone a chance to learn if they’re compromised, and it can have other positive effects. Third, it reveals a detection method which actually caught a bad guy, and which you might or might not be using. Now you can consider whether you want to invest in DNS logging.
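To make the value of sharing concrete, here’s a minimal sketch of the check a defender could run once an IP or domain is published. The log path, log format, and indicator values are all hypothetical:

```python
# A minimal sketch: scan DNS logs for indicators shared in a
# breach disclosure. Log format and indicators are hypothetical.

shared_indicators = {"evil-c2.example.net", "198.51.100.23"}

def scan_dns_log(path):
    """Return log lines that mention any shared indicator."""
    hits = []
    with open(path) as log:
        for line in log:
            if any(indicator in line for indicator in shared_indicators):
                hits.append(line.strip())
    return hits

for hit in scan_dns_log("corporate-dns.log"):
    print("possible compromise:", hit)
```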

Now, there’s a time to be quiet during incident response. But there’s a very real tradeoff to be made between concealing your knowledge of a breach and aiding and abetting other breaches.

Maybe it’s time for us to get angry when a breach disclosure doesn’t include at least one IP and one MD5? Because when the disclosure doesn’t include those facts, our ability to defend ourselves is dramatically reduced.

New School Thinking At Davos

This week I have experienced an echo of this pattern at the 2013 WEF meeting. But this time my unease does not revolve around any financial threats, but another issue – cyber security.

[The] crucial point is this: even if some companies are on top of the issue, others are not, and without more public debate, it will be tough to get boards to act. Without more disclosure it will also be difficult for investors to start pricing in these risks. So it is high time shareholders began demanding more information from companies about the issue – not just about the scale of the cyber attacks, but also the moves being taken to fend them off.

And if companies refuse to answer, then shareholders – or the government – should ask them why. After all, if there is one thing we learnt from 2007, it is that maintaining an embarrassed silence about risks does not usually make them go away; least of all when there is potential damage to consumers (and investors) as well as the companies under attack.

So writes Gillian Tett in the Financial Times, in “Time to break wall of silence on escalating cyber attacks.”

Thanks to Russell Thomas for the pointer.

On Cookie Blocking

It would not be surprising if an article like “Firefox Cookie-Block Is The First Step Toward A Better Tomorrow” was written by a privacy advocate. And it may well have been. But this privacy advocate is also a former chairman of the Internet Advertising Bureau. (For their current position, see “Randall Rothenberg’s Statement Opposing Mozilla’s Intention to Block Third-Party Cookies.”)

Quoting from “the first step”:

First, the current promise of ultra-targeted audiences delivered in massively efficient ways is proving to be one of the more empty statements in recent memory. Every day more data shows that what is supposed to be happening is far from reality. Ad impressions are not actually in view, targeting data is, on average, 50% inaccurate by some measures (even for characteristics like gender) and, all too often, the use of inferred targeting while solving for low-cost clicks produces cancerous placements for the marketer. At the end of the day, the three most important players in the ecosystem – the visitor, the content creator and the marketer – are all severely impaired, or even negatively impacted, by these practices.

It’s a quick read, and fascinating when you consider the source.

Indicators of Impact — Ground Truth for Breach Impact Estimation

Ice bag might be a good ‘Indicator of Impact’ for a night of excess.

One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly.  What if we had solid evidence to use in breach impact estimation?  This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.

The idea is premised on the view that breach impact is best measured by the costs or resources associated with response, recovery, and restoration actions taken by the affected stakeholders.  These activities can include both routine incident response and also rarer activities.  (See our paper for more.)  This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p. 23 of our paper):

An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.

Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise”, which are basically technical traces associated with a breach event.  There’s a community supporting an open exchange format — OpenIoC.  The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security.  In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.

From Appendix B, here are a few examples:

  • Was there a forensic investigation, above and beyond what your organization would normally do?
  • Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
  • Was any significant business process or function disrupted for a significant amount of time?
  • Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
  • Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?

The list goes on for three pages in Appendix B, but we fully expect it to grow much longer as we gain experience and other people start participating.  For example, there will be indicators that only apply to certain industries or organization types.  In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.

As signals, the Indicators of Impact are not perfect, nor do they individually provide sufficient evidence.  However, they have the very great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events.  In other words, they provide a ‘ground truth’ which has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and using appropriate inference and reasoning methods (e.g. see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.
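To suggest what that could look like (this is a toy, not the inference method in our paper), here’s a naive-Bayes sketch that combines observed indicators into a posterior probability of high impact. The prior and the likelihood ratios are invented:

```python
# Toy evidence aggregation: combine Indicators of Impact via
# naive (independence-assuming) Bayes. All numbers are invented.

# Assumed likelihood ratios: P(indicator observed | high impact)
# divided by P(indicator observed | low impact).
likelihood_ratios = {
    "external_forensics": 4.0,
    "executive_escalation": 3.0,
    "contract_penalties": 6.0,
}

def posterior_high_impact(observed, prior_odds=0.25):
    """Update prior odds with each observed indicator's likelihood ratio."""
    odds = prior_odds
    for indicator in observed:
        odds *= likelihood_ratios[indicator]
    return odds / (1 + odds)  # convert odds back to a probability

p = posterior_high_impact(["external_forensics", "executive_escalation"])
print(f"P(high impact | evidence) ≈ {p:.2f}")
```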

There are also applications beyond breach impact estimation.  For example, they could be used in resilience planning and preparation.  They could also be used as part of information sharing in critical infrastructure to provide context for other information regarding threats, attacks, etc. (See this video of a Shmoocon session for a great panel discussion regarding challenges and opportunities regarding information sharing.)

Fairly soon, it would be good to define a lightweight standard format for Indicators of Impact, possibly as an extension to VERIS.  I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework.  There’s a public meeting April 3rd, and I might fly out for it.  Either way, I will submit to the NIST RFI.
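To make the format idea concrete, here is one hypothetical record layout. The field names are invented for illustration; this is not an actual VERIS extension or a proposed standard:

```python
# Hypothetical Indicator of Impact record. Field names are
# invented to illustrate how lightweight such a format could be.
import json

indicator_of_impact = {
    "indicator_type": "executive_escalation",  # drawn from an open, growing list
    "description": "Incident escalated to VP or above for resource decisions",
    "observed": True,
    "observable_externally": False,  # visible outside the breached org?
    "evidence": ["incident ticket", "board minutes"],
    "activity_branch": "response/escalation",  # model branch it informs
}

print(json.dumps(indicator_of_impact, indent=2))
```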

Your thoughts and comments?

New paper: "How Bad Is It? — A Branching Activity Model for Breach Impact Estimation"

Adam just posted a question about CEO “willingness to pay” (WTP) to avoid bad publicity regarding a breach event.  As it happens, we just submitted a paper to Workshop on the Economics of Information Security (WEIS) that proposes a breach impact estimation method that might apply to Adam’s question.  We use the WTP approach in a specific way, by posing this question to all affected stakeholders:

“Ex ante, how much would you be willing to spend on response and recovery for a breach of a particular type? Through what specific activities and processes?”

We hope this approach can bridge theoretical and empirical research, and also professional practice.  We also hope that this method can be used in public disclosures.
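For intuition about the branching-activity idea, here’s a sketch of expected impact as the probability-weighted cost of response and recovery activities. The branches, probabilities, and costs are invented and are not the example from the paper:

```python
# Sketch of a (flattened) branching activity model: expected impact
# as probability-weighted activity costs. All numbers are invented.

breach_activities = [
    # (activity, probability it is triggered, cost if triggered)
    ("routine incident response", 1.00, 50_000),
    ("external forensic investigation", 0.60, 200_000),
    ("customer notification and credit monitoring", 0.40, 500_000),
    ("litigation and contractual penalties", 0.10, 2_000_000),
]

expected_impact = sum(p * cost for _, p, cost in breach_activities)
print(f"Ex ante expected impact: ${expected_impact:,.0f}")
```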

Paper: How Bad is it? – A Branching Activity Model to Estimate the Impact of Information Security Breaches

Infographic from the example in the paper

In the next few months we will be applying this to half a dozen historical breach episodes to see how it works out.  This model will also probably find its way into my dissertation as “substrate”.  The dissertation focus is on social learning and institutional innovation.

Comments and feedback are most welcome.

Paying for Privacy: Enterprise Breach Edition

We all know that companies don’t want to be named after a breach. Here’s a random question: how much is that worth to a CEO? What would a given organization be willing to pay to keep its name out of the press? (A priori, with at best a prediction of how the press will react.) Please don’t say “a lot”; please help me quantify it.

Another way to ask this question: What should a business be willing to pay to not report a security breach?

(Bonus question: how is it changing over time?)
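One hedged way to frame an answer: a rational ceiling on that payment is the expected incremental cost of being named. A sketch, where every number is a placeholder rather than an estimate:

```python
# Upper bound on willingness to pay to stay unnamed after a breach.
# All numbers are placeholders, not estimates.

p_named = 0.5                # assumed chance the press names the company anyway
cost_if_named = 3_000_000    # assumed incremental reputational/churn cost
cost_if_unnamed = 200_000    # assumed residual cost even without publicity

wtp_ceiling = p_named * (cost_if_named - cost_if_unnamed)
print(f"WTP ceiling to avoid naming: ${wtp_ceiling:,.0f}")
```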

Lunar Orbiter Image Recovery Project

The Lunar Orbiter Image Recovery Project needs help to recover data from the Lunar Orbiter spacecraft.

Frankly, it’s a bit of a disgrace that Congress funds, well, all sorts of things ahead of this element of our history, but that’s beside the point. Do I want to get angry, or do I want to see this data preserved? Yes to both.

First View of Earth from Moon
That’s why I’ve given the project some money on Rockethub, and I urge you to do the same.