
Why Don't We Have an Incident Repository?

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.

Journal of Terrorism and Cyber Insurance

At the RMS blog, we learn they are “Launching a New Journal for Terrorism and Cyber Insurance:”

Natural hazard science is commonly studied at college, and to some level in the insurance industry’s further education and training courses. But this is not the case with terrorism risk. Even if insurance professionals learn about terrorism in the course of their daily business, as they move into other positions, their successors may begin with hardly any technical familiarity with terrorism risk. It is not surprising therefore that, even fifteen years after 9/11, knowledge and understanding of terrorism insurance risk modeling across the industry is still relatively low.

There is no shortage of literature on terrorism, but much has a qualitative geopolitical and international relations focus, and little is directly relevant to terrorism insurance underwriting or risk management.

This is particularly exciting as Gordon Woo was recommended to me as the person to read on insurance math in new fields. His Calculating Catastrophe is comprehensive and deep.

It will be interesting to see who they bring aboard on the cyber side to complement the very strong terrorism risk team.

"Better Safe than Sorry!"

“Better safe than sorry” are the closing words in a NYT story, “A Colorado Town Tests Positive for Marijuana (in Its Water).”

Now, I’m in favor of safety, and there’s a tradeoff being made. Shutting down a well reduces safety by limiting the supply of water, and in this case, they closed a pool, which makes it harder to stay cool in 95 degree weather.

At Wired, Nick Stockton does some math, and says “IT WOULD TAKE A LOT OF THC TO CONTAMINATE A WATER SUPPLY.” (Shouting theirs.)

High-potency THC extract is pretty expensive. One hundred dollars for a gram of the stuff is not an unreasonable price. If this was an accident, it was an expensive one. If this was a prank, it was financed by Bill Gates…Remember, the highest concentration of THC you can physically get in a liter of water is 3 milligrams.
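To make that math concrete, here’s a minimal back-of-the-envelope sketch in Python. The 3 milligrams per liter ceiling and the $100 per gram price come from the quote above; the tank volume is a round number I’ve assumed purely for illustration.

```python
# Back-of-the-envelope check of the Wired math, under stated assumptions.
MAX_THC_MG_PER_LITER = 3        # solubility ceiling, per the quote above
EXTRACT_COST_PER_GRAM = 100     # dollars per gram, per the quote above
TANK_VOLUME_LITERS = 100_000    # assumed size of a small municipal tank (illustrative only)

thc_needed_grams = MAX_THC_MG_PER_LITER * TANK_VOLUME_LITERS / 1000
cost_dollars = thc_needed_grams * EXTRACT_COST_PER_GRAM

print(f"THC needed to saturate the tank: {thc_needed_grams:,.0f} g")
print(f"Cost of that much extract:       ${cost_dollars:,.0f}")
# -> 300 g of extract, roughly $30,000, just to hit the solubility ceiling
#    of one modest tank, before any dilution in the wider supply.
```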

Better safe than sorry is a tradeoff, and we should talk about it as such.

Even without drinking the, ummm, kool-aid, this doesn’t pass the giggle test.

Security Lessons from Healthcare.gov

There’s a great “long read” at CIO, “6 Software Development Lessons From Healthcare.gov’s Failed Launch.” It opens:

This article tries to go further than the typical coverage of Healthcare.gov. The amazing thing about this story isn’t the failure. That was fairly obvious. No, the strange thing is the manner in which often conflicting information is coming out. Writing this piece requires some archeology: Going over facts and looking for inconsistencies to assemble the best information about what’s happened and pinpoint six lessons we might learn from it.

There’s a lot there, and I liked it even before lesson 6 (“Threat Modeling Matters”). Open analysis like this is generally better than keeping the lessons private.

There’s a question of why this has to be done by someone like Matthew Heusser. No disrespect is intended, but why isn’t Healthcare.gov performing these analyses and sharing them? Part of the problem is that we live in an “outrage world” where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.

It would be great to see more analyses of, and attempts to learn from, projects that go sideways. But it would also be great to see these for security failures. As I asked in “What Happened At OPM,” we have these major hacks, and we learn nothing at all from them. (Or worse, we learn bad lessons, such as “don’t go looking for breaches.”)

The definition of insanity is doing the same thing over and over and hoping for different results. (Which may include asking the same question or writing the same blog post over and over, which is why I’m starting a company to improve security effectiveness.)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published “A Timeline of Government Data Breaches”:
(Chart: OPM Data Breach)

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa (fishbone) diagrams. If we apply this sort of approach then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take, for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.
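For readers who haven’t used the technique, a why-chain is just a linked list of cause statements. Here’s a minimal Python sketch (the structure and names are my own illustration, not a standard format) recording the 2FA path from the first list above:

```python
# Minimal sketch: a Five Whys chain as plain data. In a real analysis you
# keep asking "why?" until you reach something actionable, and you usually
# explore several branches, not just one.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Why:
    statement: str
    cause: Optional["Why"] = None  # the answer to "why did this happen?"


chain = Why(
    "Foreigners were able to download the OPM database",
    Why(
        "Lack of two-factor authentication (2FA)",
        Why(
            "Some critical systems at OPM don't support 2FA",
            Why("Lack of budget for upgrades and testing"),
        ),
    ),
)


def print_chain(node: Why, depth: int = 0) -> None:
    prefix = "Why? " if depth else ""
    print("  " * depth + prefix + node.statement)
    if node.cause:
        print_chain(node.cause, depth + 1)


print_chain(chain)
```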

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post we “should declare the causes which impel them to the separation”, or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have contributed, but the attacker would have worked around a fix. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.

A collection of public facts would enable us to have a discussion about those possibilities. We could also have a meta-conversation about those categorizations of failings, and about whether there are others that make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

Security Lessons from Drug Trials

When people don’t take their drugs as prescribed, it’s for very human reasons.

Typically they can’t tolerate the side effects, the cost is too high, they don’t perceive any benefit, or they’re just too much hassle.

Put these very human (and very subjective) reasons together, and they create a problem that medicine refers to as non-adherence. It’s an awkward term that describes a daunting problem: about 50% of people don’t take their drugs as prescribed, and this creates some huge downstream costs. Depending how you count it, non-adherence racks up between $100 billion and $280 billion in extra costs – largely due to a condition worsening and leading to more expensive treatments down the line.

So writes Thomas Goetz in “Getting People To Take Their Medicine.” But he is not simply griping about the problem; he’s presenting a study of ways to address it.

That’s important because we in information security also ask people to do things, from updating their software to trusting certain pixels and not other visually identical pixels, and they don’t do those things, also for very human reasons.

His conclusion applies almost verbatim to information security:

So we took especial interest in the researcher’s final conclusion: “It is essential that researchers stop re-inventing the poorly performing ‘wheels’ of adherence interventions.” We couldn’t agree more. It’s time to stop approaching adherence as a clinical problem, and start engaging with it as a human problem, one that happens to real people in their real lives. It’s time to find new ways to connect with people’s experiences and frustrations, and to give them new tools that might help them take what the doctor ordered.

If only information security’s prescriptions were backed by experiments as rigorous as clinical trials.

(I’ve previously shared Thomas Goetz’s work in “Fear, Information Security, and a TED Talk.”)

Small thoughts on Doug Engelbart

I just re-read “A few words on Doug Engelbart.” If you’ve been reading the news lately, you’ve probably seen a headline like “Douglas C. Engelbart, Inventor of the Computer Mouse, Dies at 88,” or seen him referred to as the fellow who gave the “mother of all demos.” But as Bret Victor points out, to focus on the mouse (or “The Demo”) is to miss the point. The mouse was, in a very important way, a spin-off from his real work.

The work that Engelbart cared about was how to augment human cognition. By finding the right problem, at the right time, Engelbart found himself in a position where the spin-offs from his research agenda were, of themselves, tremendously important. (The formulation of “the right problem, at the right time” comes from Hamming’s talk, “You and Your Research,” which is well worth reading. It’s also clear from the Augmentation paper that Engelbart had a staged approach in which he could build towards his final goal, aligning with Hamming’s “right way.”)

So when you hear people talking about the inventor of the mouse, you might give some thought to the question of what you can do to conceptualize your work so that you get important results and impact.

To make that more concrete, in my own case, the way I’m approaching information security is to ask “why do things go wrong so often?” This forces me to think about the ways and frequency that they go wrong, and what we can do about them. It also led me into thinking about how we can make security thinking more accessible, resulting in some games and our NEAT advice on better warnings.

Lunar Orbiter Image Recovery Project

The Lunar Orbiter Image Recovery Project needs help to recover data from the Lunar Orbiter spacecraft.

Frankly, it’s a bit of a disgrace that Congress funds, well, all sorts of things ahead of this element of our history, but that’s beside the point. Do I want to get angry, or do I want to see this data preserved? Yes to both.

(Image: First View of Earth from Moon)
That’s why I’ve given the project some money on Rockethub, and I urge you to do the same.
