“…the Elusive Goal of Security as a Scientific Pursuit”

That’s the subtitle of a new paper by Cormac Herley and Paul van Oorschot, “SoK: Science, Security, and the Elusive Goal of Security as a Scientific Pursuit,” forthcoming in IEEE Security & Privacy.

The past ten years has seen increasing calls to make security research more “scientific”. On the surface, most agree that this is desirable, given universal recognition of “science” as a positive force. However, we find that there is little clarity on what “scientific” means in the context of computer security research, or consensus on what a “Science of Security” should look like. We selectively review work in the history and philosophy of science and more recent work under the label “Science of Security”. We explore what has been done under the theme of relating science and security, put this in context with historical science, and offer observations and insights we hope may motivate further exploration and guidance. Among our findings are that practices on which the rest of science has reached consensus appear little used or recognized in security, and a pattern of methodological errors continues unaddressed.

Do Games Teach Security?

There’s a new paper from Mark Thompson and Hassan Takabi of the University of North Texas. The title captures the question:
Effectiveness Of Using Card Games To Teach Threat Modeling For Secure Web Application Developments

Gamification of classroom assignments and online tools has grown significantly in recent years. There have been a number of card games designed for teaching various cybersecurity concepts. However, effectiveness of these card games is unknown for the most part and there is no study on evaluating their effectiveness. In this paper, we evaluate effectiveness of one such game, namely the OWASP Cornucopia card game which is designed to assist software development teams identify security requirements in Agile, conventional and formal development processes. We performed an experiment where sections of graduate students and undergraduate students in a security related course at our university were split into two groups, one of which played the Cornucopia card game, and one of which did not. Quizzes were administered both before and after the activity, and a survey was taken to measure student attitudes toward the exercise. The results show that while students found the activity useful and would like to see this activity and more similar exercises integrated into the classroom, the game was not easy to understand. We need to spend enough time to familiarize the students with the game and prepare them for the exercises using the game to get the best results.

I’m very glad to see games like Cornucopia evaluated. If we’re going to push the use of Cornucopia (or Elevation of Privilege) for teaching, then we ought to be thinking about how well they work in comparison to other techniques. We have anecdotes, but to improve, we must test and measure.
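The paper’s pre/post quiz design is worth imitating, because it supports a very simple analysis. Here’s a minimal sketch of that kind of comparison (the scores below are invented for illustration; they are not the study’s data):

```python
# A minimal sketch of the pre/post comparison the study describes.
# The scores below are invented for illustration; they are not the study's data.
from scipy import stats

pre = [55, 60, 48, 70, 62, 58, 65, 50]    # quiz scores before playing Cornucopia
post = [61, 64, 55, 72, 70, 60, 71, 58]   # quiz scores after the exercise

# A paired t-test asks whether the mean improvement differs from zero.
t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even a comparison this simple beats anecdote: it forces us to say what “works” means, and then measure it.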

Incentives, Insurance and Root Cause

Over the decade or so since The New School book came out, there’s been a sea change in how we talk about breaches, and in how we talk about those who got breached. There’s growing agreement that understanding what’s going wrong should be a bigger part of how we learn. I’m pleased to have played some part in that movement.

As I consider where we are today, the questions we can’t answer sufficiently are “What’s in it for me?” and “Why should I spend time on this?” The benefits may take too long to appear, and so we should ask what we could do about that. In that context, I am very excited to see a proposal from Rob Knake on “Creating a Federally Sponsored Cyber Insurance Program.”

He suggests that a full root cause analysis would be a condition of a federal insurance backstop:

The federally backstopped cyber insurance program should mandate that companies allow full breach investigations, which include on-site gathering of data on why the attack succeeded, to help other companies prevent similar attacks. This function would be similar to that performed by the National Transportation Safety Board (NTSB) for aviation incidents. When an incident occurs, the NTSB establishes the facts of the incident and makes recommendations to prevent similar incidents from occurring. Although regulators typically establish new requirements upon the basis of NTSB recommendations, most air carriers implement recommendations on a voluntary basis. Such a virtuous cycle could happen in cybersecurity if companies covered by a federal cyber insurance program had their incidents investigated by a new NTSB-like entity, which could be run by the private sector and funded by insurance companies.

Why Don’t We Have an Incident Repository?

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for an “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.

Journal of Terrorism and Cyber Insurance

At the RMS blog, we learn they are “Launching a New Journal for Terrorism and Cyber Insurance:”

Natural hazard science is commonly studied at college, and to some level in the insurance industry’s further education and training courses. But this is not the case with terrorism risk. Even if insurance professionals learn about terrorism in the course of their daily business, as they move into other positions, their successors may begin with hardly any technical familiarity with terrorism risk. It is not surprising therefore that, even fifteen years after 9/11, knowledge and understanding of terrorism insurance risk modeling across the industry is still relatively low.

There is no shortage of literature on terrorism, but much has a qualitative geopolitical and international relations focus, and little is directly relevant to terrorism insurance underwriting or risk management.

This is particularly exciting as Gordon Woo was recommended to me as the person to read on insurance math in new fields. His Calculating Catastrophe is comprehensive and deep.

It will be interesting to see who they bring aboard on the cyber side to complement the very strong terrorism risk team.

"Better Safe than Sorry!"

“Better safe than sorry” are the closing words in a NYT story, “A Colorado Town Tests Positive for Marijuana (in Its Water).”

Now, I’m in favor of safety, but there’s a tradeoff being made here. Shutting down a well reduces safety by limiting the supply of water, and in this case they also closed a pool, which makes it harder to stay cool in 95-degree weather.

At Wired, Nick Stockton does some math, and says “IT WOULD TAKE A LOT OF THC TO CONTAMINATE A WATER SUPPLY.” (Shouting theirs.)

High-potency THC extract is pretty expensive. One hundred dollars for a gram of the stuff is not an unreasonable price. If this was an accident, it was an expensive one. If this was a prank, it was financed by Bill Gates…Remember, the highest concentration of THC you can physically get in a liter of water is 3 milligrams.
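Stockton’s arithmetic is easy to check. A minimal sketch follows; the 3 mg/L solubility ceiling and the $100-per-gram price come from the quote above, while the tank volume is a hypothetical round number of my own:

```python
# Back-of-the-envelope cost to saturate a water supply with THC extract.
# The solubility ceiling and price are from the Wired piece quoted above;
# the tank volume is a hypothetical round number for illustration.
SOLUBILITY_MG_PER_L = 3        # max THC per liter of water (from the quote)
PRICE_PER_GRAM = 100           # dollars per gram of high-potency extract
TANK_LITERS = 500_000          # hypothetical small municipal storage tank

grams_needed = SOLUBILITY_MG_PER_L * TANK_LITERS / 1000  # mg -> g
cost = grams_needed * PRICE_PER_GRAM
print(f"{grams_needed:,.0f} g of extract, roughly ${cost:,.0f}")
# -> 1,500 g of extract, roughly $150,000: an expensive prank indeed.
```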

Better safe than sorry is a tradeoff, and we should talk about it as such.

Even without drinking the, ummm, kool-aid, this doesn’t pass the giggle test.

Security Lessons from Healthcare.gov

There’s a great “long read” at CIO, “6 Software Development Lessons From Healthcare.gov’s Failed Launch.” It opens:

This article tries to go further than the typical coverage of Healthcare.gov. The amazing thing about this story isn’t the failure. That was fairly obvious. No, the strange thing is the manner in which often conflicting information is coming out. Writing this piece requires some archeology: Going over facts and looking for inconsistencies to assemble the best information about what’s happened and pinpoint six lessons we might learn from it.

There’s a lot there, and I liked it even before lesson 6 (“Threat Modeling Matters”). Open analysis is generally better.

There’s a question of why this has to be done by someone like Matthew Heusser. No disrespect is intended, but why isn’t Healthcare.gov performing these analyses and sharing them? Part of the problem is that we live in an “outrage world” where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.

It would be great to see project analyses and attempts to learn from more projects that go sideways. But it would also be great to see these for security failures. As I asked in “What Happened At OPM,” we have these major hacks, and we learn nothing at all from them. (Or worse, we learn bad lessons, such as “don’t go looking for breaches.”)

The definition of insanity is doing the same thing over and over and hoping for different results. (Which may include asking the same question or writing the same blog post over and over, which is why I’m starting a company to improve security effectiveness.)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published “A Timeline of Government Data Breaches”:

[Image: OPM Data Breach]

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa diagrams. If we apply this sort of approach, then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take (I sketch these branching chains as a small tree of code after the lists below), for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of indicators of compromise (IoCs), so defenders don’t know what to look for.
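Since a Five Whys analysis branches like this, it can help to write the chains down as a tree. Here’s a minimal sketch in Python, using the chains from the lists above (the wording of the nodes is my paraphrase):

```python
# A Five Whys analysis branches, so a tree is a natural representation.
# Each node is a finding; its children are candidate answers to "why?".
# The chains below paraphrase the lists in the post above.
from dataclasses import dataclass, field

@dataclass
class Why:
    finding: str
    causes: list["Why"] = field(default_factory=list)

root = Why("Foreigners were able to download the OPM database", [
    Why("Lack of two-factor authentication (2FA)", [
        Why("Some critical systems at OPM don't support 2FA", [
            Why("Lack of budget for upgrades and testing")])]),
    Why("Focus on locking doors and windows while intruders are in the house", [
        Why("Finding stealthy intruders in the house is hard", [
            Why("Networks are chaotic and change frequently"),
            Why("Too few published IoCs, so defenders don't know what to look for")]),
        Why("People who know how to find intruders leave government", [
            Why("The private sector pays better"),
            Why("Employees don't like the clearance process")])]),
])

def show(node: Why, depth: int = 0) -> None:
    # Print the tree with indentation, prefixing each cause with "Why?".
    print("  " * depth + ("Why? " if depth else "") + node.finding)
    for cause in node.causes:
        show(cause, depth + 1)

show(root)
```

Laying the analysis out this way makes the problem concrete: without public facts, we can’t tell which branch is load-bearing.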

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post, we “should declare the causes which impel them to the separation,” or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been causal, but the attacker would have worked around a fix. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.

A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and about whether others make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

The Future Is So Cool

When you were growing up, 2014 was the future. It’s become cliché to bemoan that we don’t have the flying cars we were promised, but we did get early delivery on a dystopian surveillance state.

So living here in the future, I just wanted to point out how cool it is that you can detect extrasolar planets with a home kit.

[Image: A camera mounted on a clever set of hinges to track the sky]

Read the story at IEEE Spectrum: DIY Exoplanet Detector.
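The kit relies on transit photometry: when a planet crosses in front of its star, the star dims by a fraction of a percent, and it does so periodically. A minimal sketch of the idea on synthetic data (every number here is invented for illustration):

```python
# Transit photometry in miniature: a planet crossing its star dims the
# star's light by a tiny, periodic amount. Synthetic data; every number
# here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)                # ten days of simulated observation
flux = 1.0 + rng.normal(0, 0.001, t.size)   # normalized brightness plus noise
in_transit = (t % 3.0) < 0.1                # a transit every 3 days, 0.1 days long
flux[in_transit] -= 0.01                    # ~1% dip, typical of a hot Jupiter

# Detect the dips: flag points far below the median brightness.
threshold = np.median(flux) - 5 * 0.001     # five noise sigmas below baseline
candidates = t[flux < threshold]
print(f"{candidates.size} candidate transit points, first at t = {candidates[0]:.2f} days")
```

The hard part, of course, is that the real signal comes from a camera on hinges rather than a random number generator. That it works at all from a backyard is what makes the future so cool.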