Learning from Near Misses

[Update: Steve Bellovin has a blog post]

One of the major pillars of science is the collection of data to test and disprove hypotheses. That data gathering can include experiments, observations, and, in engineering, investigations into failures. One of the issues that makes security hard is that we have little data about large-scale systems. (I believe that this is more important than our clever adversaries.) The work I want to share with you today has two main antecedents.

First, in the nearly ten years since Andrew Stewart and I wrote The New School of Information Security, and called for more learning from breaches, we’ve seen a dramatic shift in how people talk about breaches. Unfortunately, we’re still not learning as much as we could. There are structural reasons for that, primarily fear of lawsuits.

Second, last year marked 25 years of calls for an “NTSB for infosec.” Steve Bellovin and I wrote a short note asking why those calls have gone unanswered. We’ve spent the last year asking what else we might do. We’ve learned a lot about other aviation safety programs, and think there are other models that may be better fits for our needs and constraints in the security realm.

Much of that investigation has been a collaboration with Blake Reid, Jonathan Bair, and Andrew Manley of the University of Colorado Law School, and together we have a new draft paper on SSRN, “Voluntary Reporting of Cybersecurity Incidents.”

A good deal of my own motivation in this work is to engineer a way to learn more. The focus of this work, on incidents rather than breaches, and on voluntary reporting and incentives, reflects lessons learned as we try to find ways to measure real-world security. The writing and abstract reflect the goal of influencing those outside security to help us learn better:

The proliferation of connected devices and technology provides consumers immeasurable amounts of convenience, but also creates great vulnerability. In recent years, we have seen explosive growth in the number of damaging cyber-attacks. 2017 alone has seen the Wanna Cry, Petya, Not Petya, Bad Rabbit, and of course the historic Equifax breach, among many others. Currently, there is no mechanism in place to facilitate understanding of these threats, or their commonalities. While information regarding the causes of major breaches may become public after the fact, what is lacking is an aggregated data set, which could be analyzed for research purposes. This research could then provide clues as to trends in both attacks and avoidable mistakes made on the part of operators, among other valuable data.

One possible regime for gathering such information would be to require disclosure of events, as well as investigations into these events. Mandatory reporting and investigations would result in better data collection. This regime would also cause firms to internalize, at least to some extent, the externalities of security. However, mandatory reporting faces challenges that would make this regime difficult to implement, and possibly more costly than beneficial. An alternative is a voluntary reporting scheme, modeled on the Aviation Safety Reporting System housed within NASA, and possibly combined with an incentive scheme. Under it, organizations that were the victims of hacks or “near misses” would report the incident, providing important details, to some neutral party. This database could then be used both by researchers and by industry as a whole. People could learn what does work, what does not work, and where the weak spots are.
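
To make that a little more concrete, here is a purely hypothetical sketch of what a minimal, de-identified near-miss report record might look like. The field names and categories below are my own illustrative guesses, not a schema proposed in the paper:

```python
# Hypothetical sketch of a minimal near-miss report record for a voluntary
# reporting scheme. Field names and categories are illustrative guesses,
# not anything specified in the paper.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class NearMissReport:
    reported_on: date                      # when the report was filed
    sector: str                            # e.g. "finance", "healthcare"
    attack_vector: str                     # e.g. "phishing", "unpatched VPN"
    detection_method: str                  # how the organization noticed it
    impact: str                            # "near miss", "contained", "breach"
    contributing_factors: List[str] = field(default_factory=list)
    narrative: Optional[str] = None        # de-identified free-text description

# Example: the kind of de-identified record a researcher might aggregate.
report = NearMissReport(
    reported_on=date(2017, 11, 1),
    sector="finance",
    attack_vector="phishing",
    detection_method="employee report",
    impact="near miss",
    contributing_factors=["MFA not enforced", "stale contractor account"],
    narrative="Credential phish caught before any data was accessed.",
)
```

The point of structured fields like these is that a neutral party could aggregate thousands of such records and look for trends, without any one record identifying the reporting organization.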

Please, take a look at the paper. I’m eager to hear your feedback.

The New School of Air Travel Security?

As I simmer with anger over how TSA is subpoenaing bloggers, it occurs to me that the state of airline security is very similar to that of information security in some important ways:

  • Failures are rare
  • Partial failures are generally secret
  • Actual failures are analyzed in secret
  • Procedures are secret
  • Procedures seem bizarre and arbitrary
  • External analysis seems to show that the procedures are fundamentally flawed
  • Those charged with doing the work appear to develop a bunker mentality

In this situation, anyone can offer up their opinions, and most of us do.

It’s hard to figure out which analyses are better than others, because the data about partial failures is harder to get than opinions are. And so most opinions are created, and appear, equal. Recommendations in airline security are all ‘best practices,’ which are hard to evaluate.

Now, as Peter Swire has pointed out, the disclosure debate pivots on whether an attacker needs to expose themselves in order to test a hypothesis. If the attacker needs to show up and risk arrest or being shot to learn whether a device will make it through a magnetometer, that’s very different than if an attacker needs to send packets over the internet.

I believe much of this hinges on the fact that most of the security layers have been innocently exposed in many ways. The outline of how the intelligence agencies and their databases work is public. The identity checking is similarly public. It’s easy to discover at home or at the airport that you’re on a list. The primary and secondary physical screening layers are well and publicly described. The limits of tertiary screening are easily discovered, as an unlucky friend learned when he threw a Nazi salute at a particularly nosy screener in Amsterdam’s Schiphol airport. And then some of it comes out when government agencies accidentally expose it. All of this boils down to partial and unstructured disclosure in three ways:

  1. Laws or public inquiries require it
  2. The public is exposed to it or can “innocently” test it
  3. Accidents

In light of all of this, the job of a terrorist mastermind is straightforward: figure out a plan that bypasses the known defenses, then find someone to carry it out. Defending the confidentiality of approaches is hard. Randomization is an effort to change attackers’ risk profiles.

But here’s the thing: between appropriate and important legal controls and the fact that the public goes through the system, there are large parts of it which cannot be kept secret for any length of time. We need to acknowledge that and design for it.

So here’s my simple proposal:

  1. Publish as much of the process as can be published, in accordance with the intent of the Executive Order on Classified National Security Information:

    “Agency heads shall complete on a periodic basis a comprehensive review of the agency’s classification guidance, particularly classification guides, to ensure the guidance reflects current circumstances and to identify classified information that no longer requires protection and can be declassified,”

    That order lays out a new balance between openness and national security, including terrorism. TSA’s current approach does not meet that new balance.

  2. Publish information about failed attempts and the costs of the system
  3. Stop harassing and intimidating those like Chris Soghoian, Steven Frischling or Christopher Elliott who discuss details of the system.
  4. Encourage and engage in a fuller debate with facts, rather than speculation.

There you have it. We will get better security through a broad set of approaches being brought to the problems. We will get easier travel because we will understand what we’re being asked to do and why. Everyone understands we need some level of security for air travel. Without an acrimonious, ill-informed firestorm, we’ll get more security with less pain and distraction.

What should the new czar do? (Tanji's Security Survey)

Over at Haft of the Spear, Michael Tanji asks:

You are the nation’s new cyber czar/shogun/guru. You know you can’t _force_ anyone to do jack, therefore you spend your time/energy trying to accomplish what three things via influence, persuasion, shame and force of will?

I think it’s a fascinating question, and posted my answer over at the New School blog.

"No Evidence" and Breach Notice

According to ZDNet, “Coleman donor data breached in January, but donors alerted by Wikileaks not campaign:”

Donors to Minnesota Senator Norm Coleman’s campaign got a rude awakening this week, thanks to an email from Wikileaks. Coleman’s campaign was keeping donor information in an unprotected database that contained names, addresses, emails, credit card numbers and those three-digit codes on the back of cards, Wikileaks told donors in an email.

and

We contacted federal authorities at that time, and they reviewed logs from the server in question as well as additional firewall logs. They indicated that, after reviewing those logs, they did not find evidence that our database was downloaded by any unauthorized party.

I wanted to bring this up, not to laugh at Coleman (that’s Franken’s job, after all), but because we frequently see assertions that “there’s no evidence that…”

As anyone trained in any science knows, absence of evidence is not evidence of absence. At the same time, sometimes there really is sufficient evidence, properly protected, that allows that claim to be made. We need public, documented and debated standards for how such decisions should be made. With such standards, organizations could better make decisions about risk. Additionally, both regulators and the public could be more comfortable that those piping up about risk were not allowing the payers to call the tune.
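
To illustrate the kind of evidence that could sit behind a “no evidence of download” claim, here is a minimal sketch that scans combined-format web server access logs for unusually large successful responses. It is purely illustrative: the log format, the 10 MB threshold, and the file name are my own assumptions, not anything described in the Coleman case.

```python
# Minimal, illustrative sketch: flag responses in a combined-format access log
# that are large enough to suggest a bulk database download. The threshold and
# file name are assumptions for illustration only.
import re

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) (?P<size>\d+|-)'
)
SIZE_THRESHOLD = 10 * 1024 * 1024  # flag any single response over ~10 MB

def suspicious_downloads(log_path: str):
    """Yield (ip, path, size) for unusually large successful responses."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LOG_PATTERN.match(line)
            if not m or m.group("size") == "-":
                continue
            size = int(m.group("size"))
            if m.group("status").startswith("2") and size > SIZE_THRESHOLD:
                yield m.group("ip"), m.group("path"), size

if __name__ == "__main__":
    for ip, path, size in suspicious_downloads("access.log"):
        print(f"{ip} fetched {path} ({size} bytes)")
```

A check like this only establishes absence of one kind of evidence in one kind of log, which is exactly why documented standards matter: they would spell out which logs, which checks, and how long the logs were retained before anyone gets to say “no evidence.”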

Security is about outcomes: RSA edition

So last week I asked what people wanted to get out of RSA, and the answer was mostly silence and snark. There are some good summaries of RSA at securosis and Stiennon’s network world blog, so I won’t try to do that.

But I did promise to tell you what I wanted to get out of it. My goals, ordered:

  1. A successful Research Revealed track. I think we had some great talks, a panel I’m not qualified to judge (since I was on it), and at least a couple of sell-out sessions. But you tell me. Did it work for you?
  2. See interesting new technology. I saw three things: Garner’s hard drive crusher (they have a “destroy” button!), Camouflage’s database masking and some very cool credit card form factor crypto devices from Emue. (I’d add Verizon’s DBIR, but I saw that before the show.) Four interesting bits? Counts as success. Ooh, plus saw the Aptera car.
  3. Announce our new blog at Newschoolsecurity.com. Done!
  4. See friends and make five new ones. It turns out that the most successful part of this was my Open Security Foundation t-shirt. I urge you all to donate and get this highly effective networking tool.
  5. Connect five pairs of people who previously didn’t know each other. I counted seven, which makes me really happy.

What I didn’t want: a hangover. Only had one, Friday morning.

Will The Real Adam Shostack Please Stand Up?

At one point during the RSA party hopping last week, Adam, Alex and I ended up at the Executive Women’s Forum event. I was feeling pretty punchy and decided that all three of us should have name tags that read “Adam Shostack”. If anyone asked, I just explained that we were promoting the new blog. Eventually I wandered off to another party and some other folks decided that this was a really good idea as well. By the time I got back to the W, there was a whole slew of Adams floating around. Those who subscribe to the “Pictures or It Didn’t Happen” school of thought can find all the evidence over on the Flickr photostream.

The New School Blog

I’m really excited to announce NewSchoolSecurity.com, the blog inspired by the book. I’ll be blogging with Alex Hutton, Chandler Howell and Brooke Paul. And who knows, maybe we’ll even get a post or two from Andrew?

Emergent Chaos will continue. My posts here will be a little more on the privacy, liberty and economics end of things, with my technical and business security posts split between here and The New School.

All that said, I’ve posted the follow-up to “Security is about outcomes, not about process” on The New School, which you can read at “Events don’t happen in a Vacuum.”

Building Security In, Maturely

While I was running around between the Berkeley Data Breaches conference and SOURCE Boston, Gary McGraw and Brian Chess were releasing the Building Security In Maturity Model.

Lots has been said, so I’d just like to quote one little bit:

One could build a maturity model for software security theoretically (by pondering what organizations should do) or one could build a maturity model by understanding what a set of distinct organizations have already done successfully. The latter approach is both scientific and grounded in the real world, and is the one we followed.

It’s long, but an easy and worthwhile read if you’re thinking of putting together or improving your software security practice.

Incidentally, my boss also commented on our work blog: “Building Security In Maturity Model on the SDL Blog.”