Averting the Drift into Failure

This is a fascinating video from the DevOps Enterprise Summit:

“the airline that reports more incidents has a lower passenger mortality rate. Now what’s fascinating about this … we see this replicated this data across various domains, construction, retail, and we see that there is this inverse correlation between the number of incidents reported, the honesty, the willingness to take on that conversation about what might go wrong and things actually going wrong.”

The speaker’s website is sidneydekker.com/; there’s some really interesting material there.

Calls for an NTSB?

In September, Steve Bellovin and I asked “Why Don’t We Have an Incident Repository?”

I’m continuing to do research on the topic, and I’m interested in putting together a list of such calls for an incident repository. I’d like to ask you for two favors.

First, if you remember such calls, can you tell me about them? I recall “Computers at Risk,” the National Cyber Leap Year report, and the Bellovin & Neumann editorial in IEEE S&P. Oh, and “The New School of Information Security.” But I’m sure there have been others.

In particular, what I’m looking for are calls like this one in Computers at Risk (National Academies Press, 1991):

3a. Build a repository of incident data. The committee recommends that a repository of incident information be established for use in research, to increase public awareness of successful penetrations and existing vulnerabilities, and to assist security practitioners, who often have difficulty persuading managers to invest in security. This database should categorize, report, and track pertinent instances of system security-related threats, risks, and failures. […] One possible model for data collection is the incident reporting system administered by the National Transportation Safety Board… (chapter 3)

Second, I am trying to do searches such as “cites ‘Computers at Risk’ and contains ‘NTSB’.” I have tried without luck to do this on Google Scholar, Microsoft Academic and Semantic Scholar. Only Google seems to be reliably identifying that report. Is there a good way to perform such a search?
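
In the meantime, here’s the sort of query I wish I could run, written as a minimal Python sketch against Semantic Scholar’s public Graph API. The endpoint paths, field names, and result limit are my assumptions about that API; it trusts the first search hit to be the right report, and it only matches against titles and abstracts rather than full text:

    # Hypothetical sketch: find papers that cite "Computers at Risk" and mention 'NTSB'.
    # Assumes Semantic Scholar's public Graph API; endpoints, fields, and limits may differ.
    import requests

    API = "https://api.semanticscholar.org/graph/v1"

    def citing_papers_mentioning(cited_title, keyword):
        # Look up the cited report by title and trust the first hit.
        hits = requests.get(f"{API}/paper/search",
                            params={"query": cited_title, "fields": "title"}).json()
        if not hits.get("data"):
            return []
        paper_id = hits["data"][0]["paperId"]

        # Pull its citations and keep those whose title or abstract mentions the keyword.
        cites = requests.get(f"{API}/paper/{paper_id}/citations",
                             params={"fields": "title,abstract", "limit": 1000}).json()
        matches = []
        for entry in cites.get("data", []):
            paper = entry.get("citingPaper", {})
            text = " ".join(filter(None, [paper.get("title"), paper.get("abstract")]))
            if keyword.lower() in text.lower():
                matches.append(paper.get("title"))
        return matches

    print(citing_papers_mentioning("Computers at Risk", "NTSB"))

In practice, the interesting mentions of the NTSB are probably buried in body text, which is exactly why this kind of search is hard to do with the citation indexes as they stand.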

You say noise, I say data

There is a frequent claim that stock markets are somehow irrational and unable to properly value the impact of cyber incidents in pricing. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

[Image: Target stock price chart]

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak’s words have been taken out of context. After all, it’s a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner Group. I do wish it showed the S&P 500.

Security Lessons From Star Wars: Breach Response

To celebrate Star Wars Day, I want to talk about the central information security failure that drives Episode IV: the theft of the plans.

First, we’re talking about really persistent threats. Not like this persistence, but the “many Bothans died to bring us this information” sort of persistence. Until members of Comment Crew start going missing, we need a term more like ‘pesky’ to help us keep perspective.

Kellman Meghu has pointed out that once the breach was detected, the Empire got off to a good start on responding to it. They were discussing risk before they descended into bickering over ancient religions.

But there’s another failure: knowledge of the breach apparently never leaves that room, and there’s no organized effort to consider questions such as:

  • Can we have a red team analyze the plans for problems? This would be easy to do with a small group.
  • Should we re-analyze our threat model for this Death Star?
  • Is anyone relying on obscurity for protection? This would require letting the engineering organization know about the issue, and asking people to step forward if the theft of the plans impacts security. (Of course, we all know that the Empire is often intolerant, and there might be a need for an anonymous drop box.)

If the problem hadn’t been so tightly held, the Empire might not have gotten here:

[Image: Grand Moff Tarkin and General Bast]

General Bast: We’ve analyzed their attack, sir, and there is a danger. Should I have your ship standing by?

Grand Moff Tarkin: Evacuate? In our moment of triumph? I think you overestimate their chances.

There are a number of things that might have been done had the Empire known about the weakly shielded exhaust port. For example, they might have welded some steel beams across that trench. They might have put some steel plating up near the exhaust port. They might have landed a TIE fighter in the trench. They could have deployed some stormtroopers with those tripod-mounted guns that never quite seem to hit the Millennium Falcon. Maybe it’s easier in a trench. I’m not sure.

What I am sure of is that there are all sorts of possible responses, and all of them depend on information leaving the hands of those six busy executives. Holding the information too closely magnified the effect of those Bothan spies.

So this May the Fourth, ask yourself: is there information that you could share more widely to help defend your empire?

Exploit Kit Statistics

On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels:

[Image: Exploit kit control panel]

There are all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP number is perhaps lower than “common wisdom” in the security community would have it.) There are also numbers for the success rates of individual exploits, ranging from Java OBE at 35% down to MDAC at 1.85%.

That’s not the only captured control panel; there are more, for example from M86, SpiderLabs, and Webroot.

I’m fascinated by these numbers, and have two questions:

  • Is anyone capturing these statistics and tracking them over time? (A rough sketch of what I have in mind follows this list.)
  • Is there an aggregation of all these captures? If not, what are the best search terms to find them?
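
To make the first question concrete, here’s a minimal sketch of the sort of tracking I have in mind. Only the Java OBE (35%) and MDAC (1.85%) rates from the SANS capture above are real; the dates, sources, second capture, and data layout are all invented for illustration:

    # Sketch: record each captured panel as a dated observation, then look at trends.
    # Dates, sources, and the second capture's numbers are illustrative; only the
    # 35% and 1.85% rates come from the SANS screenshot discussed above.
    from collections import defaultdict
    from datetime import date

    captures = [
        {"captured": date(2013, 3, 1), "source": "SANS ISC",
         "rates": {"Java OBE": 0.35, "MDAC": 0.0185}},
        {"captured": date(2013, 4, 15), "source": "hypothetical second capture",
         "rates": {"Java OBE": 0.31, "MDAC": 0.02}},
    ]

    # Build a per-exploit time series so changes are visible across captures.
    series = defaultdict(list)
    for cap in captures:
        for exploit, rate in cap["rates"].items():
            series[exploit].append((cap["captured"], rate, cap["source"]))

    for exploit, points in sorted(series.items()):
        points.sort()
        first, last = points[0][1], points[-1][1]
        print(f"{exploit}: {first:.2%} -> {last:.2%} across {len(points)} captures")

Even two captures per exploit would start to answer the “over time” question; the hard part is finding and normalizing the screenshots, which is why I’m asking.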

Analyzing The Army's Accidental Test

According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large-scale natural experiment in “does it matter?”

Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs).

With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.
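
To make “assess if the compliance standards make a difference” concrete, here’s a minimal sketch of one comparison the IG could run. Every number in it is invented for illustration, and a real analysis would have to account for fleet sizes, missions, and differences in how each branch reports incidents:

    # Sketch: compare per-device incident rates between two branches using a
    # two-proportion z-test. All counts below are invented for illustration.
    from math import erf, sqrt

    def compare_incident_rates(incidents_a, devices_a, incidents_b, devices_b):
        p_a = incidents_a / devices_a
        p_b = incidents_b / devices_b
        pooled = (incidents_a + incidents_b) / (devices_a + devices_b)
        se = sqrt(pooled * (1 - pooled) * (1 / devices_a + 1 / devices_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the normal approximation.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return p_a, p_b, z, p_value

    # e.g., Army (findings not yet remediated) vs. another branch (fully compliant).
    print(compare_incident_rates(incidents_a=12, devices_a=14000,
                                 incidents_b=9, devices_b=11000))

If the rates aren’t distinguishable after a reasonable observation window, that’s evidence the controls aren’t buying much; if they are, we’d learn how much.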

So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.

Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)

However, based on the Wired article, this is not a report about breaches or bad outcomes; it’s a story about controls and control objectives.

Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.

Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.

It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.

[/update]

Breach Analysis: Data Source Biases

Bob Rudis has a fascinating and important post, “Once More Into The [PRC Aggregated] Breaches.” In it, he delves into the various data sources that the Privacy Rights Clearinghouse is tracking.

In doing so, he makes a strong case that data source matters, or as Obi-Wan said, “Luke, you’re going to find that many of the truths we cling to depend greatly on our own point of view”:

[Image: Breach counts by metatype and year]

I don’t want to detract from the work Bob’s done. He shows pretty clearly that human and accidental factors are exceeding technical ones as a source of incidents that reveal PII. Without detracting from that important result, I do want to add two points.

First, I reported a similar result in work released in Microsoft SIR v11, “Zeroing in on Malware Propagation Methods.” Of course, I was analyzing malware, rather than PII incidents. We need to get away from the idea that security is a purely technical problem.

Second, it’s time to extend our reporting regimes so that there’s a single source for data. The work done by non-profits like the Open Security Foundation and the Privacy Rights Clearinghouse has been awesome. But these folks are spending a massive amount of energy to collect data that ought to be available from a single source.

As we talk about mandatory breach disclosure and reporting, new laws should create and fund a single place where those reports must go. I’m not asking for additional data here (although additional data would be great). I’m asking that the reports we have now all go to one additional place, where an authoritative record will be published.

Of course, anyone who studies statistics knows that there are often different collections, and competition between resources. You can get your aircraft accident data from the NTSB or the FAA. You can get your crime statistics from the FBI’s Uniform Crime Reports or the National Crime Victimization Survey, and each has advantages and disadvantages. But each is produced because we consider the data an important part of overcoming the problem.

Many nations consider cyber-security to be an important problem, and it’s an area where new laws are being proposed all the time. These new laws really must make the data easier for more people to access.

The Fog of Reporting on Cyberwar

There’s a fascinating set of claims in Foreign Affairs’ “The Fog of Cyberwar”:

Our research shows that although warnings about cyberwarfare have become more severe, the actual magnitude and pace of attacks do not match popular perception. Only 20 of 124 active rivals — defined as the most conflict-prone pairs of states in the system — engaged in cyberconflict between 2001 and 2011. And there were only 95 total cyberattacks among these 20 rivals. The number of observed attacks pales in comparison to other ongoing threats: a state is 600 times more likely to be the target of a terrorist attack than a cyberattack. We used a severity score ranging from five, which is minimal damage, to one, where death occurs as a direct result from cyberwarfare. Of all 95 cyberattacks in our analysis, the highest score — that of Stuxnet and Flame — was only a three.

There’s also a pretty chart:

[Image: Cyber attacks graphic]

All of which distracts from what seems to me to be a fundamental methodological question: what counts as an incident, and how did the authors count those incidents? Did they use some database? Media queries? The article seems to imply that such things are trivial and unworthy of distracting the reader. Perhaps that’s normal for Foreign Affairs, but I don’t agree.

The question of what’s being measured is important for assessing whether the argument is convincing. For example, it’s widely believed that the hacking of Lockheed Martin was done by China to steal military secrets. Is that a state-on-state attack which is included in their data? If Lockheed Martin counts as an incident, how about the hacking of RSA as a precursor?

There’s a second set of questions, which relates to the known unknowns, the things we know we don’t know about. As every security practitioner knows, we sweep a lot of incidents under the rug. That’s changing somewhat as state laws have forced organizations to report breaches that impact personal information. Those laws are influencing norms in the US and elsewhere, but I see no reason to believe that all incidents are being reported. If they’re not being reported, then they can’t be in the chart.

That brings us to a third question. If we treat the chart as a minimum bar, how far is it from the actual state of affairs? Again, we have no data.

I did search for underlying data, but Brandon Valeriano’s publications page doesn’t contain anything that looks relevant, and I was unable to find such a page for Ryan Maness.

Published Data Empowers

There’s a story over at Bloomberg, “Experian Customers Unsafe as Hackers Steal Credit Report Data.” And much as I enjoy picking on the credit reporting agencies, what I really want to talk about is how the story came to light.

The cyberthieves broke into an employee’s computer in September 2011 and stole the password for the bank’s online account with Experian Plc, the credit reporting agency with data on more than 740 million consumers. The intruders then downloaded credit reports on 847 people, said Dana Pardee, a branch manager at the bank. They took Social Security numbers, birthdates and detailed financial data on people across the country who had never done business with Abilene Telco, which has two locations and serves a city of 117,000.

The incident is one of 86 data breaches since 2006 that expose flaws in the way credit-reporting agencies protect their databases. Instead of directly targeting Experian, Equifax Inc. and TransUnion Corp., hackers are attacking affiliated businesses, such as banks, auto dealers and even a police department that rely on reporting agencies for background credit checks.

This approach has netted more than 17,000 credit reports taken from the agencies since 2006, according to Bloomberg.com’s examination of hundreds of pages of breach notification letters sent to victims. The incidents were outlined in correspondence from the credit bureaus to victims in six states — Maine, Maryland, New Hampshire, New Jersey, North Carolina and Vermont. The letters were discovered mostly through public-records requests by a privacy advocate who goes by the online pseudonym Dissent Doe…

There are three key lessons. The first is for those who still say “anonymized, of course.” The second is for those who are okay with naming the victims, and think we’ve mined this ore and should move on to other things.

So, the first lesson: what enabled us to learn this? Obviously, it’s work by Dissent, but it’s more than that. It’s breach disclosure laws. We don’t anonymize the breaches; we report them.

These sorts of serendipitous discoveries are only possible when breaches and their details are reported. We don’t know in advance which details will matter, so ensuring that we get descriptions of what happened is highly important. From that, we discover new things.

The second lesson is that this hard work is being done by volunteers, working with an emergent resource. (Dissent’s post on her work is here.) There are lots of good questions about what a breach law should be. Some proposals for 24-hour notice appear to have been drafted by people who’ve never talked to anyone who’s investigated a breach. There are interesting questions of active investigations, or those few cases where revealing information about the breach could enable attackers to hurt others. But it seems reasonably obvious that the effort put into gathering data from many services is highly inefficient. That data ought to be available in one place, so that researchers like Dissent can spend their time learning new things.

The final lesson is one that we at the New School have been talking about for a while. Public data transforms our profession and our ability to protect people. If I may borrow a line, we’re not at the beginning of the end of that process, we’re at the end of the beginning, and what comes next is going to be awesome.

Base Rate & Infosec

At SOURCE Seattle, I had the pleasure of seeing Jeff Lowder and Patrick Florer present on “The Base Rate Fallacy.” The talk was excellent, laying out the idea of the base rate fallacy and how and why it matters to infosec. What really struck me about this talk was that about a week before, I had read a presentation of the fallacy with exactly the same example in Kahneman’s “Thinking, Fast and Slow.” The problem: you have a witness who’s 80% accurate describing a taxi as orange; what are the odds she’s right, given certain facts about the distribution of taxi colors in the city?

I had just read the discussion. I recognized the problem. I recognized that the numbers were the same. I recalled the answer. I couldn’t remember how to derive it, and got the damn thing wrong.

Well played, sirs! Game to Jeff and Patrick.
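
For the record, here’s the calculation I flubbed, written out as a minimal sketch. I’m assuming the standard setup from Kahneman’s cab problem: 15% of the city’s taxis are the color the witness named, and the witness is right 80% of the time:

    # Bayes' rule for the taxi/cab problem. Assumed setup (from Kahneman's version):
    # 15% of taxis are the named color; the witness is right 80% of the time.
    base_rate = 0.15     # P(taxi is the named color)
    hit_rate = 0.80      # P(witness names that color | taxi is that color)
    false_alarm = 0.20   # P(witness names that color | taxi is another color)

    evidence = hit_rate * base_rate + false_alarm * (1 - base_rate)
    posterior = (hit_rate * base_rate) / evidence
    print(f"{posterior:.0%}")  # ~41%, far below the 80% our intuition reaches for

Under that setup, the base rate drags the answer down to about 41%, which is exactly the intuition the fallacy trips up.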

Beyond that, there’s an important general lesson in the talk. It’s easy to make mistakes. Even experts, primed for the problems, fall into traps and make mistakes. If we publish only our analysis (or worse, engage in information sharing), then others can’t see what mistakes we might have made along the way.

This problem is exacerbated in a great deal of work by a lack of a methodology section, or a lack of clear definitions.

The more we publish, the more people can catch one another’s errors, and the more the field can advance.