Incentives, Insurance and Root Cause

Over the decade or so since The New School book came out, there’s been a sea change in how we talk about breaches, and how we talk about those who got breached. We agree that understanding what’s going wrong should be a bigger part of how we learn. I’m pleased to have played some part in that movement.

As I consider where we are today, a question that we can’t answer sufficiently is “what’s in it for me?” “Why should I spend time on this?” The benefits may take too long to appear. And so we should ask what we could do about that. In that context, I am very excited to see a proposal from Rob Knake on “Creating a Federally Sponsored Cyber Insurance Program.”

He suggests that a full root cause analysis would be a condition of a federal insurance backstop:

The federally backstopped cyber insurance program should mandate that companies allow full breach investigations, which include on-site gathering of data on why the attack succeeded, to help other companies prevent similar attacks. This function would be similar to that performed by the National Transportation Safety Board (NTSB) for aviation incidents. When an incident occurs, the NTSB establishes the facts of the incident and makes recommendations to prevent similar incidents from occurring. Although regulators typically establish new requirements upon the basis of NTSB recommendations, most air carriers implement recommendations on a voluntary basis. Such a virtuous cycle could happen in cybersecurity if companies covered by a federal cyber insurance program had their incidents investigated by a new NTSB-like entity, which could be run by the private sector and funded by insurance companies.

The Breach Response Market Is Broken (and what could be done)

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count”]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter. Their blog post.]

FBI says their warnings were ignored

There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver’s “NSA and the No Good, Very Bad Monday,” and Ellen Nakashima’s “Powerful NSA hacking tools have been revealed online,” where several NSA folks confirm that the tool dump is real. See also Snowden’s comments on Twitter: “What’s new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.”) That’s not the part I want to talk about.

The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:

In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.

When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
[…]
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.

Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”

Shockingly, their warning did not enable the DNC to find anything.

When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.

There may be a line, or really, a balancing act, around disclosing what the FBI knows, and ensuring that how they know it is protected. (I’m going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rat hole on authorities, US vs non-US activity, etc, which are a distraction). Fundamentally, we can create a simple model of how the US government learns about these hacks:

  • Network monitoring
  • Kill chain-driven forensics
  • Agents working at the attacker
  • “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*

*This “fifth party take,” to use the NSA’s jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a comment that the GRU knows that the NSA knows about their hack because the GRU has owned additional operational servers?

Now, we can ask: if the FBI says “look for connections to Twitter when there’s no one logged into Alice’s computer,” does it allow the attacker to distinguish between those methods?

No.

Now, it does disclose that that C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there’s another tradeoff, which is that as long as the penetration is active, the US government can continue to find indicators, and use them to find other break-ins. That’s undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That’s a bad tradeoff.
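To make that concrete: an indicator like “connections to Twitter when no one is logged in” boils down to a check any defender could run without learning how the government came by it. Here’s a minimal sketch, assuming the psutil package; the watched domain and the “no interactive session” condition are illustrative stand-ins of my own, not actual FBI-supplied indicators.

```python
# Sketch: flag live outbound connections to a watched domain while no user is logged in.
# The watched domain and the "no logged-in user" condition are hypothetical examples.
import socket
import psutil

WATCHED_DOMAIN = "twitter.com"  # stand-in for whatever rendezvous point a warning names

def watched_ips(domain):
    """Resolve the watched domain to its current set of IP addresses."""
    return {info[4][0] for info in socket.getaddrinfo(domain, 443)}

def suspicious_connections():
    """Outbound connections to the watched domain while no interactive session exists."""
    if psutil.users():  # someone is logged in, so this particular indicator doesn't fire
        return []
    targets = watched_ips(WATCHED_DOMAIN)
    return [c for c in psutil.net_connections(kind="inet")
            if c.raddr and c.raddr.ip in targets]

if __name__ == "__main__":
    for conn in suspicious_connections():
        print(f"pid={conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")
```

Nothing about running a check like this tells the attacker which of the collection methods above produced the indicator, which is the point of the “No.”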

We have to think about and discuss priorities and tradeoffs. We need to talk about the policy which the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that’s sufficient in some eyes.

We are not having a policy discussion about these tradeoffs, and that’s a shame.

Here are some questions that we can think about:

  • Is the model presented above of how attacks are detected reasonable?
  • Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
  • What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who’s compromised a source; a sketch of what that partial release might look like follows this list.)
  • How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?
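To illustrate the “fraction in a range” option from the list above, here’s a minimal sketch with made-up indicator values: draw a random release fraction between 25% and 35%, so an attacker who reads the warning can’t work backwards from it to the full set of what’s known.

```python
# Sketch: release a random 25-35% sample of known IOCs so the published warning
# doesn't reveal the full extent of what the government knows. All values are made up.
import random

KNOWN_IOCS = [
    "203.0.113.7",                        # hypothetical C&C address
    "198.51.100.22",                      # hypothetical staging server
    "update-checker.example.org",         # hypothetical beacon domain
    "5d41402abc4b2a76b9719d911017c592",   # hypothetical malware hash
]

def release_subset(iocs, low=0.25, high=0.35):
    """Return a random sample covering a randomly chosen fraction between low and high."""
    fraction = random.uniform(low, high)
    count = max(1, round(fraction * len(iocs)))
    return random.sample(iocs, count)

if __name__ == "__main__":
    print(release_subset(KNOWN_IOCS))
```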

Those questions are a start. What other questions should we be asking so we can move from “Congressional leaders were briefed a year ago on hacking of Democrats” to “hackers were rebuffed from interfering in our elections,” or even “hackers don’t even bother trying to attack elections”?

[Update: In “AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA“, Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as “Search logs for commands often passed during SQL injection.” This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
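For a sense of what that kind of actionable detail enables, here’s a rough sketch of the “search logs for commands often passed during SQL injection” recommendation run against an Apache-style access log. The patterns and default log path are my own illustrations, not taken from the Flash itself.

```python
# Sketch: scan httpd access logs for strings commonly seen in SQL injection attempts.
# The patterns and default log path are illustrative, not drawn from the FBI Flash.
import re
import sys

SQLI_PATTERNS = [
    r"union(\s|%20|\+)+select",        # UNION SELECT probes
    r"(\s|%20|\+)or(\s|%20|\+)1=1",    # classic tautology
    r"information_schema",             # schema enumeration
    r"xp_cmdshell",                    # MSSQL command execution
    r"benchmark\s*\(",                 # MySQL timing probes
]
SQLI_RE = re.compile("|".join(SQLI_PATTERNS), re.IGNORECASE)

def scan(path):
    """Print log lines whose request strings match common SQL injection patterns."""
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if SQLI_RE.search(line):
                print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for logfile in sys.argv[1:] or ["/var/log/httpd/access_log"]:
        scan(logfile)
```

That’s the difference between “look for unusual activity” and something an on-call admin can actually run.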

Dear Mr. President

U.S. President Barack Obama says he’s “concerned” about the country’s cyber security and adds, “we have to learn from our mistakes.”

Dear Mr. President, what actions are we taking to learn from our mistakes? Do we have a repository of mistakes that have been made? Do we have a “capability” for analysis of these mistakes? Do we have a program where security experts can gain access to the repository, to learn from it?

I’ve written extensively on this problem, here on this blog, and in the book from which it takes its name. We do not have a repository of mistakes. We do not have a way to learn from those mistakes.

I’ve got to wonder why that is, and what the President thinks we’re doing to learn from our mistakes. I know he has other things on his mind, and I hope that our officials who can advise him directly take this opportunity to say “Mr. President, we do not learn from our mistakes.”

(Thanks to Chris Wysopal for the pointer to the comment.)

The New Cyber Agency Will Likely Cyber Fail

The Washington Post reports that there will be a “New agency to sniff out threats in cyberspace.” This is my first analysis of what’s been made public.

Details are not fully released, but there are some obvious problems, which include:

  • “The quality of the threat analysis will depend on a steady stream of data from the private sector” which continues to not want to send data to the Feds.
  • The agency is based in the Office of the Director of National Intelligence. The world outside the US is concerned that the US spies on them, which means that the new center will get minimal cooperation from any company which does business outside the US.
  • There will be privacy concerns about US citizen information, much like there was with the NCTC. For example, here.
  • The agency is modeled on the National Counter Terrorism Center. See “Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists (2007)“. A new agency certainly has upwards of three years to get rolling, because that will totally help.
  • The President continues to ask the wrong questions of the wrong people. (“President Obama wanted to know the details. What was the impact? Who was behind it? Monaco called meetings of the key agencies involved in the investigation, including the FBI, the NSA and the CIA.” But not the private sector investigators who were analyzing the hard drives and the logs?)

It’s all well and good to take potshots, but perhaps more useful would be some positive contributions. I have been doing my best to make those contributions.

I sent a letter to the Data.gov folks back in 2009, asking for more transparency. Similarly, I sent an open letter to the new cyber-czar.

The suggestions there have not been acted upon. Rather than re-iterate them, I believe there are human reasons why that’s the case, and so in 2013 I asked the Royal Society to look into the reasons that calls for an NTSB-like function have failed, as part of their research vision for the UK.

Cyber continues to suck. Maybe it’s time to try openness, rather than a new secret agency secretly analyzing who’s behind the attacks instead of why they succeed or why our defenses aren’t working. If we can’t get to openness, and apparently we cannot, we should look at the reasons why. We should inventory them, including shame, fear of liability, and fear of customers fleeing, and assess their accuracy and predictive value. We should invest in a research program that helps us understand and address those reasons, so we can get to a proper investigative approach to why cyber is failing; only then will we be able to do anything about it.

Until then, keep moving those deck chairs.

South Carolina

It’s easy to feel sympathy for the many folks impacted by the hacking of South Carolina’s Department of Revenue. With 3.6 million taxpayer social security numbers stolen, those people are the biggest victims, and I’ll come back to them. It’s also easy to feel sympathy for the folks in IT and IT management, all the way up to the Governor. The folks in IT made a call to use Trustwave for PCI monitoring, because Trustwave offered PCI compliance. They also made the call to not use a second monitoring system. That decision may look easy to criticize, but I think it’s understandable. Having two monitoring systems means more than doubling staff workloads in responding. (You have to investigate everything, and then you have to correlate and understand discrepancies.)

At the same time, I think it’s possible to take important lessons from what we do know. Each of these is designed to be a testable claim.

Compliance doesn’t prevent hacking.

In his September letter to Haley, [State Inspector General] Maley concluded that while the systems of cabinet agencies he had finished examining could be tweaked and there was a need for a statewide uniform security policy, the agencies were basically sound and the Revenue Department’s system was the “best” among them. (“Foreign hacker steals 3.6 million Social Security numbers from state Department of Revenue“, Tim Smith, Greenville Online)

I believe the reason that compliance doesn’t prevent hacking is because the compliance systems are developed without knowledge of what really goes wrong. That is, they lack feedback loops. They lack testability. They lack any mechanism for ensuring that effort has payoff. (My favorite example is password expiration times. Precisely how much more secure are you with a 60 day expiration policy versus a 120 day policy? Is such a policy worth doubling staff effort?)
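One way to make that question concrete: under a simple model where a password is stolen at a uniformly random point in its life and stays useful until the next forced change, the average exposure window is half the expiration period. Here’s a back-of-the-envelope sketch of that model (my own framing, not anything taken from a compliance standard):

```python
# Back-of-the-envelope model: a password stolen at a uniformly random time remains
# useful until the next forced change, so average exposure is half the expiry period.
import random

def average_exposure_days(expiry_days, trials=100_000):
    """Average days between a uniformly random compromise and the next forced change."""
    return sum(expiry_days - random.uniform(0, expiry_days) for _ in range(trials)) / trials

if __name__ == "__main__":
    for expiry in (60, 120):
        print(f"{expiry}-day policy: ~{average_exposure_days(expiry):.0f} days average exposure, "
              f"{365 / expiry:.1f} forced changes per user per year")
```

Halving the expiration period halves the average exposure window but doubles the forced changes; whether that trade is worth the staff effort is exactly the kind of question a feedback loop would let us answer.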

You don’t know how your compliance program differs from the SC DoR’s.

I’m willing to bet that 90% of my readers do not know what exactly the SC DoR did to protect their systems. You might know that it was PCI (and Trustwave as a vendor). But do you know all the details? If you don’t know the details, how can you assess if your program is equivalent, better, or worse? If you can’t do that, can you sleep soundly?

But actually, that’s a red herring. Since compliance programs often contain a bunch of wasted effort, knowing how yours lines up to theirs is less relevant than you’d think. Maybe you’re slacking on something they put a lot of time into. Good for you! Or not, maybe that was the thing that would have stopped the attacker, if only they’d done it a little more. Comparing one to one is a lot less interesting than comparing to a larger data set.

We don’t know what happened in South Carolina

Michael Hicks, the director of the Maryland Cybersecurity Center at the University of Maryland, said states needed a clearer understanding of the attack in South Carolina.

“The only way states can raise the level of vigilance,” Mr. Hicks said, “is if they really get to the bottom of what really happened in this attack.” (“Hacking of Tax Records Has Put States on Guard,” Robbie Brown, New York Times.)

Mr. Hicks gets a New School hammer, for nailing that one.

Lastly, I’d like to talk about the first victims. The 3.6 million taxpayers. That’s 77% of the 4.6 million people in the state. That would reasonably be the entire taxpaying population. We don’t know how much data was actually leaked. (What we know is a floor. Was it entire tax returns? Was it all the information that banks report? How much of it was shared data from the IRS?) We know that these victims are at long term risk and have only short term protection. We know that their SSNs are out there, and I haven’t heard that the Social Security Administration is offering them new ones. There’s a real, and under-discussed difference between SSN breaches and credit card breaches. Let’s not even talk about biometric breaches here.

At the end of the day, there are a lot of victims of this breach. And while it’s easy to point fingers at the IT folks responsible, I’m starting to wonder if perhaps we’re all responsible. To the extent that few of us can answer Mr. Hicks’s question, to the extent that we don’t learn from one another’s mistakes, don’t we all make defending our systems harder? We should learn what went wrong, and we should recognize that not talking about root causes helps things go wrong in the future.

Without detracting from the crime that happened in South Carolina, there’s a bigger crime if we don’t learn from it.

“Update”: We now know a fair amount
The above was written and accidentally not posted a few weeks ago. I’d like to offer up my thanks to the decision makers in South Carolina for approving Mandiant’s release of a public and technically detailed version of their report, which is short and fascinating. I’d also like to thank the folks at Mandiant for writing in clear, understandable language about what happened. Nicely done, folks!

The Evolution of Information Security

A little while back, a colleague at the NSA reached out to me for an article for their “Next Wave” journal, with a special topic of the science of information security. I’m pleased with the way the article and the entire issue came out, and so I’m glad that the NSA has decided to release it.

The core of the article is how to evaluate the investments we make in security, something we can do today and at low cost, if only we choose to.

The entire article is available here: The Next Wave: Security Science, and I’m happy to be able to make my article available as a separate (high quality) PDF: “The Evolution of Information Security.”

Breach Notification in France

Over at the Proskauer blog, Cecile Martin writes “Is data breach notification compulsory under French law?”

On May 28th, the Commission nationale de l’informatique et des libertés (“CNIL”), the French authority responsible for data privacy, published guidance on breach notification law affecting electronic communications service providers. The guidance was issued with reference to European Directive 2002/58/EC, the e-Privacy Directive, which imposes specific breach notification requirements on electronic communication service providers.


In France, all data breaches that affect electronic communication service providers need to be reported [to CNIL], regardless of the severity. Once there is a data breach, service providers must immediately send written notification to CNIL, stating the following…

This creates a fascinating data set at CNIL. I hope that they’ll operate with a spirit of transparency, and produce in-depth analysis of the causes of breaches and the efficacy of the defensive measures that companies employ.

Big Brother Watch report on breaches

Over at the Office of Inadequate Security, Dissent says everything you need to know about a new report from the UK’s Big Brother Watch:

Extrapolating from what we have seen in this country, what the ICO learns about is clearly only the tip of the iceberg there. I view the numbers in the BBW report as a significant underestimate of the number of breaches that actually occurred because not only are we not hearing from 9% of entities, but many authorities that did report probably did not detect or learn of all of the breaches they actually experienced. BBC notes, “For example, it does seem surprising that in 263 local authorities, not even a single mobile phone or memory stick was lost.” “Surprising” is a very diplomatic word. (“What They Didn’t Know: Big Brother Watch report on breaches highlights why we need mandatory disclosure“)

Nate Silver in the NYT: A Bayesian Look at Assange

From The Fine Article:

Under these circumstances, then, it becomes more likely that the charges are indeed weak (or false) ones made to seem as though they are strong. Conversely, if there were no political motivation, then the merits of the charges would be more closely related to authorities’ zealousness in pursuing them, and we could take them more at face value.