You say noise, I say data

There is a frequent claim that stock markets are somehow irrational, unable to properly price the impact of cyber incidents. (That's not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

[Chart: Target stock price]

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyber incidents could overwhelm investors and harm a company's stock price, said Eric Cernak, cyber practice leader at the U.S. division of German insurer Munich Re. "If every time there's unauthorized access, you're filing that with the SEC, there's going to be a lot of noise," he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak's words have been taken out of context. After all, it's a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner Group. I do wish it showed the S&P 500.

Why Don't We Have an Incident Repository?

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.

FBI says their warnings were ignored

There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver's "NSA and the No Good, Very Bad Monday," and Ellen Nakashima's "Powerful NSA hacking tools have been revealed online," where several NSA folks confirm that the tool dump is real. See also Snowden's comments on Twitter: "What's new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.") That's not the part I want to talk about.

The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:

In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.

When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
[…]
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.

Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”

Shockingly, their warning did not enable the DNC to find anything.

When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.

There may be a line, or really, a balancing act, around disclosing what the FBI knows, and ensuring that how they know it is protected. (I'm going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rathole on authorities, US vs. non-US activity, etc., which are a distraction.) Fundamentally, we can create a simple model of how the US government learns about these hacks:

  • Network monitoring
  • Kill chain-driven forensics
  • Agents working at the attacker
  • “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*

*This "fifth party take", to use the NSA's jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a signal that the GRU knows that the NSA knows about their hack, because the GRU has owned additional operational servers?

Now, we can ask: if the FBI says "look for connections to Twitter when there's no one logged into Alice's computer," does it allow the attacker to distinguish among those methods?

No.

Now, it does disclose that the C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there's another tradeoff: as long as the penetration is active, the US government can continue to find indicators and use them to find other break-ins. That's undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That's a bad tradeoff.
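Note, too, how much more useful the hypothetical "connections to Twitter when no one is logged in" warning is than "look for unusual activity": it translates directly into a check a defender can run. A minimal sketch, with the data sources assumed rather than real; in practice you'd parse auth logs and netflow:

```python
# Sketch of an *actionable* version of the warning: flag connections to a
# watched destination that occur while no one is logged into the machine.
# The session and connection data below are illustrative placeholders.
from datetime import datetime

# (start, end) of interactive login sessions on Alice's machine
sessions = [
    (datetime(2016, 8, 1, 9, 0), datetime(2016, 8, 1, 17, 30)),
]

# (timestamp, destination) connection records from network monitoring
connections = [
    (datetime(2016, 8, 1, 10, 15), "twitter.com"),  # during a session: fine
    (datetime(2016, 8, 1, 3, 42), "twitter.com"),   # 3 AM, nobody logged in
]

WATCHED = {"twitter.com"}

for when, dest in connections:
    if dest in WATCHED and not any(a <= when <= b for a, b in sessions):
        print(f"ALERT: connection to {dest} at {when} with no active login")
```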

We have to think about and discuss priorities and tradeoffs. We need to talk about the policy which the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that’s sufficient in some eyes.

We are not having a policy discussion about these tradeoffs, and that’s a shame.

Here are some questions that we can think about:

  • Is the model presented above of how attacks are detected reasonable?
  • Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
  • What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who's compromised a source; a sketch follows this list.)
  • How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?
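To make the "fraction in a range" option concrete, here is a minimal sketch of what such a disclosure policy could look like. This is my own illustration, not any agency's practice, and the indicator values are deliberately fake:

```python
# Minimal sketch: disclose a random 25-35% of known indicators, so an
# attacker who reads the warning can't infer the full IOC collection
# (or which of their infrastructure remains quietly watched).
# All indicator values below are deliberately fake/illustrative.
import random

def sample_iocs(all_iocs, low=0.25, high=0.35):
    fraction = random.uniform(low, high)
    k = max(1, round(len(all_iocs) * fraction))
    return random.sample(all_iocs, k)

known_iocs = [
    "198.51.100.7",        # C&C address (RFC 5737 test range)
    "c2.example.invalid",  # look-alike domain
    "/images/x.php",       # webshell path
]
print(sample_iocs(known_iocs))
```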

That's a start. What other questions should we be asking so we can move from "Congressional leaders were briefed a year ago on hacking of Democrats" to "hackers were rebuffed from interfering in our elections" or, better, "hackers don't even bother trying to attack elections"?

[Update: In "AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA", Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as "Search logs for commands often passed during SQL injection." This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
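"Search logs for commands often passed during SQL injection" is exactly the kind of advice that translates into a concrete check. A rough sketch of such a log scan; the patterns are my own illustrative choices, not the Flash's list:

```python
# Rough sketch: scan httpd access logs for request strings commonly seen
# in SQL injection attempts. Patterns are illustrative, not exhaustive.
import re
import sys

SQLI_PATTERNS = [
    r"(?i)union(\s|\+|%20)+select",
    r"(?i)information_schema",
    r"(?i)xp_cmdshell",
    r"(?i)(\s|\+|%20)or(\s|\+|%20)+1=1",
    r"(?i)sleep\(\d+\)",
]
COMPILED = [re.compile(p) for p in SQLI_PATTERNS]

def scan(logfile):
    with open(logfile, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if any(p.search(line) for p in COMPILED):
                print(f"{logfile}:{lineno}: {line.strip()}")

for path in sys.argv[1:]:
    scan(path)
```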

The New Cyber Agency Will Likely Cyber Fail

The Washington Post reports that there will be a “New agency to sniff out threats in cyberspace.” This is my first analysis of what’s been made public.

Details are not fully released, but there are some obvious problems, which include:

  • “The quality of the threat analysis will depend on a steady stream of data from the private sector” which continues to not want to send data to the Feds.
  • The agency is based in the Office of the Director of National Intelligence. The world outside the US is concerned that the US spies on them, which means that the new center will get minimal cooperation from any company which does business outside the US.
  • There will be privacy concerns about US citizen information, much like there were with the NCTC.
  • The agency is modeled on the National Counterterrorism Center. See "Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists (2007)". A new agency certainly has upwards of three years to get rolling, because that will totally help.
  • The President continues to ask the wrong questions of the wrong people. (“President Obama wanted to know the details. What was the impact? Who was behind it? Monaco called meetings of the key agencies involved in the investigation, including the FBI, the NSA and the CIA.” But not the private sector investigators who were analyzing the hard drives and the logs?)

It's all well and good to take stabs at the plan, but perhaps more useful would be some positive contributions. I have been doing my best to make those contributions.

I sent a letter to the Data.gov folks back in 2009, asking for more transparency. Similarly, I sent an open letter to the new cyber-czar.

The suggestions there have not been acted upon. I believe there are human reasons why that's the case, so rather than re-iterate them, in 2013 I asked the Royal Society to look into the reasons that calls for an NTSB-like function have failed, as part of their research vision for the UK.

Cyber continues to suck. Maybe it's time to try openness, rather than a new secret agency secretly analyzing who's behind the attacks instead of why they succeed, or why our defenses aren't working. If we can't get to openness, and apparently we cannot, we should look at the reasons why. We should inventory them, including shame, fear of liability, and fear of customers fleeing, and assess their accuracy and predictive value. We should invest in a research program that helps us understand and address them, so we can get to a proper investigative approach to why cyber is failing. Only then will we be able to do anything about it.

Until then, keep moving those deck chairs.

Security 101: Show Your List!

Lately I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”

Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
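As an aside, the "good library" point is easy to make concrete. A minimal sketch using the bcrypt library, which generates and embeds a per-password salt for you; the work factor shown is illustrative, not a recommendation for your threat model:

```python
# Minimal sketch: salted, adaptive password hashing via the bcrypt library.
# bcrypt embeds a random per-password salt in the hash, so salting is free.
import bcrypt

def hash_password(password):
    # gensalt() bakes a random salt and a tunable cost factor into the hash.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def check_password(password, stored_hash):
    # checkpw re-derives the hash using the salt embedded in stored_hash.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", stored)
assert not check_password("Tr0ub4dor&3", stored)
```

The GPU-cracking point still applies: the tunable cost factor is what keeps brute force expensive as hardware improves.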

What I want to argue about is the backwards-looking nature of these statements. I want to argue because I did some searching, and not one of those folks I searched for has committed to a list of security 101, or of the "simple controls" every business should have.

This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.

So I’m going to make three requests for 2015:

  • If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
  • If you’re a reporter and someone tells you “X is security 101” please ask them for their list.
  • Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.

Oh, and since it’s sauce for the gander, here’s my list for individuals:

  • Stay up to date–get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
  • Use a firewall that blocks most inbound traffic.
  • Ensure you have a working backup of your data. ("Working" means tested; a spot-check sketch follows below.)

(There are complex arguments about AV software, and a lack of agreements about how to effectively test it. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list. I won’t be telling people not to use AV software.)
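On the backup point, "working" is the operative word; a backup you've never restored from is a hope, not a control. A minimal sketch of a spot-check, with the paths as placeholders for your own data and backup copy:

```python
# Minimal sketch: spot-check a backup by comparing content hashes of a
# random sample of files against the originals. Paths are placeholders.
import hashlib
import random
from pathlib import Path

def digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def spot_check(source, backup, samples=20):
    files = [p for p in source.rglob("*") if p.is_file()]
    ok = True
    for f in random.sample(files, min(samples, len(files))):
        copy = backup / f.relative_to(source)
        if not copy.exists() or digest(copy) != digest(f):
            print(f"MISMATCH: {f}")
            ok = False
    return ok

if spot_check(Path("/home/alice/documents"), Path("/mnt/backup/documents")):
    print("Backup spot-check passed.")
```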

*By "lately," I meant in 2012, when I wrote this, right after the LinkedIn breach. But I recently discovered that I hadn't posted it.

[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]

Employees Say Company Left Data Vulnerable

There’s a recurring theme in data breach stories:

The risks were clear to computer experts inside $organization: The organization, they warned for years, might be easy prey for hackers.

But despite alarms as far back as 2008, $organization was slow to raise its defenses, according to former employees.

The particular quote is from “Ex-Employees Say Home Depot Left Data Vulnerable,” but you can find similar statements about healthcare.gov, Target, and most other breaches. It’s worth taking apart these claims a little bit, and asking what we can do about them.

This is a longish blog post, so the summary is: these claims are true, irrelevant, and a distraction from engineering more secure organizations.

I told you so?

First, these claims are true. Doubtless, in every organization of any size, there were people advocating all sorts of improvements, not all of which were funded. Employees who weren't successful at driving effective change complain that "when they sought new software and training, managers came back with the same response: 'We sell hammers.'" The "I told you so" isn't limited to employees; there's a long list of experts who are willing to wax philosophic about the mote in their neighbors' eyes. This often comes in the form of "of course you should have done X." For example, the Home Depot article includes a quote from Gartner: "Scanning is the easiest part of compliance…There are a lot of services that do this. They hardly cost any money." I'll get to that claim later in this article. First, let's consider the budget items actually enumerated in the article.

Potential Spending

In the New York Times article on Home Depot, I see at least four programs listed as if they’re trivial, cheap, and would have prevented the breach:

  1. Anti-virus
  2. Threat intelligence
  3. Continuous (network) anomaly detection
  4. Vulnerability scanning

Let’s discuss each in turn.

(1) The claims that even modern, updated anti-virus is trivially bypassed by malware employed by criminals are so common I’m not going to look for a link.

(2) Threat intelligence (and "sharing") usually means a feed of "observables" or "indicators of compromise." These usually include hashes of files dropped by intruders, IP addresses and domain names for the "command and control" servers, or the fake emails which are sent, containing an exploit, a trojan horse, or a phishing link. This can be useful if your attackers don't bother to change such things between attacks, and the current state of these feeds and their use is such that many attackers don't really bother to make such changes. (See also my previous comments on "Don't share, publish:" we spend so much time on rules for sharing that we don't share.) However, before saying "everyone should sign up for such services" or treating them as a silver bullet, we should consider what the attackers will do, which is to buy the polymorphism services that more common malware has been using for years. So it is unlikely that threat intelligence would have prevented this breach.
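To be concrete about what consuming such a feed involves, here's a minimal sketch that sweeps a directory for files matching known-bad hashes. The feed format (one hex hash per line) and the paths are my assumptions; real feeds are richer (STIX, CSV with context):

```python
# Minimal sketch: match SHA-256 hashes of local files against a feed of
# known-bad hashes. Feed format (one hex hash per line) is an assumption.
import hashlib
from pathlib import Path

def load_feed(feed_path):
    lines = Path(feed_path).read_text().splitlines()
    return {ln.strip().lower() for ln in lines if ln.strip()}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root, bad_hashes):
    for p in Path(root).rglob("*"):
        if p.is_file() and sha256_of(p) in bad_hashes:
            print(f"IOC hit: {p}")

sweep("/var/tmp", load_feed("ioc_hashes.txt"))
```

Note how the limitation falls straight out of the code: one polymorphic rebuild of the dropper and the hash no longer matches.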

(3) Continuous anomaly detection generally only works if you have excellent change management processes, careful network segmentation, and a mostly static business environment. In almost any real network, the level of alarms from such systems is high, and the value of the alarms incredibly low. (This is a result of the organizations making the systems not wanting to be accused of negligence because their system didn't "sound the alarm," and so they alarm on everything.) Most organizations who field such things end up ignoring the alarms, dropping the maintenance contracts, and leaving the systems in place to satisfy the PCI auditors.
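The base-rate arithmetic explains why the alarms end up ignored. All of the numbers below are invented for illustration, but the shape of the result is robust:

```python
# Illustrative base-rate arithmetic (all numbers invented): even a detector
# with a 1% false-positive rate buries its rare true alarms.
events_per_day = 1_000_000    # events examined daily, almost all benign
true_malicious = 10           # actual malicious events among them
detection_rate = 0.9          # fraction of malicious events flagged
false_positive_rate = 0.01    # fraction of benign events flagged anyway

true_alarms = true_malicious * detection_rate
false_alarms = (events_per_day - true_malicious) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)
print(f"{false_alarms:,.0f} false alarms/day; precision = {precision:.2%}")
# -> roughly 10,000 false alarms a day; under 0.1% of alarms are real.
```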

(4) Vulnerability scanning may be cheap, but like the anomaly detectors, the scanners are motivated to "sound the alarm" on everything. Most alarms do not come with push-button remediation. Even when that feature is offered, there's a need to test the remediation to see if it breaks anything, to queue it in the aforementioned change management, and to work across some team boundary so the operations team takes action. None of which falls under the rhetoric of "hardly cost any money."

The Key Question: How to do better?

Any organization exists to deliver something, and usually that something is not cyber security. In Home Depot’s case, they exist to deliver building supplies at low cost. So how should Home Depot’s management make decisions about where to invest?

Security decisions are like a lot of other decisions in business. There's insufficient information, people who may be acting deceitfully, and the stakes are high. But unlike a lot of other decisions, figuring out if you made the right one is hard. Managers make a lot of decisions, and the relationship between those decisions and the security outcomes is hard to understand.

The trouble is, in security, we like to hide our decisions, and hide the outcomes of our decisions. As a result, we never learn. And employees keep saying "I told you so" about controls that may or may not help. As I said at BSides Las Vegas, companies are already paying the PR price; they need to move to talking about what happened.

With that information, we can do better at evaluating controls, and moving from opinions about what works (with the attendant “I told you so”) to evidence about effective investments in security.

BSides LV: Change Industry Or Change Professionals?

All through the week of BSides/BlackHat/Defcon, people came up to me to tell me that they enjoyed my BSides Las Vegas talk. (Slides, video). It got some press coverage, including an article by Jon Evans of TechCrunch, “Notes From Crazytown, Day One: The Business Of Fear.” Mr. Evans raises an interesting point: “the computer security industry is not in the business of openness, it is in the business of fear.” I’ve learned to be happy when people surface reasons that what I’m saying won’t work, because it gives us an opportunity to consider and overcome those objections.

At one level, Mr Evans is correct. Much of the computer security industry is in the business of fear. There are lots of incentives for the industry, many of which take us in wrong directions. (Mr. Evans acknowledges the role of the press in this; I appreciate his forthrightness, and don’t know what to do about that aspect of the problem, beyond pointing out that “company breached” is on its unfortunate way to being the new “dog bites man.”)

But I wasn't actually asking the industry to change. I was asking professionals to change. And while that may appear to be splitting hairs, there's an important reason that I ask people to consider issues of efficacy and burnout. That reason is that I think we can incentivize people to act in their own long-term interest. It's challenging, and that's why, after talking to behavior change specialists, I chose to use a set of commitment devices to get people to commit to pushing organizations to disclose more.

I want to be super-clear on my request, because based on feedback, not everyone understood it. I do not expect everyone who tries will succeed. [#include Yoda joke.] All I'm asking people to do is to push their organizations to do better. To share root cause analyses. Because if we do that, we will make things better.

It’s not about the industry, it’s about the participants. It’s about you and me. And we can do better, and in doing so, we can make things better.

What Security Folks Can Learn from Doctors

Stefan Larsson talks about "What doctors can learn from each other:"

Different hospitals produce different results on different procedures. Only, patients don’t know that data, making choosing a surgeon a high-stakes guessing game. Stefan Larsson looks at what happens when doctors measure and share their outcomes on hip replacement surgery, for example, to see which techniques are proving the most effective. Could health care get better — and cheaper — if doctors learn from each other in a continuous feedback loop? (Filmed at TED@BCG.)

Measuring and sharing outcomes of procedures? I’m sure our anti-virus software makes that unnecessary.

But you should watch the talk anyway — maybe someday you’ll need a new hip, and you’ll want to be able to confidently question the doctors draining you of evil humors.

Security Lessons From Star Wars: Breach Response

To celebrate Star Wars Day, I want to talk about the central information security failure that drives Episode IV: the theft of the plans.

First, we're talking about really persistent threats. Not like this persistence, but the "many Bothans died to bring us this information" sort of persistence. Until members of Comment Crew start going missing, we need a term more like 'pesky' to help us keep perspective.

Kellman Meghu has pointed out that once the breach was detected, the Empire got off to a good start on responding to it. They were discussing risk before they descended into bickering over ancient religions.

But there's another failure: knowledge of the breach apparently never leaves that room, and there's no organized activity to consider questions such as:

  • Can we have a red team analyze the plans for problems? This would be easy to do with a small group.
  • Should we re-analyze our threat model for this Death Star?
  • Is anyone relying on obscurity for protection? This would require letting the engineering organization know about the issue, and asking people to step forward if the theft of the plans impacts security. (Of course, we all know that the Empire is often intolerant, and there might be a need for an anonymous drop box.)

If the problem hadn’t been so tightly held, the Empire might not have gotten here:

[Image: Grand Moff Tarkin and General Bast]

General Bast: We’ve analyzed their attack, sir, and there is a danger. Should I have your ship standing by?

Grand Moff Tarkin: Evacuate? In our moment of triumph? I think you overestimate their chances.

There are a number of things that might have been done had the Empire known about the weakly shielded exhaust port. For example, they might have welded some steel beams across that trench. They might have put some steel plating up near the exhaust port. They might have landed a TIE fighter in the trench. They could have deployed some stormtroopers with those tripod-mounted guns that never quite seem to hit the Millennium Falcon. Maybe it's easier in a trench. I'm not sure.

What I am sure of is there’s all sorts of responses, and all of them depend on information leaving the hands of those six busy executives. The information being held too closely magnified the effect of those Bothan spies.

So this May the Fourth, ask yourself: is there information that you could share more widely to help defend your empire?

Exploit Kit Statistics

On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels:

[Screenshot: exploit kit control panel statistics]

There’s all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP number is perhaps lower than “common wisdom” in the security community would have it.) There are also numbers for the success rates of exploits, ranging from Java OBE at 35% down to MDAC at 1.85%.

That's not the only captured control panel. There are more, for example from M86, SpiderLabs, and Webroot.

I’m fascinated by these numbers, and have two questions:

  • Is anyone capturing the statistics shown and running analyses over time? (A sketch of what that could look like follows this list.)
  • Is there an aggregation of all these captures? If not, what are the best search terms to find them?
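The analysis itself would be the easy part. A sketch, assuming someone has transcribed captured panels into a CSV; the file name and schema here are my invention, and captures like the ones above would be the inputs:

```python
# Sketch: trend exploit-kit success rates over time, assuming a hand-built
# CSV of transcribed control panel captures with rows like:
#   date,source,os,rate
#   2013-05-01,SANS,Windows XP,0.1961
# (File name and schema are invented for illustration.)
import csv
from collections import defaultdict

def trend(path):
    series = defaultdict(list)
    with open(path) as fh:
        for row in csv.DictReader(fh):
            series[row["os"]].append((row["date"], float(row["rate"])))
    for points in series.values():
        points.sort()
    return series

for os_name, points in trend("panel_captures.csv").items():
    print(os_name, " -> ".join(f"{d}: {r:.1%}" for d, r in points))
```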