Employees Say Company Left Data Vulnerable

There’s a recurring theme in data breach stories:

The risks were clear to computer experts inside $organization: The organization, they warned for years, might be easy prey for hackers.

But despite alarms as far back as 2008, $organization was slow to raise its defenses, according to former employees.

The particular quote is from “Ex-Employees Say Home Depot Left Data Vulnerable,” but you can find similar statements about healthcare.gov, Target, and most other breaches. It’s worth taking apart these claims a little bit, and asking what we can do about them.

This is a longish blog post, so the summary is: these claims are true, irrelevant, and a distraction from engineering more secure organizations.

I told you so?

First, these claims are true. Doubtless, in every organization of any size, there were people advocating all sorts of improvements, not all of which were funded. Employees who weren’t successful at driving effective change complain that “when they sought new software and training, managers came back with the same response: ‘We sell hammers.’” The “I told you so” isn’t limited to employees; there’s a long list of experts willing to wax philosophic about the mote in their neighbors’ eyes. This often comes in the form of “of course you should have done X.” For example, the Home Depot article includes a quote from Gartner: “Scanning is the easiest part of compliance…There are a lot of services that do this. They hardly cost any money.” I’ll get to that claim later in this article. First, let’s consider the budget items actually enumerated in the article.

Potential Spending

In the New York Times article on Home Depot, I see at least four programs listed as if they were trivial, cheap, and sure to have prevented the breach:

  1. Anti-virus
  2. Threat intelligence
  3. Continuous (network) anomaly detection
  4. Vulnerability scanning

Let’s discuss each in turn.

(1) The claims that even modern, updated anti-virus is trivially bypassed by malware employed by criminals are so common I’m not going to look for a link.
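
To see why, here’s a toy sketch of the core weakness of exact signature matching. (This is an illustration I’ve made up, not any real product’s logic; real AV does more than hash matching, but the arms race is similar.) Change one byte in a repacked sample and the signature no longer fires:

```python
import hashlib

# A toy "signature database": hashes of files known to be malicious.
known_bad = {hashlib.sha256(b"dropper build 1").hexdigest()}

sample = b"dropper build 1"
print(hashlib.sha256(sample).hexdigest() in known_bad)   # True: caught

# A trivially repacked variant -- one byte different -- sails right past.
variant = b"dropper build 2"
print(hashlib.sha256(variant).hexdigest() in known_bad)  # False: missed
```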

(2) Threat intelligence (and “sharing”) usually means a feed of “observables” or “indicators of compromise.” These typically include hashes of files dropped by intruders, IP addresses and domain names of the “command and control” servers, and markers of the fake emails which are sent, containing an exploit, a trojan horse, or a phishing link. Such a feed is useful only if your attackers don’t bother to change those things between attacks. The current state of these feeds and their use is such that many attackers don’t really bother.
(See also my previous comments on “Don’t share, publish:” we spend so much time on rules for sharing that we don’t share.) However, before saying “everyone should sign up for such services” or “they’ll be a silver bullet,” we should consider what the attackers will do, which is to buy the polymorphism services that more common malware has been using for years. So it is unlikely that threat intelligence would have prevented this breach.
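
For concreteness, here is a minimal sketch of how such a feed gets consumed; every indicator value below is invented (the IPs are from the documentation ranges), and real platforms dress this up considerably:

```python
# Hypothetical indicator feed (all values invented).
iocs = {
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    "ip": {"203.0.113.7"},
    "domain": {"update-check.example"},
}

# Observations from one host: a dropped file's hash and outbound traffic.
observed = {
    "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    "ip": "198.51.100.23",
    "domain": "update-check.example",
}

for kind, value in observed.items():
    if value in iocs[kind]:
        print(f"IOC hit: {kind} = {value}")

# Exact-match feeds only catch reuse; an attacker who re-hashes the
# dropper and rotates infrastructure between victims never appears here.
```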

(3) Continuous anomaly detection generally only works if you have excellent change management processes, careful network segmentation, and a mostly static business environment. In almost any real network, the volume of alarms from such systems is high, and the value of each alarm incredibly low. (This is a result of the organizations making the systems not wanting to be accused of negligence because their system didn’t “sound the alarm,” so they alarm on everything.) Most organizations that field such things end up ignoring the alarms, dropping the maintenance contracts, and leaving the systems in place to satisfy the PCI auditors.
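
The arithmetic behind the alarm flood is worth a moment. A back-of-the-envelope sketch, with numbers I’ve invented but which are not unreasonable for a large retail network:

```python
# Illustrative numbers only: a large retail network.
events_per_day = 10_000_000    # flows/logons/transactions inspected daily
false_positive_rate = 0.001    # a generously tuned 0.1%
true_intrusion_events = 50     # actual attacker activity, if any
detection_rate = 0.8

false_alarms = events_per_day * false_positive_rate    # 10,000 per day
true_alarms = true_intrusion_events * detection_rate   # 40 per day

print(f"{false_alarms:,.0f} false alarms vs {true_alarms:.0f} real ones")
print(f"Odds any given alarm is real: "
      f"{true_alarms / (true_alarms + false_alarms):.1%}")
# ~0.4%: after a few weeks of this, the console gets ignored.
```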

(4) Vulnerability scanning may be cheap, but like anomaly detectors, scanners are motivated to “sound the alarm” on everything. Most findings don’t come with push-button remediation, and even when that feature is offered, there’s a need to test the fix to see if it breaks anything, to queue it in the aforementioned change management process, and to work across some team boundary so the operations team takes action. None of that falls under the rhetoric of “hardly cost any money.”
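
A rough cost model makes the point; all the figures here are my invented placeholders, but the shape is right: the scan is the cheap part, and the remediation pipeline behind it is not.

```python
# Hypothetical cost model for the work *after* a "cheap" scan.
findings = 1_200                  # raw scanner output for a large estate
actionable = int(findings * 0.4)  # survivors of de-duplication and triage

hours_per_fix = {
    "triage": 0.5,        # is it real? is it exploitable here?
    "test": 2.0,          # does the patch break the point-of-sale app?
    "change_mgmt": 1.0,   # ticket, approval board, maintenance window
    "deploy_verify": 1.0, # ops team applies the fix and confirms it
}

total_hours = actionable * sum(hours_per_fix.values())
print(f"{actionable} actionable findings ~ {total_hours:,.0f} staff-hours")
print(f"At $100/hour, that's ${total_hours * 100:,.0f} "
      f"-- not 'hardly any money'")
```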

The Key Question: How to do better?

Any organization exists to deliver something, and usually that something is not cyber security. In Home Depot’s case, they exist to deliver building supplies at low cost. So how should Home Depot’s management make decisions about where to invest?

Security decisions are like a lot of other decisions in business. There’s insufficient information, people who may be acting deceitfully, and the stakes are high. But unlike a lot of other decisions, figuring out if you made the right one is hard. Managers make a lot of decisions, and the relationship between those decisions and the security outcomes is hard to understand.

The trouble is, in security, we like to hide our decisions and the outcomes of those decisions. As a result, we never learn. And employees keep saying “I told you so” about controls that may or may not help. As I said at BSides Las Vegas, companies are already paying the PR price; they need to move to talking about what happened.

With that information, we can do better at evaluating controls, and moving from opinions about what works (with the attendant “I told you so”) to evidence about effective investments in security.
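
To make that concrete: if organizations published outcomes, evaluating a control would look like ordinary statistics. A minimal sketch with entirely invented numbers, comparing breach rates between adopters and non-adopters of some control X:

```python
from math import sqrt

# Invented outcome data: did organizations running control X get breached?
with_control = {"orgs": 400, "breached": 28}
without_control = {"orgs": 600, "breached": 66}

p1 = with_control["breached"] / with_control["orgs"]        # 7.0%
p2 = without_control["breached"] / without_control["orgs"]  # 11.0%

# Two-proportion z-test: is the difference more than noise?
p = ((with_control["breached"] + without_control["breached"])
     / (with_control["orgs"] + without_control["orgs"]))
se = sqrt(p * (1 - p)
          * (1 / with_control["orgs"] + 1 / without_control["orgs"]))
z = (p2 - p1) / se
print(f"breach rate {p1:.1%} vs {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 suggests the control matters. Without shared outcome data,
# we can't even run this calculation -- we just trade opinions.
```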

Jolt Award for Threat Modeling

I am super-pleased to report that Threat Modeling: Designing for Security has been named a Jolt Finalist, the first security-centered book to make that list since Schneier’s Secrets and Lies in 2001.

My thanks to the judges, most especially to Gastón Hillar for the constructive criticism that “Unluckily, the author has chosen to focus on modeling and didn’t include code samples in the book. Code samples would have been very useful to make the subject clearer for developers who must imagine in their own lines of code how some of the attacks are performed.” He also says “Overall, this is an excellent volume that should be examined by most developers concerned with issues of security.” The full review is at “Jolt Finalist: Threat Modeling.”

Congratulations are also due to Mark Summerfield who won the Jolt Award for Python in Practice, Michael Mikowski and Josh Powell for their Jolt Productivity Award for Single Page Web Applications: JavaScript End-to-End and Bjarne Stroustrup for his Jolt Productivity Award: Programming: Principles and Practice Using C++ (2nd Edition). (I am especially consoled to have come in behind Stroustrup.)

BSides LV: Change Industry Or Change Professionals?

All through the week of BSides/BlackHat/Defcon, people came up to me to tell me that they enjoyed my BSides Las Vegas talk. (Slides, video). It got some press coverage, including an article by Jon Evans of TechCrunch, “Notes From Crazytown, Day One: The Business Of Fear.” Mr. Evans raises an interesting point: “the computer security industry is not in the business of openness, it is in the business of fear.” I’ve learned to be happy when people surface reasons that what I’m saying won’t work, because it gives us an opportunity to consider and overcome those objections.

At one level, Mr Evans is correct. Much of the computer security industry is in the business of fear. There are lots of incentives for the industry, many of which take us in wrong directions. (Mr. Evans acknowledges the role of the press in this; I appreciate his forthrightness, and don’t know what to do about that aspect of the problem, beyond pointing out that “company breached” is on its unfortunate way to being the new “dog bites man.”)

But I wasn’t actually asking the industry to change. I was asking professionals to change. And while that may appear to be splitting hairs, there’s an important reason that I ask people to consider issues of efficacy and burnout: I think we can incentivize people to act in their own long-term interest. It’s challenging, and that’s why, after talking to behavior change specialists, I chose to use a set of commitment devices to get people to commit to pushing organizations to disclose more.

I want to be super-clear on my request, because based on feedback, not everyone understood it. I do not expect everyone who tries will succeed. [#include Yoda joke.] All I’m asking people to do is push their organizations to do better. To share root cause analyses. Because if we do that, we will make things better.

It’s not about the industry, it’s about the participants. It’s about you and me. And we can do better, and in doing so, we can make things better.

CERT, Tor, and Disclosure Coordination

There’s been a lot said in security circles about a talk on Tor being pulled from Blackhat. (Tor’s comments are also worth noting.) While that story is interesting, I think the bigger story is the lack of infrastructure for disclosure coordination.

Coordinating information about vulnerabilities is a socially important function. Coordination makes it possible for software creators to create patches and distribute them so that those with the software can most easily protect themselves.

In fact, the function is so important that it was part of why CERT was founded: to “coordinate response to internet security incidents.” Incidents have always been more than just vulnerabilities, but vulnerability coordination was a big part of what CERT did.

The trouble is, it’s not a big part anymore. [See below for a clarification added August 21.] Now “The CERT Division works closely with the Department of Homeland Security (DHS) to meet mutually set goals in areas such as data collection and mining, statistics and trend analysis, computer and network security, incident management, insider threat, software assurance, and more.” (Same “about” link as before.)

This isn’t the first time that I’ve heard about an issue where CERT wasn’t able to coordinate disclosure. I want to be clear, I’m not critiquing CERT or their funders here. They’ve set priorities and strategies in a way that makes sense to them, and as far as I know, there’s been precious little pressure to have a vuln coordination function.

It’s time we as a security community talk about the infrastructure, not as a flamewar over coordination/responsibility/don’t blow your 0day, but rather, for those who would like to coordinate, how should they do so?

Heartbleed is an example of what can happen with an interesting vulnerability and incomplete coordination. (Thanks to David Mortman for pointing that out in reviewing a draft.) Systems administrators woke up Monday morning to incomplete information, a marketing campaign, and a slew of packages that hadn’t been updated.

Disclosure coordination is hard to do. There’s a lot of project management and cross-organizational collaboration. Doing that work requires a special mix of patience and urgency, along with an unusual combination of technical skill and diplomatic communication. Those requirements mean that the people who do the work are rare and expensive. What’s more, it’s helpful to have these people seated at a relatively neutral party. (In the age of governments flooding money into cyberwar, it’s not clear if there are any truly neutral parties available. Some disclosure coordination is managed by big companies with a stake in the issue, which is helpful, but it’s hard for researchers to predict and depend upon.) These issues are magnified because those who are great at vulnerability research rarely spend the time to develop coordination skills, and so an intermediary is even more valuable.
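
To give a feel for the project-management side, here’s a minimal sketch of the case states a coordinator might track. The states and transitions are my simplification for illustration, not CERT’s actual process:

```python
from enum import Enum, auto

class CaseState(Enum):
    REPORTED = auto()         # researcher contacts the coordinator
    VENDOR_NOTIFIED = auto()  # coordinator finds the right vendor contact
    FIX_IN_PROGRESS = auto()
    FIX_READY = auto()        # patch built, embargo date negotiated
    PUBLIC = auto()           # advisory and patch published together

# Legal transitions; real cases loop, stall, and leak out of order,
# and any state can collapse straight to PUBLIC via a leak.
TRANSITIONS = {
    CaseState.REPORTED: {CaseState.VENDOR_NOTIFIED, CaseState.PUBLIC},
    CaseState.VENDOR_NOTIFIED: {CaseState.FIX_IN_PROGRESS, CaseState.PUBLIC},
    CaseState.FIX_IN_PROGRESS: {CaseState.FIX_READY, CaseState.PUBLIC},
    CaseState.FIX_READY: {CaseState.PUBLIC},
}

def advance(state: CaseState, new_state: CaseState) -> CaseState:
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state.name} to {new_state.name}")
    return new_state

# A multi-vendor issue means one such state machine per vendor, all
# being herded toward a single coordinated publication date.
state = advance(CaseState.REPORTED, CaseState.VENDOR_NOTIFIED)
```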

Even setting that aside, is there anyone who’s stepping up to the plate to help researchers effectively coordinate and manage disclosure?

[Update: I had a good conversation with some people at CERT/CC, in which I learned that they still do coordination work, especially in cases where the vendor is dealing with an issue for the first time, the issue is multi-vendor, or there’s a conflict between vendor and researcher. The best way to reach them is via their vulnerability reporting form, but if that doesn’t work, you can also email cert@cert.org or call their hotline (the number is on their contact us page).]

[Update 2, Dec 2014: Allen Householder has a really interesting model of “Vulnerability Coordination and Concurrency Modeling” on the CERT/CC blog, which shows some of the complexity involved, and touches on topics above. Oh, and some really neat Petri net state models.]

#Apollo45

July 20, 1969.

I’ve blogged about it before.

There are people who can write eloquently about events of such significance. I am not one of them. I hope that doesn’t stand in the way of folks remembering the amazing accomplishment that the Apollo program was.


Etsy's Threat Modeling

Gabrielle Gianelli has pulled back the curtain on how Etsy threat modeled a new marketing campaign. (“Threat Modeling for Marketing Campaigns.”)

I’m really happy to see this post, and the approach that they’ve taken:

First, we wanted to make our program sustainable through proactive defenses. When we designed the program we tried to bake in rules to make the program less attractive to attackers. However, we didn’t want these rules to introduce roadblocks in the product that made the program less valuable from users’ perspectives, or financially unsustainable from a business perspective.

Gabrielle apologizes several times for not giving more specifics, e.g.:

I have to admit upfront, I’m being a little ambiguous about what we’ve actually implemented, but I believe it doesn’t really matter since each situation will differ in the particulars.

I think this is almost exactly right. I could probably tell you the specifics of the inputs to the machine learning algorithms they’re likely using. Not because I’m under NDA to Etsy (I’m not), but because such specifics have a great deal of commonality. More importantly, and here’s where I differ: I believe you don’t want to know those specifics. They would be very likely to distract you from going from a model (Etsy’s is a good one) to the specifics of your situation. So I would encourage Etsy to keep blogging like this, and to realize they’re at a great level of abstraction.
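
For flavor, here’s the kind of input I mean. These features (distinct accounts per IP, redemption velocity) are my guesses at common abuse signals, not anything Etsy has confirmed:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical redemption log for a promotion: (account, ip, timestamp).
redemptions = [
    ("alice",  "192.0.2.10", datetime(2014, 8, 1, 9, 0)),
    ("bot_01", "192.0.2.99", datetime(2014, 8, 1, 9, 1)),
    ("bot_02", "192.0.2.99", datetime(2014, 8, 1, 9, 1)),
    ("bot_03", "192.0.2.99", datetime(2014, 8, 1, 9, 2)),
]

# Signal 1: distinct accounts per IP (many accounts, one address: suspicious).
accounts_per_ip = defaultdict(set)
for account, ip, _ in redemptions:
    accounts_per_ip[ip].add(account)

# Signal 2: redemption velocity -- events from one IP in a short window.
window = timedelta(minutes=5)
for ip, accounts in accounts_per_ip.items():
    times = sorted(t for _, i, t in redemptions if i == ip)
    in_window = sum(1 for t in times if times[-1] - t <= window)
    print(f"{ip}: {len(accounts)} accounts, {in_window} redemptions in 5 min")
```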

So go read “Threat Modeling for Marketing Campaigns.”

Mail Chaos

The mail system I’ve been using for the last 19 years is experiencing what one might call an accumulation of chaos, and so I’m migrating to a new domain, shostack.org.

You can email me at my firstname@shostack.org, and my web site is now at http://adam.shostack.org

I am sorry for any inconvenience this may cause.

[Update: A number of folks have asked what happened. The simple answer is technical debt associated with maintaining servers in the basement. No drama, just life.]

What Security Folks Can Learn from Doctors

Stefan Larsson talks about “What doctors can learn from each other:”

Different hospitals produce different results on different procedures. Only, patients don’t know that data, making choosing a surgeon a high-stakes guessing game. Stefan Larsson looks at what happens when doctors measure and share their outcomes on hip replacement surgery, for example, to see which techniques are proving the most effective. Could health care get better — and cheaper — if doctors learn from each other in a continuous feedback loop? (Filmed at TED@BCG.)

Measuring and sharing outcomes of procedures? I’m sure our anti-virus software makes that unnecessary.

But you should watch the talk anyway — maybe someday you’ll need a new hip, and you’ll want to be able to confidently question the doctors draining you of evil humors.
