It’s Not The Crime, It’s The Coverup or the Chaos

Well, Richard Smith has “resigned” from Equifax.

The CEO being fired is a rare outcome of a breach, and so I want to discuss what’s going on and put it into context, which includes the failures at DHS and the Deloitte breach. Also, I usually aim to follow the advice to praise specifically and criticize in general, but I break that pattern here because we can learn so much from the specifics of these cases, and in so learning, do better.

Smith was not fired because of the breach. Breaches happen. Executives know this. Boards know this. The breach is outside of their control. Smith was fired because of the post-breach chaos. Systems that didn’t work. Tweeting links to a scam site for two weeks. PINs that were recoverable. Weeks of systems saying “you may have been a victim.” Headlines like “Why the Equifax Breach Stings So Bad” in the NYTimes. Smith was fired in part because of the post-breach chaos, which was something he was supposed to control.

But it wasn’t just the chaos. It was that Equifax displayed so much self-centeredness after the breach. They had the chutzpah to offer up their own product as a remedy. And that self-dealing comes from Equifax seeing itself as a victim. From failing to understand how the breach would be seen by the rest of the world. And that’s a very similar motive to the one that leads to coverups.

In The New School Andrew and I discussed how fear of firing was one reason that companies don’t disclose breaches. We also discussed how, once you agree that “security issues” are things which should remain secret or shared with a small group, you can spend all your energy on rules for information sharing, and have no energy left for actual information sharing.

And I think that’s the root cause of “U.S. Tells 21 States That Hackers Targeted Their Voting Systems” a full year after finding out:

The notification came roughly a year after officials with the United States Department of Homeland Security first said states were targeted by hacking efforts possibly connected to Russia.

A year.

A year.

A year after states were first targeted. A year in which “Obama personally warned Mark Zuckerberg to take the threats of fake news ‘seriously.’” (Of course, the two issues may not have been provably linkable at the time.) But. A year.

I do not know what the people responsible for getting that message to the states were doing during that time, but we have every reason to believe that it probably had to do with (and here, I am using not my sarcastic font, but my scornful one) “rules of engagement,” “traffic light protocols,” “sources and methods” and other things which are at odds with addressing the issue. (End scornful font.) I understand the need for these things. I understand protecting sources is a key role of an intelligence service which wants to recruit more sources. And I also believe that there’s a time to risk those things. Or we might end up with a President who has more harsh words for Australia than the Philippines. More time for Russia than Germany.

In part, we have such a President because we value secrecy over disclosure. We accept these delays and view them as reasonable. Of course, the election didn’t turn entirely on these issues, but on our electoral college system, which I discussed at some length, including ways to fix it.

All of which brings me to the Deloitte breach, “Deloitte hit by cyber-attack revealing clients’ secret emails.” Deloitte, along with the others who make up the big four audit firms, gets access to its clients’ deepest secrets, and so you might expect that the response to the breach would be similar levels of outrage. And I suspect a lot of partners are making a lot of hat-in-hand visits to boardrooms, and contritely trying to answer questions like “what the flock were you people doing?” and “why the flock weren’t we told?” I expect that there are going to be some very small bonuses this year. But, unlike our relationship with Equifax, boards do not feel powerless in relation to their auditors. They can pick and swap. Boards do not feel that the system is opaque and unfair. (They sometimes feel that the rules are unfair, but that’s a different failing.) The extended reporting time will likely be attributed to the deep analysis that Deloitte did so it could bring facts to its customers, and that might even be reasonable. After all, a breach is tolerable; chaos afterwards may not be.

The two biggest predictors of public outrage are chaos and coverups. No, that’s not quite right. The biggest causes are chaos and coverups. (Those intersect poorly with data brokerages, but are not limited to them.) And both are avoidable.

So what should you do to avoid them? There’s important work in preparing for a breach, and in preventing one.

  • First, run tabletop response exercises to understand what you’d do in various breach scenarios. Then re-run those scenarios with the principals (CEO, General Counsel) so they can practice, too.
  • To reduce the odds of a breach, realize that you need continuous and integrated security as part of your operational cycles. Move from focusing on pen tests, red teams and bug bounties to a focus on threat modeling, so you can find problems systematically and early.

I’d love to hear what other steps you think organizations often miss out on.

Breach Vouchers & Equifax 2017 Breach Links

[Thursday, September 21st is the latest of 5 updates.]

When I wrote “The Breach Response Market Is Broken,” I didn’t expect one of the players to validate everything I had to say. What I said was that the very act of firms contracting with breach response services inhibits the creation of a market for breach response, and the FTC should require them to give vouchers to consumers.

Vice Motherboard is reporting that “Firm Hired to Monitor Data Breaches Is Hacked, 143 Million Social Security Numbers Stolen.”

It’s not clear what database was accessed. On their website, Equifax says “No Evidence of Unauthorized Access to Core Consumer or Commercial Credit Reporting Databases” and “Company to Offer Free Identity Theft Protection and Credit File Monitoring to All U.S. Consumers.”

But here’s the thing: I don’t trust Equifax to protect data that … they just failed to protect. I want protection from an independent firm.

Equifax’s self-dealing in providing breach response services is unfair. No rational, well-informed consumer would select Equifax’s service in this situation. Equifax’s offering of credit file monitoring to all US consumers is also an unfair trade practice, which undercuts innovation, and limits the ability of new entrants to deliver effective services.

The FTC should require Equifax to send a voucher to each impacted individual which can be used to purchase any identity theft protection service on the market as of August, 2017.


Usually I don’t try to blog fast-moving stories, but I may make an exception.

Update 1, later that day:

Update 2, Sept 9:

  • The International Business Times reports “Equifax Lobbied To Kill Rule Protecting Victims Of Data Breaches.” They report Equifax wrote “a rule blocking companies from forcing their customers to waive class action rights would expose credit agencies ‘to unmanageable class action liability that could result in full disgorgement of revenues’ if companies are found to have illegally harmed their customers.” It’s a nice life, having the government block your victims from suing you, especially if you’re worried that the harm is great enough to result in ‘full disgorgement of revenues.’ Now, you might argue that’s hyperbole, but maybe it’s a real fear.
  • The Onion reports “Equifax Impressed By Hackers’ Ability To Ruin People’s Finances More Efficiently Than Company Can.”
  • Equifax once brought me to a Nine Inch Nails concert, and under the payola rules, I ought to have disclosed that when writing about them. It was over a decade ago, and had slipped my mind.

Update 3, Sept 12:

Update 4, September 16:

Update 5, September 21:

Yahoo! Yippee? What to Do?

[Dec 20 update: The first draft of this post ended up with both consumer and enterprise advice, which made it complex. The enterprise half is now on the IANS blog: Never Waste a Good Crisis: Yahoo Edition.]

Yesterday, Yahoo disclosed that attackers broke into Yahoo in 2013 and stole details on a billion accounts. Brian Krebs summarizes what was taken, and also has a more general FAQ.

The statement says that for “potentially affected accounts, the stolen user account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (using MD5) and, in some cases, encrypted or unencrypted security questions and answers.”

Yahoo says users should change their passwords and security questions and answers for any other accounts on which they used the same or similar information used for their Yahoo account.

The New York Times has an article “How Many Times Has Your Personal Information Been Exposed to Hackers?”

The big question is “How can you protect yourself in the future?” The Times is right to ask it, and their answer starts:

It’s pretty simple: You can’t. But you can take a few steps to make things harder for criminals. Turn on two-factor authentication, whenever possible. Most banking sites and ones like Google, Apple, Twitter and Facebook offer two-factor authentication. Change your passwords frequently and do not use the same password across websites.

I think the Times makes two important “mistakes” in this answer. [Update: I think mistake may be harsher than I mean: I wish they’d done differently.]

The first mistake is to not recommend a password manager. Using a password manager is essential to using a different password on each website. I use 1Password, and recommend it. I also use it to generate random answers to “security questions” and use 1Password’s label/data fields to store those. I do hope that one day they start managing secret questions, but understand that that’s tricky because secret questions are not submitted to the web with standard HTML form names.
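
Since I’m recommending the practice, here’s a minimal sketch of the idea in Python, using the standard secrets module. The questions below are just examples; the generated answers get stored in the password manager’s label/data fields as described above, never memorized.

```python
import secrets
import string

def random_answer(length: int = 20) -> str:
    """Generate a random string to use as a 'security question' answer.

    The answer lives in the password manager, never in memory or biography,
    so it can be as random as any generated password.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a fresh lie for each site and each question.
for question in ["Mother's maiden name?", "First pet?", "High school?"]:
    print(f"{question} -> {random_answer()}")
```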

The reason I recommend 1Password is that it works well without the cloud, and that means that a cloud provider cannot disclose my passwords. They also can’t disclose my encrypted passwords, where encrypting them is a mitigation for that first-layer information disclosure threat. (One of these days I should write up my complete password manager threat model.) These threats are important and concrete. 1Password competitor LastPass has repeatedly messed this up, and those problems are made worse by their design of mandatory centralization.

That’s not to say that 1Password is perfect. Tavis Ormandy has said “More password manager bugs out today and more due out soon. I’m not going to look at more, the whole industry is crazy,” and commented on 1Password with a GIF. Some of those issues have now been revealed. (Tavis is very, very good at finding security flaws, and this worries me a bit.)

But: authentication is hard. You must make a risk tradeoff. The way I think about the risk tradeoff is:

  • If I use a single password, it’s easily compromised in many places. (Information disclosure threats at each site, and in my browser.)
  • If I use a paper list, an attacker who compromises my browser can likely steal most of my passwords.
  • If I use a cloud list, an attacker who breaks into that cloud can steal the list. If the list is encrypted, then they can still attack it offline. If the cloud design either sends my master password to the cloud, or JavaScript to the client, then my master password is vulnerable to an attacker who has broken into the cloud. (See the sketch after this list.)
  • If I use a paper list, I can’t back it up easily. (My backups are on my phone, and in a PGP encrypted file on a cloud provider.)
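
To make that third bullet concrete, here’s a toy sketch (my illustration, not any product’s actual scheme) of client-side key derivation: the master password is stretched locally, so a compromised sync server only ever sees ciphertext and salt.

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password locally; only derived material is used.

    Because derivation happens on the client, a server that syncs the
    vault never sees the master password, only ciphertext and salt.
    """
    # 600,000 iterations is an illustrative work factor, not a spec.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, 600_000
    )

salt = os.urandom(16)  # stored alongside the encrypted vault
key = derive_vault_key("correct horse battery staple", salt)
# 'key' would feed an authenticated cipher over the vault; an attacker who
# steals the vault still faces an offline guessing attack, which the high
# iteration count slows down.
```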

So 1Password is the least bad of currently available options, and the Times should have put a stake in the ground on the subject. (Or perhaps their new “Wirecutter” division should take a look. Oh wait! They did. I disagree with their assessment, as stated above.)

The second big mistake is to assert that you can’t fully protect yourself in a simple, declarative sentence at the end of their answer. What’s that you say? It’s not the end of their answer? But it is. In today’s short attention-span world, you see those words and stop. You move on. It’s important that security advice be actionable.

So: use a password manager. Lie in your answers to “secret questions.” Tell different sites different lies. Use a password manager to remember them.

The Breach Response Market Is Broken (and what could be done)

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count”]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter. Their blog post.]

Threat Modeling Crypto Back Doors

Today, the Open Technology Institute released an open letter to the President of the United States from a broad set of organizations and experts, and I’m pleased to be a signer, and agree wholeheartedly with the text of the letter. (Some press coverage.)

I did want to pile on with an excerpt from chapter 9 of Threat Modeling: Designing for Security:

For another example of comparative threat modeling, consider the two systems shown in Figures 9-2 and 9-3. Figure 9-2 depicts an e-mail system, and Figure 9-3 is a version of 9-2 with a “lawful intercept” module added. (“Lawful intercept” is an Orwellian phrase for “thing which allows people to bypass the security features of your system.” Setting aside any arguments of “should we as a society have such a mechanism?” it’s possible to assess the technical security implications of adding such mechanisms.)

It should be obvious that Figure 9-2 is more secure than Figure 9-3. Using software-centric modeling, Figure 9-3 adds two data flows and a process; thus, by STRIDE-per-element, it has an additional 12 threats (tampering, information disclosure, DoS with each flow, for 6; and the six S, T, R, I, D, and E threats against the process for a total of 12). Additionally, Figure 9-3 has two apparent groupings of elevation-of-privilege threats: those posed by outsiders and those posed by software-allowed, but human-policy-violating, use. Thus, if Figure 9-2 has a list of threats (1…n), then Figure 9-3 has a list of threats (1…n+14).
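
For readers who want to check that arithmetic, here’s a toy STRIDE-per-element counter; the element names are illustrative stand-ins for the figures, which I can’t reproduce here.

```python
# STRIDE-per-element: which threat categories apply to each DFD element type.
STRIDE_MAP = {
    "process": ["S", "T", "R", "I", "D", "E"],  # all six threat categories
    "data_flow": ["T", "I", "D"],  # tampering, information disclosure, DoS
}

def count_threats(elements):
    """Sum per-element STRIDE threats for a list of (name, type) pairs."""
    return sum(len(STRIDE_MAP[etype]) for _name, etype in elements)

# The intercept module of Figure 9-3 adds one process and two data flows.
added = [
    ("lawful intercept module", "process"),
    ("mail server -> intercept module", "data_flow"),
    ("intercept module -> agency", "data_flow"),
]
extra = count_threats(added)  # 6 + 3 + 3 = 12
extra += 2  # plus the two groupings of elevation-of-privilege threats
print(f"Figure 9-3 adds {extra} threats")  # 14, i.e., threats (1...n+14)
```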

A Lawful Access Threat Model

If instead of software-centric modeling you use attacker-centered modeling on the systems shown in Figures 9-2 and 9-3, you find two sets of threats: First, each law enforcement agency that is authorized to connect adds its employees and IT systems as possible threats, and possible threat vectors. Second, attackers are likely to attack these features of the system to abuse them. The 2010 “Aurora” attacks on Google and others allegedly did exactly this (McMillan, 2010, and Adida, 2013). Thus, by comparing them you can see that the addition of these features creates additional risk. You might also wonder where those risks fall, but that’s outside the scope of this example.

More subtly, the addition of the code in Figure 9-3 is an obvious source of security vulnerabilities. As such, it may draw attention and possibly effort away from the rest of the system. Thus, the components that comprise Figure 9-2 are likely to be less secure, even ignoring the threats to the additional components. In the same vein, the requests and implementations for such backdoors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.

The technical arguments are simple. All other things being equal, systems with backdoors are unavoidably less secure, and probably worse than that. American companies cannot be competitive if the government forces us to add them.

[Update, July 7, 2015: A group of experts has released a longer paper, “Keys Under The Doormat.” Blogs from Ross Anderson and Steve Bellovin give additional perspective.]

The Onion and Breach Disclosure

There’s an important and interesting new breach disclosure that came out yesterday. It demonstrates leadership by clearly explaining what happened and offering up lessons learned.

In particular:

  • It shows the actual phishing emails
  • It talks about how the attackers persisted their takeover by sending a fake “reset your password” email (more on this below)
  • It shows the attacker IP address (46.17.103.125)
  • It offers up lessons learned

Unfortunately, it offers up some Onion-style ironic advice like “Make sure that your users are educated, and that they are suspicious of all links that ask them to log in.” I mean, “Local man carefully checks URLs before typing passwords.” Better advice would be to have bookmarks for the sites you need to log in to, or to use a password manager that knows what site you’re on.

The reset your password email is also fascinating. (“The attacker used their access to a different, undiscovered compromised account to send a duplicate email which included a link to the phishing page disguised as a password-reset link. This dupe email was not sent to any member of the tech or IT teams, so it went undetected.”) It shows that the attackers were paying attention, and it allows us to test the idea that, ummm, local man checks URLs before typing passwords.

Of course, I shouldn’t be too harsh on them, since the disclosure was, in fact, by The Onion, who is now engaged in cyberwar with the Syrian Electronic Army. The advice they offer is of the sort that’s commonly offered up after a breach. With more breaches, we’ll see details like “they used that account to send the same email to more Onion staff at about 2:30 AM.” Do you really expect your staff to be diligently checking URLs when it’s 2:30 AM?

Whatever you think, you should read “How the Syrian Electronic Army Hacked The Onion,” and ask if your organization would do as well.

MD5s, IPs and Ultra

So I was listening to the Shmoocon presentation on information sharing, and there was a great deal of discussion of how sharing too much information could reveal to an attacker that they’d been detected. I’ve discussed this problem a bit in “The High Price of the Silence of Cyberwar,” but wanted to talk more about it. What struck me is that the audience seemed to be thinking that an MD5 of a bit of malware was equivalent to revealing the Ultra intelligence taken from Enigma decrypts.

Now perhaps that’s because I’m re-reading Neal Stephenson’s Cryptonomicon, where one of the subplots follows the exploits of Unit 2702, dedicated to ensuring that use of Ultra is explainable in other ways.

But really, it was pretty shocking to hear people nominally dedicated to the protection of systems actively working to deny themselves information that might help them detect an intrusion faster and more effectively.

For an example of how that might work, read “Protecting People on Facebook.” First, let me give kudos to Facebook for revealing an attack they didn’t have to reveal. Second, Facebook says “we flagged a suspicious domain in our corporate DNS logs.” What is a suspicious domain? It may or may not be one not seen before. More likely, it’s one that some other organization has flagged as malicious. When organizations reveal the IP or domain names of command and control servers, it gives everyone a chance to learn if they’re compromised. It can have other positive effects. Third, it reveals a detection method which actually caught a bad guy, and that you might or might not be using. Now you can consider if you want to invest in DNS logging.
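
As a sketch of how cheap that detection can be once indicators are shared: with a list of flagged domains in hand, anyone can sweep their own DNS logs. The log format and domains below are hypothetical.

```python
# Hypothetical: check corporate DNS query logs against shared indicators.
SHARED_BAD_DOMAINS = {"evil-c2.example.net", "update-srv.example.org"}

def flag_suspicious(dns_log_path):
    """Yield (timestamp, domain) for queries matching shared indicators."""
    with open(dns_log_path) as log:
        for line in log:
            # Assume one query per line: "<timestamp> <client_ip> <domain>"
            parts = line.split()
            if len(parts) == 3 and parts[2] in SHARED_BAD_DOMAINS:
                yield parts[0], parts[2]

for ts, domain in flag_suspicious("dns_queries.log"):
    print(f"{ts}: query for {domain} -- possible compromise, investigate")
```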

Now, there’s a time to be quiet during incident response. But there’s a very real tradeoff to be made between concealing your knowledge of a breach and aiding and abetting other breaches.

Maybe it’s time for us to get angry when a breach disclosure doesn’t include at least one IP and one MD5? Because when the disclosure doesn’t include those facts, our ability to defend ourselves is dramatically reduced.
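
To make the point concrete, here’s a hedged sketch of what a defender can do the moment a disclosure includes a file hash: sweep a directory tree for matches. The hash and path below are placeholders, not real indicators.

```python
import hashlib
from pathlib import Path

# Placeholder value; substitute the MD5 published in a real disclosure.
DISCLOSED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"

def md5_of(path: Path) -> str:
    """Compute the MD5 of a file in chunks, so large files are fine."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Sweep an (illustrative) directory tree for files matching the hash.
for p in Path("/var/tmp").rglob("*"):
    if p.is_file() and md5_of(p) == DISCLOSED_MD5:
        print(f"match: {p} -- file matches the disclosed malware hash")
```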

Indicators of Impact — Ground Truth for Breach Impact Estimation

Ice bag might be a good ‘Indicator of Impact’ for a night of excess.

One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly.  What if we had solid evidence to use in breach impact estimation?  This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.

It is premised on the view that breach impact is best measured by the costs or resources associated with response, recovery, and restoration actions taken by the affected stakeholders. These activities can include both routine incident response and also more rare activities. (See our paper for more.) This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p 23 of our paper):

An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.

Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise”, which are basically technical traces associated with a breach event. There’s a community supporting an open exchange format — OpenIoC. The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security. In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.

From Appendix B, here are a few examples:

  • Was there a forensic investigation, above and beyond what your organization would normally do?
  • Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
  • Was any significant business process or function disrupted for a significant amount of time?
  • Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
  • Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?

The list goes on for three pages in Appendix B but we fully expect it to grow much longer as we get experience and other people start participating.  For example, there will be indicators that only apply to certain industries or organization types.  In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.

As signals, the Indicators of Impact are not perfect, nor do they individually provide sufficient evidence.  However, they have the very great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events.  In other words, they provide a ‘ground truth’ which has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and using appropriate inference and reasoning methods (e.g. see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.

There are also applications beyond breach impact estimation.  For example, they could be used in resilience planning and preparation.  They could also be used as part of information sharing in critical infrastructure to provide context for other information regarding threats, attacks, etc. (See this video of a Shmoocon session for a great panel discussion regarding challenges and opportunities regarding information sharing.)

Fairly soon, it would be good to define a light-weight standard format for Indicators of Impact, possibly as an extension to VERIS. I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework. There’s a public meeting April 3rd, and I might fly out for it. But I will submit to the NIST RFI.
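
As a strawman for such a format, an Indicator of Impact record could be as small as a JSON object carried alongside VERIS incident fields. Every field name below is my invention, not an agreed standard.

```python
import json

# Strawman record; all field names are illustrative, not standardized.
indicator_of_impact = {
    "indicator_id": "IoI-0001",
    "activity": "forensic_investigation",  # drawn from the Appendix B list
    "observed": True,
    "evidence": "public filing mentions a retained outside forensics firm",
    "stakeholder": "breached_organization",
    "estimated_cost_usd": {"low": 250_000, "high": 900_000},
    "source": "public_disclosure",
}
print(json.dumps(indicator_of_impact, indent=2))
```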

Your thoughts and comments?

New paper: "How Bad Is It? — A Branching Activity Model for Breach Impact Estimation"

Adam just posted a question about CEO “willingness to pay” (WTP) to avoid bad publicity regarding a breach event.  As it happens, we just submitted a paper to Workshop on the Economics of Information Security (WEIS) that proposes a breach impact estimation method that might apply to Adam’s question.  We use the WTP approach in a specific way, by posing this question to all affected stakeholders:

“Ex ante, how much would you be willing to spend on response and recovery for a breach of a particular type? Through what specific activities and processes?”

We hope this approach can bridge theoretical and empirical research, and also professional practice.  We also hope that this method can be used in public disclosures.
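
As a toy illustration of how a branching activity model might roll up to a single number: weight each response or recovery activity by the chance it occurs for a given breach type. The activities, probabilities, and costs below are invented for the example, not taken from the paper.

```python
# Toy roll-up: expected impact = sum over activities of P(activity) * cost.
activities = [
    # (activity, probability it occurs for this breach type, cost if it does)
    ("incident response", 1.00, 150_000),
    ("forensic investigation", 0.70, 400_000),
    ("customer notification", 0.90, 80_000),
    ("regulatory penalties", 0.25, 1_200_000),
]

expected_impact = sum(p * cost for _name, p, cost in activities)
print(f"expected breach impact: ${expected_impact:,.0f}")
```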

Paper: How Bad is it? – A Branching Activity Model to Estimate the Impact of Information Security Breaches

Infographic from the example in the paper

In the next few months we will be applying this to half a dozen historical breach episodes to see how it works out.  This model will also probably find its way into my dissertation as “substrate”.  The dissertation focus is on social learning and institutional innovation.

Comments and feedback are most welcome.

Paying for Privacy: Enterprise Breach Edition

We all know how companies don’t want to be named after a breach. Here’s a random question: how much is that worth to a CEO? What would a given organization be willing to pay to keep its name out of the press? (A priori, with at best a prediction of how the press will react.) Please don’t say “a lot”; please help me quantify it.

Another way to ask this question: What should a business be willing to pay to not report a security breach?

(Bonus question: how is it changing over time?)