Introducing Cyber Portfolio Management

At RSA’17, I spoke on “Security Leadership Lessons from the Dark Side.”

Leading a security program is hard. Fortunately, we can learn a great deal from Sith lords, including Darth Vader and how he managed security strategy for the Empire. Managing a distributed portfolio is hard when rebel scum and Jedi knights interfere with your every move. But that doesn’t mean that you have to throw the CEO into a reactor core. “Better ways you will learn, mmmm?”

In the talk, I discussed how “security people are from Mars and business people are from Wheaton,” and how to overcome the communication challenges associated with that.

RSA has posted audio with slides, and you can take a listen at the link above. If you prefer the written word, I have a small ebook on Cyber Portfolio Management, a new paradigm for driving effective security programs. But I designed the talk to be the most entertaining intro to the subject.

Later this week, I’ll be sharing the first draft of that book with people who subscribe to my “Adam’s New Thing” mailing list. Adam’s New Thing is my announcement list for people who hate such things. I guarantee that you’ll get fewer than 13 messages a year.

Lastly, I want to acknowledge that at BSides San Francisco 2012, Kellman Meghu made the point that “they’re having a pretty good risk management discussion,” and that inspired the way I kicked off this talk.

2017 and Tidal Forces

There are two great blog posts at Securosis to kick off the new year.

Both are deep and important and worth pondering. I want to riff on something that Rich said:

On the security professional side I have trained hundreds of practitioners on cloud security, while working with dozens of organizations to secure cloud deployments. It can take years to fully update skills, and even longer to re-engineer enterprise operations, even without battling internal friction from large chunks of the workforce…

It’s worse than that. Recently on Emergent Chaos, I talked about Red Queen Races, where you have to work harder and harder just to keep up.

In the pre-cloud world, you could fully update your skills. You could be an expert on Active Directory 2003, or Check Point’s FireWall-1. You could generate friction over moving to AD2012. You no longer have that luxury. Just this morning, Amazon launched a new rev of something. Google is pushing a new rev of its G Suite to 5% of customers. Your skillset with the prior release is now out of date. (I have no idea if either really did this, but they could have.) Your expertise can no longer be a locked-in set of skills and knowledge. You need the meta-skills of modeling and learning. You need to understand what your model of AWS is, and you need to allocate time and energy to consciously learning about it.

That’s not just a change for individuals. It’s a change for how organizations plan for training, and it’s a change for how we should design training, as people will need lots more “what’s new in AWS in Q1 2017” training to augment “intro to AWS.”

Tidal forces, indeed.

Yahoo! Yippee? What to Do?

[Dec 20 update: The first draft of this post ended up with both consumer and enterprise advice, which made it complex. The enterprise half is now on the IANS blog: Never Waste a Good Crisis: Yahoo Edition.]

Yesterday, Yahoo disclosed that attackers broke into Yahoo in 2013 and stole details on a billion accounts. Brian Krebs summarizes what was taken, and also has a more general FAQ.

The statement says that for “potentially affected accounts, the stolen user account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (using MD5) and, in some cases, encrypted or unencrypted security questions and answers.”
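
Why do MD5-hashed passwords matter? MD5 is very fast to compute, so if the hashes were unsalted, an attacker can test guesses offline at enormous rates. Here's a minimal sketch of that dictionary attack; the hash and the wordlist are made up for illustration, and Yahoo hasn't published the details of how its hashing was done:

    # Sketch of an offline dictionary attack on an unsalted MD5 password hash.
    # The stolen hash and the wordlist are both made up for illustration.
    import hashlib

    stolen_hash = hashlib.md5(b"hunter2").hexdigest()  # stand-in for a breached hash

    wordlist = ["password", "123456", "letmein", "hunter2"]  # stand-in dictionary
    for guess in wordlist:
        if hashlib.md5(guess.encode()).hexdigest() == stolen_hash:
            print("cracked:", guess)
            break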

Yahoo says users should change their passwords and security questions and answers for any other accounts on which they used the same or similar information used for their Yahoo account.

The New York Times has an article “How Many Times Has Your Personal Information Been Exposed to Hackers?”

The big question is “How can you protect yourself in the future?” The Times is right to ask it, and their answer starts:

It’s pretty simple: You can’t. But you can take a few steps to make things harder for criminals. Turn on two-factor authentication, whenever possible. Most banking sites and ones like Google, Apple, Twitter and Facebook offer two-factor authentication. Change your passwords frequently and do not use the same password across websites.

I think the Times makes two important “mistakes” in this answer. [Update: I think “mistake” may be harsher than I mean: I wish they’d done things differently.]

The first mistake is to not recommend a password manager. Using a password manager is essential to using a different password on each website. I use 1Password, and recommend it. I also use it to generate random answers to “security questions” and use 1Password’s label/data fields to store those. I do hope that one day they start managing secret questions, but understand that that’s tricky because secret questions are not submitted to the web with standard HTML form names.
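
To make that concrete, here's a minimal sketch of generating a random answer to a “security question,” which you'd then store in your password manager. (1Password can generate these for you; the code below is just an illustration using Python's standard library.)

    # Sketch: generate a random, unguessable "answer" to a secret question,
    # then store it in your password manager alongside the site's entry.
    import secrets
    import string

    def random_answer(length=20):
        """Return a random string to use as a security-question answer."""
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print("Mother's maiden name:", random_answer())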

The reason I recommend 1Password is that it works well without the cloud, which means a cloud provider cannot disclose my passwords. Nor can they disclose my encrypted passwords; encryption is a mitigation for that first-layer information disclosure threat. (One of these days I should write up my complete password manager threat model.) These threats are important and concrete. 1Password competitor LastPass has repeatedly messed this up, and those problems are made worse by their design of mandatory centralization.

That’s not to say that 1Password is perfect. Tavis Ormandy has said “More password manager bugs out today and more due out soon. I’m not going to look at more, the whole industry is crazy,” and commented on 1Password with a GIF. Some of those issues have now been revealed. (Tavis is very, very good at finding security flaws, and this worries me a bit.)

But: authentication is hard. You must make a risk tradeoff. The way I think about the risk tradeoff is:

  • If I use a single password, it’s easily compromised in many places. (Information disclosure threats at each site, and in my browser.)
  • If I use a paper list, an attacker who compromises my browser can likely steal most of my passwords.
  • If I use a cloud list, an attacker who breaks into that cloud can steal the list. If the list is encrypted, then they can still attack it offline. If the cloud design either sends my master password to the cloud, or javascript to the client, then my master password is vulnerable to an attacker who has broken into the cloud. (A minimal sketch of the offline attack follows this list.)
  • If I use a paper list, I can’t back it up easily. (My backups are on my phone, and in a PGP encrypted file on a cloud provider.)
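
Here's that offline attack sketched out: an attacker with a copy of an encrypted vault can test master-password guesses locally, with no rate limiting. The salt, iteration count and wordlist below are all made up; real password managers differ in the details, and slow key derivation raises the per-guess cost without eliminating the threat.

    # Sketch of offline guessing against a stolen, encrypted password vault.
    # We model the vault as a salt plus a PBKDF2-derived verifier; all values
    # here are illustrative, not any real product's design.
    import hashlib

    salt = b"salt-from-the-stolen-vault"
    iterations = 100_000
    stolen_verifier = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, iterations)

    for guess in ["123456", "letmein", "correct horse"]:  # stand-in wordlist
        candidate = hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, iterations)
        if candidate == stolen_verifier:
            print("master password recovered:", guess)
            break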

So 1Password is the least bad of currently available options, and the Times should have put a stake in the ground on the subject. (Or perhaps their new “Wirecutter” division should take a look. Oh wait! They did. I disagree with their assessment, as stated above.)

The second big mistake is to assert that you can’t fully protect yourself in a simple, declarative sentence, and to let that sentence stand as the end of their answer. What’s that you say? It’s at the start of their answer, not the end? But it is the end. In today’s short attention-span world, you see those words and stop. You move on. It’s important that security advice be actionable.

So: use a password manager. Lie in your answers to “secret questions.” Tell different sites different lies. Use a password manager to remember them.

Learning from Our Experience, Part Z

One of the themes of The New School of Information Security is how other fields learn from their experiences, and how information security’s culture of hiding our incidents prevents us from learning.


Today I found yet another field where they are looking to learn from previous incidents and mistakes: zombies. From “The Zombie Survival Guide: Recorded Attacks”:

Organize before they rise!

Scripted by the world’s leading zombie authority, Max Brooks, Recorded Attacks reveals how other eras and cultures have dealt with–and survived–the ancient viral plague. By immersing ourselves in past horror we may yet prevail over the coming outbreak in our time.

Of course, we don’t need to imagine learning from our mistakes. Plenty of fields do it, and so don’t shamble around like zombies.

The Breach Response Market Is Broken (and what could be done)

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count“]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter. Their blog post.]

Why Don't We Have an Incident Repository?

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.

What Boards Want in Security Reporting

[Image: a sub-optimal security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

  • More than three in five board members say they are both significantly or very “satisfied” (64%) and “inspired” (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.
  • Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed who report that they understand everything they’re being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it’s hard to survey someone to see if they don’t understand, and the questions aren’t listed in the report.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)

FBI says their warnings were ignored

There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver’s “NSA and the No Good, Very Bad Monday,” and Ellen Nakashima’s “Powerful NSA hacking tools have been revealed online,” where several NSA folks confirm that the tool dump is real. See also Snowden’s comments on Twitter: “What’s new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.”) That’s not the part I want to talk about.

The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:

In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.

When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
[…]
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.

Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”

Shockingly, their warning did not enable the DNC to find anything.

When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.

There may be a line, or really, a balancing act, around disclosing what the FBI knows, and ensuring that how they know it is protected. (I’m going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rat hole on authorities, US vs non-US activity, etc, which are a distraction). Fundamentally, we can create a simple model of how the US government learns about these hacks:

  • Network monitoring
  • Kill chain-driven forensics
  • Agents working at the attacker
  • “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*

*This “fifth party take,” to use the NSA’s jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a comment that the GRU knows that the NSA knows about their hack because the GRU has owned additional operational servers?

Now, we can ask: if the FBI says “look for connections to Twitter when there’s no one logged into Alice’s computer,” does it allow the attacker to distinguish between those four methods?

No.

Now, it does disclose that that C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there’s another tradeoff, which is that as long as the penetration is active, the US government can continue to find indicators, and use them to find other break-ins. That’s undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That’s a bad tradeoff.

We have to think about and discuss priorities and tradeoffs. We need to talk about the policy which the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that’s sufficient in some eyes.

We are not having a policy discussion about these tradeoffs, and that’s a shame.

Here are some questions that we can think about:

  • Is the model presented above of how attacks are detected reasonable?
  • Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
  • What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who’s compromised a source.)
  • How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?

That’s a start. What other questions should we be asking so we can move from “Congressional leaders were briefed a year ago on hacking of Democrats” to “hackers were rebuffed from interfering in our elections,” or even “hackers don’t bother trying to attack elections”?

[Update: In “AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA“, Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as “Search logs for commands often passed during SQL injection.” This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
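
That kind of detail is actionable. For example, “search logs for commands often passed during SQL injection” translates directly into something an admin can run. Here's a minimal sketch; the log path and the patterns are my illustrations, not the actual indicators from the Flash:

    # Sketch: scan a web server access log for strings commonly seen in SQL
    # injection attempts. The patterns and the log path are illustrative,
    # not the indicators from the FBI Flash.
    import re

    SQLI = re.compile(
        r"union(\s|%20)+select|information_schema|xp_cmdshell|sleep\(",
        re.IGNORECASE,
    )

    with open("/var/log/httpd/access_log") as log:  # adjust path for your server
        for lineno, line in enumerate(log, 1):
            if SQLI.search(line):
                print(f"{lineno}: {line.rstrip()}")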

Consultants Say Their Cyber Warnings Were Ignored

Back in October 2014, I discussed a pattern of “Employees Say Company Left Data Vulnerable,” and it’s a pattern that we’ve seen often since. Today, I want to discuss the consultant’s variation on the story. This is less common, because generally smart consultants don’t comment on the security of their consultees. In this case, it doesn’t seem like the consultant’s report was leaked, but people are discussing it after a high-profile issue.

In brief, the DNC was hacked, probably by Russian intelligence, and emails were given to Wikileaks. Wikileaks published them without redacting things like credit card numbers or social security numbers. The head of the DNC has stepped down. (This is an unusual instance of someone losing their job, which is rare post-breach. However, she did not lose her job because of the breach, she lost it because the breach included information about how her organization tilted the playing field, and how she lied about doing so.)

This story captures a set of archetypes. I want to use this story as a foil for those archetypes, not to critique any of the parties. I’ll follow the pattern from “employees vs company” and present those three sections: “I told you so,” “potential spending,” and “how to do better.” I’ll also comment on preventability and “shame.”

Was it preventable?

Computer security consultants hired by the DNC made dozens of recommendations after a two-month review, the people said. Following the advice, which would typically include having specialists hunt for intruders on the network, might have alerted party officials that hackers had been lurking in their network for weeks… (“Democrats Ignored Cybersecurity Warnings Before Theft,” Michael Riley, Bloomberg.)

People are talking about this as if the DNC was ever likely to stop Russian intelligence from breaking into their computers. That’s a very, very challenging goal, one at which both US and British intelligence have failed. (And as I write this, an FBI agent has been charged with espionage on behalf of China.) There’s a lot of “might,” “could have,” and other words that say “possible” without assessing “probable.”

I told you so!

The report included “dozens of recommendations,” some of which, such as “taking special precautions to protect any financial information related to donors,” might be a larger project than a PCI compliance initiative. (The logic is that financial information collected by a party is more than just card numbers; there seem to be a lot of SSNs in the data as well.) If one recommendation is “get PCI compliant,” then “dozens of recommendations” might be a Sisyphean task, or perhaps the Augean stables are a better analogy. In either case, only in mythology can the actions be completed.

Missing from the discussion I’ve seen so far is any statement of what was done. Did the organization do the top-5 things the consultants said to do? (Did they even break things out into a top-5?)

Potential Spending

The review found problems ranging from an out-of-date firewall to a lack of advanced malware detection technology on individual computers, according to two of the people familiar with the matter.

It sounds like “advanced malware detection technology” would be helpful here, right? Hindsight is 20:20. An out-of-date firewall? Was it missing security updates (which would be worrisome, but less worrisome than one might think, depending on what those updates fix), or was it just not the latest revision? If it’s not the latest revision, it can probably still do its job. In carefully reading the article, I saw no evidence that any single recommendation, or even implementing all of them, would have prevented the breach.

The DNC is a small organization. They were working with a rapidly shifting set of campaign workers working for the Sanders and Clinton campaigns. I presume they’re also working on a great many state races, and the organizations those politicians are setting up.

I do not believe that doing everything in the consultant’s report could reasonably be expected to prevent a breakin by a determined mid-sized intelligence agency.

Shame

“Shame on them. It looks like they just did the review to check a box but didn’t do anything with it,” said Ann Barron-DiCamillo, who was director of US-CERT, the primary agency protecting U.S. government networks, until last February. “If they had acted last fall, instead of those thousands of e-mails exposed it might have been much less.”

Via Meredith Patterson, I saw “The Left’s Self-Destructive Obsession with Shame,” and there’s an infosec analog. Perhaps they would have found the attackers if they’d followed the advice, perhaps not. Does adding shame work to improve the cybers? If it did, it should have done so by now.

How to do better

I stand by what I said last time. The organization has paid the PR price, and we have learned nothing. What a waste. We should talk about exactly what happened at a technical level.

We should stop pointing fingers and giggling. It isn’t helping us. In many ways, the DNC is not so different from thousands of other small or mid-size organizations who hold sensitive information. Where is the list of effective practice for them to follow? How different is the set of recommendations in this instance from other instances? Where’s the list of “security 101” that the organization should have followed before hiring these consultants? (My list is at “Security 101: Show Your List!.”)

We should define the “smoke alarms and sprinklers” of cyber. Really, why does an organization like the DNC need to pay $60,000 for a cybersecurity policy? It’s a little hard to make sense of, but I think that net operating expenditures of $5.1m are a decent proxy for the size of the organization, and (if that’s right) roughly 1.2% of their net operating expenses ($60,000 of $5.1m) went to a policy review. Was that a good use of money? How else should they have spent that? What concrete things do we, as a profession or as a community, say they should have done? Is there a reference architecture, or a system in a box which they could have run that we expect would resist Russian intelligence?

We cannot expect every small org to re-invent this wheel. We have to do a better job of helping them.

A New Way to Tie Security to Business

[Image: a woman with a chart saying “That’s OK, I don’t know what it means either”]

As security professionals, sometimes the advice we get is to think about the security controls we deploy as some mix of “cloud access security brokerage” and “user and entity behavioral analytics” and “next generation endpoint protection.” We’re also supposed to “hunt”, “comply,” and ensure people have had their “awareness” raised. Or perhaps they mean “training,” but how people are expected to act post-training is often maddeningly vague, or worse, unachievable. Frankly, I have trouble making sense of it, and that’s before I read about how your new innovative revolutionary disruptive approach is easy to deploy to ensure that APT can’t get into my network to cloud my vision.

I’m making a little bit of a joke, because otherwise it’s a bit painful to talk about.

Really, we communicate badly. It hurts our ability to drive change to protect our organizations.

A CEO once explained his view of cyber. He said “security folks always jump directly into details that just aren’t important to me. It’s as if I met a financial planner and he started babbling about a mutual fund’s beta before he understood what my family needed.” It stuck with me. Executives are generally smart people with a lot on their plates, and they want us, as security leaders, to make ourselves understood.

I’ve been heads down with a small team, building a new kind of risk management software. It’s designed to improve executive communication. Our first customers are excited and finding that it’s changing the way they engage with their management teams. Right now, we’re looking for a few more forward-looking organizations that want to improve their security, allocate their resources better and link what they’re doing to what the business needs.

If you’re a leader at such a company, please send me an email at [first]@[last].org, leave a comment, or reach out via LinkedIn.