Introducing Cyber Portfolio Management

At RSA’17, I spoke on “Security Leadership Lessons from the Dark Side.”

Leading a security program is hard. Fortunately, we can learn a great deal from Sith lords, including Darth Vader and how he managed security strategy for the Empire. Managing a distributed portfolio is hard when rebel scum and Jedi knights interfere with your every move. But that doesn’t mean that you have to throw the CEO into a reactor core. “Better ways you will learn, mmmm?”

In the talk, I discussed how “security people are from Mars and business people are from Wheaton,” and how to overcome the communication challenges associated with that.

RSA has posted audio with slides, and you can take a listen at the link above. If you prefer the written word, I have a small ebook on Cyber Portfolio Management, a new paradigm for driving effective security programs. But I designed the talk to be the most entertaining intro to the subject.

Later this week, I’ll be sharing the first draft of that book with people who subscribe to my “Adam’s New Thing” mailing list. Adam’s New Thing is my announcement list for people who hate such things. I guarantee that you’ll get fewer than 13 messages a year.

Lastly, I want to acknowledge that at BSides San Francisco 2012, Kellman Meghu made the point that “they’re having a pretty good risk management discussion,” and that inspired the way I kicked off this talk.

What CSOs can Learn from Pete Carroll


If you listen to the security echo chamber, after an embarrassing failure like a data breach, you lose your job, right?

Let’s look at Seahawks Coach Pete Carroll, who made what the hometown paper called the “Worst Play Call Ever.” With less than a minute to go in the Super Bowl, and the game hanging in the balance, the Seahawks passed. It was intercepted, and…game over.

                | Breach                 | Lose the Super Bowl
Publicity       | News stories, letters  | Half of America watches the game
Headline        | Another data breach    | Worst Play Call Ever
Cost            | $187 per record!       | Tens of millions in sponsorship
Public response | Guessing, not analysis | Monday morning quarterbacking*
Outcome         | CSO loses job†         | Pete Carroll remains employed

So what can the CSO learn from Pete Carroll?

First and foremost, have back-to-back winning seasons. Since you don’t have seasons, you’ll need to do something else that builds executive confidence in your decision making. (Nothing builds confidence like success.)

Second, you don’t need perfect success, you need successful prediction and follow-through. Gunnar Peterson has a great post about the security VP winning battles. As you start winning battles, you also need to predict what will happen. “My team will find 5-10 really important issues, and fixing them pre-ship will save us a mountain of technical debt and emergency events.” Pete Carroll had that—a system that worked.

Executives know that stuff happens. The best laid plans…no plan ever survives contact with the enemy. But if you routinely say things like “one vuln, and it’s over, we’re pwned!” or “a breach here could sink the company!” you lose any credibility you might have. Real execs expect problems to materialize.

Lastly, after what had to be an excruciating call, Carroll took the conversation to next year and to building the team’s confidence, rather than dwelling on the past.

What Pete Carroll has is a record of delivering what executives wanted, rather than delivering excuses, hyperbole, or jargon. Do you have that record?

* Admittedly, it started about 5 seconds after the play, and come on, how could I pass that up? (Ahem)
† I’m aware of the gotcha risk here. I wrote this the day after Sony Pictures Chairman Amy Pascal was shuffled off to a new studio.

Security Lessons from Drug Trials

When people don’t take their drugs as prescribed, it’s for very human reasons.

Typically they can’t tolerate the side effects, the cost is too high, they don’t perceive any benefit, or they’re just too much hassle.

Put these very human (and very subjective) reasons together, and they create a problem that medicine refers to as non-adherence. It’s an awkward term that describes a daunting problem: about 50% of people don’t take their drugs as prescribed, and this creates some huge downstream costs. Depending how you count it, non-adherence racks up between $100 billion and $280 billion in extra costs – largely due to a condition worsening and leading to more expensive treatments down the line.

So writes Thomas Goetz in “Getting People To Take Their Medicine.” But he is not simply griping about the problem; he’s presenting a study of ways to address it.

That’s important because we in information security also ask people to do things, from updating their software to trusting certain pixels and not other visually identical pixels, and they don’t do those things, also for very human reasons.

His conclusion applies almost verbatim to information security:

So we took especial interest in the researcher’s final conclusion: “It is essential that researchers stop re-inventing the poorly performing ‘wheels’ of adherence interventions.” We couldn’t agree more. It’s time to stop approaching adherence as a clinical problem, and start engaging with it as a human problem, one that happens to real people in their real lives. It’s time to find new ways to connect with people’s experiences and frustrations, and to give them new tools that might help them take what the doctor ordered.

If only information security’s prescriptions were backed by experiments as rigorous as clinical trials.

(I’ve previously shared Thomas Goetz’s work in “Fear, Information Security, and a TED Talk.”)

Usable Security: History, Themes, and Challenges (Book Review)

Simson Garfinkel and Heather Lipford’s Usable Security: History, Themes, and Challenges should be on the shelf of anyone who is developing software that asks people to make decisions about computer security.

We have to ask people to make decisions because they have information that the computer doesn’t. My favorite example is the Windows “new network” dialog, which asks what sort of network you’re connecting to: work, home, or coffee shop. The answer is used to configure the firewall. My least favorite example is phishing, where people are asked to make decisions about technical minutiae before authenticating. Regardless, we are not going to entirely remove the need for people to make decisions about computer security. So we can either learn to gain their participation in more effective ways, or we can accept a very high failure rate. The former option is better, and this book is a substantial contribution.
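
As a toy illustration of that first example: the person’s answer supplies context the machine lacks, and that context selects a policy. The sketch below is mine, not Windows’ actual mechanism; the profile names and settings are assumptions.

```python
# Toy sketch: a person's "what kind of network is this?" answer supplies
# context the computer doesn't have, and that context picks a firewall posture.
# This is illustrative only, not how Windows actually configures its firewall.

FIREWALL_PROFILES = {
    "home":   {"inbound_default": "allow-local", "file_sharing": True,  "discovery": True},
    "work":   {"inbound_default": "allow-local", "file_sharing": True,  "discovery": True},
    "public": {"inbound_default": "block",       "file_sharing": False, "discovery": False},
}

def configure_firewall(network_type: str) -> dict:
    """Unknown or missing answers fall back to the most restrictive profile."""
    return FIREWALL_PROFILES.get(network_type, FIREWALL_PROFILES["public"])

if __name__ == "__main__":
    print(configure_firewall("public"))   # coffee shop: lock it down
    print(configure_firewall("home"))     # home: allow local sharing and discovery
```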

It’s common for designers to throw up their hands at these challenges, saying things like “given a choice between security and dancing babies, people will choose dancing babies every time,” or “you can’t patch human stupidity.” However, a recently published study by Google and UCSD found that even the best phishing sites fooled only 45% of the people who clicked through, while overall only 13% were fooled. (There’s a good summary of that study available.) Claiming that “people will choose dancing babies 13% of the time” just doesn’t seem like a compelling argument against trying.

This slim book is a review of the academic work that’s been published, almost entirely in the last 20 years, on how people interact with information security systems. It summarizes and contextualizes the many things we’ve learned and the mistakes that have been made, and it does so in a readable and concise way. The book has six chapters:

  • Intro
  • A brief history
  • Major Themes in UPS Academic Research
  • Lessons Learned
  • Research Challenges
  • Conclusion/The Next Ten Years

The “Major themes” chapter is 61 or so pages, which is over half of the 108 pages of content. (The book also has 40 pages of bibliography). Major themes include authentication, email security and PKI, anti-phishing, storage, device pairing, web privacy, policy specification, mobile, social media and security administration.

The “Lessons Learned” chapter is quite solid, covering “reduce decisions,” “safe and secure defaults,” “provide users with better information, not more information,” “users require clear context to make good decisions,” “information presentation is critical” and “education works but has limits.” I have one quibble: Sasse’s concept of a mental ‘compliance budget’ is also important, and I wish it were given greater prominence. (My other quibble is more of a pet peeve: the use of “user” where “people” would serve. Isn’t it nicer to say “people require clear context to make good decisions”?) Neither quibble should take away from my key message, which is that this is an important new book.

The slim nature of the book is, I believe, an excellent usability property. The authors present what’s been done and the lessons they feel can be taken away, and then move on to the next topic. This lets you, the reader, design, build, or deploy systems that help the person behind the keyboard make the decisions they want to make. To reiterate: anyone building software that asks people to make decisions should read the lessons contained within.

Disclaimer: I was paid to review a draft of this book, and my name is mentioned kindly in the acknowledgements. I am not being paid to write or post reviews.

[Updated to correct the sentence about the last 20 years.]

Modeling Attackers and Their Motives

There are a number of reports out recently, breathlessly presenting their analysis of one threatening group of baddies or another. You should look at the reports for facts you can use to assess your systems, such as filenames, hashes and IP addresses. Most readers should, at most, skim their analysis of the perpetrators. Read on for why.

There are a number of surface reasons you might reject or ignore these reports. For example, they’re funded by marketing. Even if that’s so, it’s not a reason to reject them: the baker does not bake bread for fun, and the business goal of marketing can still give us useful information. You might also reject them for their abuse of adjectives like “persistent,” “stealthy,” or “sophisticated.” (I’m tempted to just compile a wordcloud and drop it in place of writing.) No, the reason to only skim these reports is what the analysis does to your chance of success. There are two self-inflicted wounds that often happen when people focus on attackers:

  • You miss attackers
  • You misunderstand what the attackers will do

You may get a vicarious thrill from knowing who might be attacking you, but that thrill is likely to make those details more available to your mind, or to anchor your attention on them, causing you to miss other attackers. Similarly, you might get attached to the details of how they attacked last year, and not notice how those details change.

Now, you might think that your analysis won’t fall into those traps, but let me be clear: the largest, best-funded analysis shops in the world routinely make serious and consequential mistakes about their key areas of responsibility. The CIA didn’t predict the collapse of the Soviet Union, and it didn’t predict the rise of ISIS.

If your organization believes that it’s better at intelligence analysis than the thousands of people who work in US intelligence, then please pay attention to my raised eyebrow. Maybe you should be applying that analytic awesomesauce to your core business, maybe it is your core business, or maybe you should be carefully combing through the reports and analysis to update your assessments of where these rapscallions shall strike next. Or maybe you’re over-estimating your analytic capabilities.

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.
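
To make that concrete, here’s a minimal triage sketch for those three variants. It’s a sketch under stated assumptions, not a product: the extension lists and the message format are mine, and real prevention also needs URL reputation, detonation, and patching.

```python
# Minimal sketch: flag the three phishing variants named above in inbound mail.
# The extension lists and the message format are assumptions for illustration.
import os

DOCUMENT_EXTS = {".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx", ".pdf", ".rtf"}
EXECUTABLE_EXTS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".hta", ".jar"}

def classify_attachment(filename: str):
    """Return a description of which variant an attachment resembles, if any."""
    root, ext = os.path.splitext(filename.lower())
    if ext in EXECUTABLE_EXTS:
        # "invoice.pdf.exe"-style double extensions are executables
        # disguised as documents; bare executables get flagged too.
        if os.path.splitext(root)[1] in DOCUMENT_EXTS:
            return "executable disguised as a document"
        return "executable attachment"
    if ext in DOCUMENT_EXTS:
        return "document that may carry an exploit (strip or detonate active content)"
    return None

def triage(message: dict):
    """message is assumed to look like {"links": [...], "attachments": [...]}."""
    findings = [("link to check against URL reputation / sandboxing", link)
                for link in message.get("links", [])]
    for name in message.get("attachments", []):
        verdict = classify_attachment(name)
        if verdict:
            findings.append((verdict, name))
    return findings

if __name__ == "__main__":
    sample = {"links": ["http://login.example.invalid/"],
              "attachments": ["invoice.pdf.exe", "report.docx", "notes.txt"]}
    for verdict, item in triage(sample):
        print(f"{item}: {verdict}")
```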

The indicators that can help you find the successful attacks are an important value from these reports, and that’s what you should use them for. Don’t get distracted by the motivations.
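
By way of illustration, here’s a hedged sketch of that indicator-driven use: take the hashes, IP addresses, and filenames a report gives you and sweep your own evidence for them. Every indicator value below is a placeholder, and the log format is assumed.

```python
# Sketch: sweep local files and connection logs for indicators taken from a
# report (hashes, IPs, filenames). All indicator values below are placeholders.
import hashlib
from pathlib import Path

INDICATORS = {
    "sha256": {"0" * 64},                              # placeholder hash
    "ips": {"192.0.2.10", "198.51.100.23"},            # documentation-range IPs
    "filenames": {"svch0st.exe", "update_flash.exe"},  # made-up names
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_files(root: str):
    """Flag files whose name or hash matches an indicator."""
    hits = []
    for p in Path(root).rglob("*"):
        if not p.is_file():
            continue
        if p.name.lower() in INDICATORS["filenames"]:
            hits.append((str(p), "filename match"))
        elif sha256_of(p) in INDICATORS["sha256"]:
            hits.append((str(p), "hash match"))
    return hits

def scan_connection_log(lines):
    """Flag log lines that mention an indicator IP (log format is assumed)."""
    return [line for line in lines if any(ip in line for ip in INDICATORS["ips"])]

if __name__ == "__main__":
    print(scan_files("."))
    print(scan_connection_log(["2015-02-10 10:02 outbound 198.51.100.23:443"]))
```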

Adam's Mailing List and Commitment Devices

Yesterday, I announced that I’ve set up a mailing list. You may have noticed an unusual feature to the announcement: a public commitment to it being low volume, with a defined penalty ($1,000 to charity) for each time I break the rule.

You might even be wondering why I did that.

In the New School, we study people and their motivations. Knowing that introspection is a fine place to start, a poor place to end, and an excellent source of misdirection, I talked to several people who seem like the sort I want on my list about their experiences with mailing lists.

The first thing I heard was fairly unanimous: people don’t subscribe because they get spammed.
The perception is that many people who create lists like this abuse them. So to address that, I’m using a commitment device: a promise, made publicly in advance. By making that promise, I give myself a reason to hold back from over-mailing, and I give myself a way to constrain how much I use the list to help others. (But not to eliminate such help — perhaps that’s a bad idea, and the list should be only about new things where I’m a creator. I’d love your thoughts.)

The second issue I heard is that unsubscribing tends to feel like an interpersonal statement rather than a technical one (such as “I get too much email”). So I promise not to be offended if you unsubscribe, and I promise to be grateful if you tell me why you unsubscribe. This is why I love Twitter: I control who I listen to. It’s also why I think the “unfollow bug” (real or imagined) was such a good thing: it provided a socially acceptable excuse for unfollows.

Are there other factors that hold you back from signing up for a mailing list like mine? Please let me know what they are; I’d love to address them if I can.

The Worst User Experience In Computer Security?

I’d like to nominate Xfinity’s “walled garden” for the worst user experience in computer security.

For those not familiar, Xfinity has a “feature” called “Constant Guard,” which (I believe) monitors your traffic for DNS lookups and IP connections to known botnet command-and-control services. When they think you have a bot, you see warnings, which are inserted into your web browsing via a MITM attack.
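
By way of illustration, here’s a hedged sketch of the kind of DNS check that description suggests, and of the feedback such a system could give (a threat name and a last-observed time). None of this is Xfinity’s actual implementation; the domains and threat names are placeholders.

```python
# Illustrative sketch only: not Comcast/Xfinity's actual system, just the kind
# of DNS-lookup check the description above suggests. Placeholder indicators.
from datetime import datetime, timezone
from typing import Optional

CNC_DOMAINS = {                       # domain -> threat name (made up)
    "evil-cnc.example": "ZeusVariant.X",
    "bot-update.example": "GenericDownloader.Y",
}

last_seen = {}                        # threat name -> time of last matching lookup

def observe_dns_query(domain: str) -> Optional[str]:
    """Record and return the threat name if a lookup matches the blocklist."""
    threat = CNC_DOMAINS.get(domain.lower())
    if threat:
        last_seen[threat] = datetime.now(timezone.utc)
    return threat

def walled_garden_notice() -> str:
    """The feedback a person actually needs: what was seen, and when."""
    if not last_seen:
        return "No recent command-and-control lookups observed."
    lines = [f"{name}: last observed {ts:%Y-%m-%d %H:%M} UTC"
             for name, ts in last_seen.items()]
    return "Infection indicators:\n" + "\n".join(lines)

if __name__ == "__main__":
    observe_dns_query("evil-cnc.example")
    print(walled_garden_notice())
```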

Recently, I was visiting family, and there was, shock of all shocks, an infected machine. So I pulled out my handy-dandy FixMeStick*, and let it do its thing. It found and removed a pile of cruft. And then I went to browse the web, and still saw the warnings that the computer was infected. This is the very definition of a wicked environment: one in which the feedback you get makes it hard to learn what’s actually happening. (A concept that Jay Jacobs has explicitly tied to infosec.)

So I manually removed Java, spent time reading the long list of programs that start at boot (via Autoruns, which Xfinity links to, if you can find the link), re-installed Firefox, and did a stack of other cleaning work. (Friends at browser makers: it would be nice if there were a way to forcibly remove plugins, rather than just disable them.)

As someone who’s spent a great deal of time understanding malware propagation methods, I was unable to decide if my work was effective. I was unable to determine the state of the machine, because I was getting contradictory signals.

My family (including someone who’d been a professional Windows systems administrator) had spent a lot of time trying to clean that machine and get it out of the walled garden. The tools Xfinity provided did not work. They did not clean the malware from the system. Worse, the feedback Xfinity themselves provided was unclear and ambiguous (in particular, the malware in question was never named, nor was the date of the last observation available). There was no way to ask for a new scan of the machine. That may make some narrow technical sense, given the nature of how they’re doing detection, but that does not matter. The issue here is that a person of normal skill cannot follow their advice and clean the machine. Even a person with some skill may be unable to see if their work is effective. (I spent a good hour reading through what runs at startup via Autoruns).

I understand the goals of these walled garden programs. But the folks running them need to invest in talking to the people in the gardens, and understand why they’re not getting out. There are good reasons for those failures, and we need to study the failures and address those reasons.

Until then, I’m interested in hearing if there’s a worse user experience in computer security than being told your recently cleaned machine is still dirty.

* Disclaimer: FixMeStick was founded by friends who I met at Zero-Knowledge Systems, and I think that they refunded my order. So I may be biased.

TrustZone and Security Usability

Cem Paya has a really thought-provoking set of blog posts on “TrustZone, TEE and the delusion of security indicators” (part 1, part 2).

Cem makes the point that all the crypto and execution-protection magic that ARM is building is limited by the question of what the human holding the phone thinks is going on. If a malicious app fakes up the UI, it can get stuff from the human, and abuse it. This problem was well known, and it was the reason that NT 3.51 got a “secure attention sequence” when it went in for C2 certification under the old Orange Book. Sure, it lost its NIC and floppy drive, but it gained Control-Alt-Delete, which really does make your computer more secure.

But what happens when your phone or tablet has a super-limited set of physical buttons? Even assuming that the person knows they want to be talking to the right program, how do they know what program they’re talking to, and how do they know that in a reliable way?

One part of an answer comes from work by Chris Karlof on Conditioned-safe Ceremonies. The essential idea is that you apply Skinner-style conditioning so people get used to doing something that helps make them more secure.

One way we could bring this to the problem that Cem is looking at would be to require a physical action to enable Trustzone. Perhaps the ceremony should be that you shake your phone over an NFC pad. That’s detectable at the gyroscope level, and could bring up the authentic payments app. An app that wanted payments could send a message into a queue, and the queue gets read by the payments app when it comes up. (I’m assuming that there’s a shake that’s feasible for those with limited motion capabilities.)
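
Here’s a rough sketch of that queue idea. Nothing in it is a real TrustZone, NFC, or sensor API; the class and event names are hypothetical stand-ins for signals the platform would have to provide. The point is the flow: ordinary apps can only enqueue requests, and the trusted payments app drains the queue only after the physical ceremony.

```python
# Illustrative sketch of the ceremony-gated queue described above. The event
# names are hypothetical stand-ins for platform signals, not any real API.
from collections import deque
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requesting_app: str
    amount_cents: int
    payee: str

class CeremonyGatedPaymentQueue:
    def __init__(self):
        self._pending = deque()
        self._ceremony_observed = False

    def enqueue(self, request: PaymentRequest) -> None:
        """Any app may ask for a payment, but nothing is shown or sent yet."""
        self._pending.append(request)

    def on_physical_ceremony(self, shake_detected: bool, nfc_pad_present: bool) -> None:
        """Called when the sensors report the shake-over-the-pad gesture;
        only this unlocks the trusted payments UI."""
        self._ceremony_observed = shake_detected and nfc_pad_present

    def drain_into_trusted_ui(self):
        """The trusted payments app reads the queue only after the ceremony
        and shows each request for explicit confirmation."""
        if not self._ceremony_observed:
            return []
        requests, self._pending = list(self._pending), deque()
        self._ceremony_observed = False            # one ceremony, one session
        return requests

if __name__ == "__main__":
    q = CeremonyGatedPaymentQueue()
    q.enqueue(PaymentRequest("shopping_app", 1299, "Example Store"))
    print(q.drain_into_trusted_ui())               # [] -- no ceremony yet
    q.on_physical_ceremony(shake_detected=True, nfc_pad_present=True)
    print(q.drain_into_trusted_ui())               # the queued request appears
```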

There are probably other conditioned-safe ceremonies that the phone creator could create, but Cem is right: indicators by themselves (even if they pass the white-hot parts COGs gauntlet) will not be noticed. If a solution exists, it will probably involve conditioning people to do the right thing without noticing.

“The Phoenix Project” may be uncomfortable

The Phoenix Project is an important new novel, and it’s worth reading if you work in technology. As I read it, I was awfully uncomfortable with one of the characters, John. John is the information security officer in the company, and, to be frank, he does not come off well at the start of the book.

Before I get to the details, I want to talk about Gene Kim, the lead author. Gene got his start in security, having written the first free Tripwire program. Since then, he’s done key research in control effectiveness. He also accidentally demonstrated how far the complianciness industry has to go, as the COBIT standard hasn’t been updated based on his work, nor have they attempted to replicate it or refute it. Regardless, Gene gets operational information security very deeply.

So let’s talk frankly about John. John is a shrill jerk who thinks it’s a good idea to hold up business because he sees risk. He thinks of his job as risk prevention and compliance, and damn the cost to the business.

I’ve been there. Perhaps you have too. And if you’ve been there, John is an uncomfortable archetype to watch. Perhaps John is even treated too harshly. But as I said to Gene, pride goeth before the fall, and the fall cometh before redemption.

Me, I went through a lot of learning when Zero-Knowledge Systems pivoted. We had an amazing team, great technology, influencers and supporters out the wazoo, and we didn’t deliver on the goals. I spent a lot of time wallowing in what sells in security, what value propositions motivate people to buy, and how security is often a feature, not a value proposition.

Understanding where security fits in a business proposition gives me not only understanding but even sympathy for business leaders who listen to someone claim that if only the CSO reported to the CEO, they’d have a voice. That’s backwards. If the CSO has an understanding of the business, they’ll have a voice, and won’t need to report to the CEO. Also, the CEO is not the person with cycles to mentor a CSO to that understanding.

So if you’re outraged by how John is portrayed, I want to encourage you to ask yourself, are you outraged because it’s wrong, or outraged because it hurts?

The alcoholics say, the first step is admitting you have a problem. If you’re not there, maybe the first step is to go read the Phoenix Project and see if it hurts.

Infosec Lessons from Mario Batali's Kitchen

There was a story recently on NPR about kitchen waste, “No Simple Recipe For Weighing Food Waste At Mario Batali’s Lupa.” Now, normally, you’d think that a story on kitchen waste has nothing to do with information security, and you’d be right. But as I half listened to the story, I realized that it in fact was a story about a fellow, Andrew Shakman, and his quest to change business processes to address environmental priorities.

I also realized that I’ve heard him in meetings. Ok, it wasn’t Andrew, and the subject wasn’t food waste, but I think that makes the story all the more powerful for information security, because it’s easier to look at an apparently disconnected story, understand it, and then bring the lessons on home:

“Once we begin reducing food waste, we are spending less money on food because we’re not buying food to waste it; we’re spending less money on labor; we’re spending less money on energy to keep that food cold and heat it up; we’re spending less on waste disposal,” says Shakman.

That’s right! Managing food waste doesn’t have to be a tax, it can be a profit center, and that’s awesome. Back to the story:

Lupa’s Chef de Cuisine Cruz Goler spent a couple of months working with the system. But he ran into some problems. After the first week, some of his staff just stopped weighing the food. But Goler says he didn’t want to “break their chops about some sort of vegetable scrap that doesn’t really mean anything.” Shakman believes those scraps do mean something when they add up over time. He says it’s just a matter of making the tracking a priority, even when a restaurant is really busy. “When we get busy, we don’t stop washing our hands; when we get busy, we don’t cut corners in quality on the plate,” says Shakman.

That’s right, too! We can declare priorities, and if only our thing is declared a priority, it’ll win! What’s more, what’s a priority is a matter of executive sponsorship. The fact that the health department will be upset if you don’t wash your hands — that’s just compliance. Imperfectly plated food? Look, people are at a restaurant to eat, not admire the food, and that plate’s gonna be all smudged up in just a minute. In other words, those priorities are driven by either the customer or an external party. No argument that an internal advocate or consultant brings will match those. They’re priority 1, and that’s a small set of requirements.

But for me, the most heartbreaking quote came after the chef decided not to use the system in that restaurant:

Despite the failure of LeanPath in the Lupa kitchen, Shakman is still convinced his system can save restaurants money. But he’s learned that the battle against food waste, like so many battles people fight, has to start with winning hearts and minds.

It’s true, if we just win hearts and minds, people will re-prioritize their tasks. To an extent. But perhaps the issue is that to win hearts and minds, we sometimes need to listen to the objections, and find ways to address them. For example, if onion skins aren’t even used in stock, maybe those can just be dumped on a normal day. Maybe there’s a way to modify the system to only weigh scrap on 1 day out of 7, so that the cost of the system is lessened. I talked about similar issues in security in my “Engineers Are People, Too” talk, and the Elevation of Privilege game is an example of how to make a set of threat modeling tasks more attractive.

Lastly, I want to be clear that I’m using Mr. Shakman and his company as a straw man to critique behaviors I see in information security. Mr. Shakman is probably a great guy and a dedicated entrepreneur who’s been taken way out of context in that story. Judging from the company’s website and blog, they have some happy customers. I mean them no harm, I think what they’re trying to do is an awesome goal, and I wish them the best of luck.