Elevation of Privilege: Drawing Developers into Threat Modeling

In the holiday spirit, I wanted to share an academic-style paper on the Elevation of Privilege threat modeling card game (EoP_Whitepaper.pdf). The paper describes the motivation, experience, and lessons learned in creating the game.

As we’ve shared the game at conferences, we’ve seen people’s eyes light up at the idea of a game. We think of this as enticement, which is a great complement to the many other reasons to get involved in secure development. As someone once said, a spoonful of sugar helps the medicine go down.

We think of Elevation of Privilege as an important demonstration that enticing people into the secure development lifecycle is possible. We certainly don’t think it’s the only game possible, and we hope that sharing our experiences will help you understand the game, how to use it, and how to build on it, perhaps by making a game of your own to help with the challenges you face bringing secure development to your organization.

Download all of the Elevation of Privilege content here: http://www.microsoft.com/en-us/download/details.aspx?id=20303

(Originally appeared on the Microsoft SDL Blog.)

Information Security Risk: A Conversation with CSO

Earlier this month, I spoke with Derek Slater:

In early 2008, Adam Shostack and Andrew Stewart released the book The New School of Information Security. And they launched a blog in support of the book and its message.

I wondered how Shostack perceives the state of IT risk management now, and whether he thinks progress is being made. Here are the highlights of what he told me:

Information security risk: A conversation with Adam Shostack.

The Fog of Reporting on Cyberwar

There’s a fascinating set of claims in the Foreign Affairs article “The Fog of Cyberwar”:

Our research shows that although warnings about cyberwarfare have become more severe, the actual magnitude and pace of attacks do not match popular perception. Only 20 of 124 active rivals — defined as the most conflict-prone pairs of states in the system — engaged in cyberconflict between 2001 and 2011. And there were only 95 total cyberattacks among these 20 rivals. The number of observed attacks pales in comparison to other ongoing threats: a state is 600 times more likely to be the target of a terrorist attack than a cyberattack. We used a severity score ranging from five, which is minimal damage, to one, where death occurs as a direct result from cyberwarfare. Of all 95 cyberattacks in our analysis, the highest score — that of Stuxnet and Flame — was only a three.

There’s also a pretty chart:

[Image: chart of cyberattacks among rival states]

All of which distracts from what seems to me to be a fundamental methodological question: what counts as an incident, and how did the authors count those incidents? Did they use some database? Media queries? The article seems to imply that such things are trivial, and unworthy of distracting the reader. Perhaps that’s normal for Foreign Affairs, but I don’t agree.

The question of what’s being measured is important for assessing whether the argument is convincing. For example, it’s widely believed that the hacking of Lockheed Martin was done by China to steal military secrets. Is that a state-on-state attack which is included in their data? If Lockheed Martin counts as an incident, how about the hacking of RSA as a precursor?

There’s a second set of questions, which relates to the known unknowns, the things we know we don’t know about. As every security practitioner knows, we sweep a lot of incidents under the rug. That’s changing somewhat as state laws have forced organizations to report breaches that impact personal information. Those laws are influencing norms in the US and elsewhere, but I see no reason to believe that all incidents are being reported. If they’re not being reported, then they can’t be in the chart.

That brings us to a third question. If we treat the chart as a minimum bar, how far is it from the actual state of affairs? Again, we have no data.

I did search for underlying data, but Brandon Valeriano’s publications page doesn’t contain anything that looks relevant, and I was unable to find such a page for Ryan Maness.

Usable Security: Timing of Information?

As I’ve read Kahneman’s “Thinking, Fast and Slow,” I’ve been thinking a lot about “what you see is all there is” and the difference between someone’s state of mind when they’re trying to decide on an action, and once they’ve selected and are executing a plan.

I think that as you’re trying to figure out how to do something, you might have a goal and a model in mind. For example, “where is that picture I just downloaded?” As you proceed along the path, you take actions which involve making a commitment to a course of action, ultimately choosing to open one file over another. Once you make that choice, you’re invested, and perhaps the endowment effect kicks in, making you less likely to be willing to change your decision because of (say) some stupid dialog box.

Another way to say that is information that’s available as you’re making a decision might be far more influential than information that comes in later. That’s a hypothesis, and I’ve been having trouble finding a study that actually tests that idea.

For example, if we use a scary button like this:

[Image: a scary warning button with spikes]

would that work better than this:

[Image: a standard dialog reading “File JPG is an application”]

If someone knows of a user test that might shed light on whether this sort of thing matters, I’d be very grateful for a pointer.

Can Science Improvise?

My friend Raquell Holmes is doing some really interesting work using improv to unlock creativity. There are some really interesting ties between the use of games and the use of improv to get people to approach problems in a new light, and I’m bummed that I won’t be able to make this event:

Monday, Dec 17th, 7:15 to 9:15pm
835 Market Street, Rm. 619, Downtown San Francisco State University Campus

Register at http://www.acteva.com//booking.cfm?bevaid=234451
In advance: $15; at the door: $20

What happens when you combine the playfulness of improvisation with the rigor of science? The Life Performance Coaching Center, which leads people from all walks of life in a performance-based approach to human development, is pleased to host Dr. Raquell M. Holmes, founder of improvscience. Holmes has been bringing the discoveries in human development and performance to researchers and educators in many areas of science, including biology and computing sciences.

In this exploration for scientists and those interested in creativity and development, participants are introduced to what the improvisational arts bring to science. Learning to build with the contributions of others and see opportunities for improvisational conversation helps us to take risks and discover new ways of seeing each other and our work.

Come and play as we break down the social barriers that can inhibit creativity, exploration and discovery.

Helen Abel, LCSW, has worked with people to develop their lives for over 30 years as a social worker, therapist and coach. She is on the staff of the Life Performance Coaching Center, where she leads the popular Playground series {link if available} where people learn how to use their capacity to create, perform and play. As a life coach she helps people access these same skills to develop creative and new kinds of conversations with their friends, family and colleagues.

Dr. Raquell Holmes is Director of Outreach, Recruitment and Retention at the Center for Cell Analysis and Modeling at the University of Connecticut Health Center. She helps biologists to incorporate computing and computational resources into their teaching and research. Community building and improvisational theater are explicit components of the majority of her National Science Foundation-funded projects. She founded improvscience to provide scientists with opportunities to develop skills in leadership, collaboration and innovation. Since its inception, improvscience has worked with over a thousand professionals in Science, Technology, Engineering and Mathematics education and research.

Should I advertise on Twitter?

Apparently Twitter sent me some credits to use in their advertising program. Now, I really don’t like Twitter’s promoted tweets — I’d prefer to be the customer rather than the product. (That is, I’d like to be able to give Twitter money for an ad-free experience.)

At the same time, I’m curious to see how the advertising system works. I’d like to understand it and blog about it, but Twitter would like to maintain confidentiality around the program. They’re engaged in white-hot competition with Facebook and Google to be the advertising platform of the future. Still, that’s less transparency than the exceptionally high bar that Twitter has generally aspired to.

That said, with the launch of Control-Alt-Hack, my collaborators have stuff to sell and give away. (Not to mention maybe a sales bump for The New School of Information Security?) Or maybe I could promote other books that I think people should read, like “Thinking, Fast and Slow.” Does the nature of what I’m advertising change the calculus? Would advertising the giveaway make it different?

Then again, I do lots of “advertising” on Twitter already–I advertise the book, the game, blog posts, ideas I like. Does paying to bring them to more people dramatically change the equation?

Interestingly (and I think this is something that can be discussed, because it’s visible), I’m offered the chance to promote both tweets and myself.

I’d be really interested in hearing from readers about how I should take advantage of this, and if I should take advantage of it at all.

Infosec Lessons from Mario Batali's Kitchen

There was a story recently on NPR about kitchen waste, “No Simple Recipe For Weighing Food Waste At Mario Batali’s Lupa.” Now, normally, you’d think that a story on kitchen waste has nothing to do with information security, and you’d be right. But as I half-listened to the story, I realized that it was in fact a story about a fellow, Andrew Shakman, and his quest to change business processes to address environmental priorities.

I also realized that I’ve heard him in meetings. Ok, it wasn’t Andrew, and the subject wasn’t food waste, but I think that makes the story all the more powerful for information security, because it’s easier to look at an apparently disconnected story, understand it, and then bring the lessons on home:

“Once we begin reducing food waste, we are spending less money on food because we’re not buying food to waste it; we’re spending less money on labor; we’re spending less money on energy to keep that food cold and heat it up; we’re spending less on waste disposal,” says Shakman.

That’s right! Managing food waste doesn’t have to be a tax, it can be a profit center, and that’s awesome. Back to the story:

Lupa’s Chef di Cuisine Cruz Goler spent a couple of months working with the system. But he ran into some problems. After the first week, some of his staff just stopped weighing the food. But Goler says he didn’t want to “break their chops about some sort of vegetable scrap that doesn’t really mean anything.” Shakman believes those scraps do mean something when they add up over time. He says it’s just a matter of making the tracking a priority, even when a restaurant is really busy. “When we get busy, we don’t stop washing our hands; when we get busy, we don’t cut corners in quality on the plate,” says Shakman.

That’s right, too! We can declare priorities, and if only our thing is declared a priority, it’ll win! What’s more, what’s a priority is a matter of executive sponsorship. The fact that the health department will be upset if you don’t wash your hands — that’s just compliance. Imperfectly plated food? Look, people are at a restaurant to eat, not admire the food, and that plate’s gonna be all smudged up in just a minute. In other words, those priorities are driven by either the customer or an external party. No argument brought by any internal or consulting party will match those. They’re priority 1, and that’s a small set of requirements.

But for me, the most heartbreaking quote came after the chef decided not to use the system in that restaurant:

Despite the failure of LeanPath in the Lupa kitchen, Shakman is still convinced his system can save restaurants money. But he’s learned that the battle against food waste, like so many battles people fight, has to start with winning hearts and minds.

It’s true, if we just win hearts and minds, people will re-prioritize their tasks. To an extent. But perhaps the issue is that to win hearts and minds, we sometimes need to listen to the objections, and find ways to address them. For example, if onion skins aren’t even used in stock, maybe those can just be dumped on a normal day. Maybe there’s a way to modify the system to only weigh scrap on 1 day out of 7, so that the cost of the system is lessened. I talked about similar issues in security in my “Engineers Are People, Too” talk, and the Elevation of Privilege game is an example of how to make a set of threat modeling tasks more attractive.

Lastly, I want to be clear that I’m using Mr. Shakman and his company as a strawman to critique behaviors I see in information security. Mr. Shakman is probably a great guy and dedicated entrepreneur who’s been taken way out of context in that story. Judging from the company’s website and blog, they have some happy customers. I mean them no harm, think what they’re trying to do is an awesome goal, and I wish them the best of luck.

Hoff on AWS

Hoff’s blog post “Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed” is great on a whole bunch of levels. If you haven’t read it, go do that.

The first thing I appreciated is that he directly confronts the possibility of his own confirmation bias.

The next thing I liked is that he’s not looking at just the technology, but the technology situated in a set of cultural assumptions.

Then he gets to the crux: “Either we learn to walk without them or simply not move forward.”

However, at the end, I get a little concerned when he gets to a quote from Werner Vogels: “There’s no excuse not to use fine grained security to make your apps secure from the start.” Now, that’s a quote embedded in a tweet, so I’m going to feel a little safer in raising issues, because I can honestly say that I hope there’s some additional context there.

The reason not to rely only on fine-grained security is that fine-grained security is hard to get right. It’s hard to conceptualize, it’s hard to implement well, and it’s hard to test. I’d love to hear more context (from Hoff, Werner, or someone who was in the talk) on what else gets embedded and built to offer defense in depth.
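To make that difficulty concrete, here’s a minimal sketch of what “fine-grained security” implies: every access to every object flows through an explicit, per-user, per-action policy check. (All names, the ACL structure, and the functions here are hypothetical illustrations, not anything from AWS or from Vogels’ talk.)

```python
# Hypothetical sketch: a central authorization choke point.
# Fine-grained means the policy is keyed per (user, resource) pair
# and per action, with deny-by-default for anything not listed.
ACL = {
    ("alice", "report.pdf"): {"read", "write"},
    ("bob", "report.pdf"): {"read"},
}

def authorize(user: str, resource: str, action: str) -> bool:
    """Deny by default; allow only explicitly granted actions."""
    return action in ACL.get((user, resource), set())

def read_resource(user: str, resource: str) -> str:
    # Every code path that touches a resource must remember to call
    # authorize(); missing any one path silently defeats the scheme.
    if not authorize(user, resource, "read"):
        raise PermissionError(f"{user} may not read {resource}")
    return f"contents of {resource}"
```

Even in this toy version, the hard parts are visible: someone has to enumerate all the (user, resource, action) triples correctly, and testing means proving that no access path bypasses `authorize()` — which is exactly why getting it right at scale is difficult.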

Me, I’d like to see evidence: how do apps fare with different design philosophies? Let’s say group A is apps built with fine-grained security from the start, group B is apps built with a firewall assumption, and group C is apps developed by a team that’s been using a modern security development lifecycle for more than a year. How does each group fare compared to the others? (Pssst! Looking at you, Jeremiah!) There are of course other comparisons we could run, but what’s important is that we look to data, rather than to the opinions of well-regarded folks.

[Update: Jeremiah responded, “please better define ‘modern security development lifecycle’ and ‘fine grained security’ and I’ll go find out,” which I suppose puts the ball in my court. Modern SDL is relatively easy: is the organization using either the Microsoft SDL Optimization Model or the BSIMM to evaluate its development activities? Fine-grained security is harder for me to provide a “survey question” for. Perhaps “does your app have a security kernel that performs authorization tests?” or “does your app support a policy language to control authorization activity?”

I would love your thoughts on how to make surveyable propositions about things the survey participants should know about. ]