Threat Modeling and Risk Assessment

Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.

So first, what was said:

(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. 🙂

I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. My issue is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.

I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.

First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.

And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)
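The bug-bar idea can be made concrete with a small sketch. This is a hypothetical bar, not Microsoft’s actual one: the categories, scopes and severities below are invented for illustration, but the shape is the point. A bar lets triage assign a severity to a security bug mechanically, so the comparison against other priorities doesn’t depend on having a security expert in the room.

```python
# Hypothetical security bug bar: maps a threat's effect and scope to a
# triage severity. The effects, scopes and cutoffs are illustrative
# assumptions, not any real organization's bar.

BUG_BAR = {
    # (effect, scope): severity
    ("elevation_of_privilege", "remote"):   "critical",
    ("elevation_of_privilege", "local"):    "important",
    ("information_disclosure", "targeted"): "important",
    ("information_disclosure", "untargeted"): "moderate",
    ("denial_of_service", "persistent"):    "important",
    ("denial_of_service", "temporary"):     "low",
}

def triage(effect: str, scope: str) -> str:
    """Return the severity the bar assigns; anything off the bar
    falls back to manual review by a security expert."""
    return BUG_BAR.get((effect, scope), "needs-security-review")

print(triage("elevation_of_privilege", "remote"))  # critical
print(triage("tampering", "local"))                # needs-security-review
```

The fallback value is the design choice that matters: a bar should rank the common cases quickly and route the unusual ones to expertise, rather than silently defaulting them to “low.”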

My concern, and the reason I got into a back and forth, is I suspect that putting risk assessment into threat modeling keeps organizations from ensuring that expertise is in bug triage, and that’s risky.

(As usual, these opinions are mine, and may differ from those of my employer.)

[Updated to correct editing issues.]

SOUPS Keynote & Slides

This week, the annual Symposium on Usable Privacy and Security (SOUPS) is being held on the Microsoft campus. I delivered a keynote, entitled “Engineers Are People Too:”

In “Engineers Are People, Too” Adam Shostack will address an often invisible link in the chain between research on usable security and privacy and delivering that usability: the engineer. All too often, engineers are assumed to have infinite time and skills for usability testing and iteration. They have time to read papers, adapt research ideas to the specifics of their product, and still ship cool new features. This talk will bring together lessons from enabling Microsoft’s thousands of engineers to threat model effectively, share some new approaches to engineering security usability, and propose new directions for research.

A fair number of people have asked for the slides, and they’re here: Engineers Are People Too.

Elevation of Privilege: the Threat Modeling Game

In my work blog: “Announcing Elevation of Privilege: The Threat Modeling Game.”

After RSA, I’ll have more to say about how it came about, how it helps you and how it helps more chaos emerge. But if you’re here, you should come get a deck at the Microsoft booth (1500 row).

News from RSA: U-Prove

In “U-Prove Minimal Disclosure availability,” Kim Cameron says:

This blog is about technology issues, problems, plans for the future, speculative possibilities, long term ideas – all things that should make any self-respecting product marketer with concrete goals and metrics run for the hills! But today, just for once, I’m going to pick up an actual Microsoft press release and lay it on you. The reason? Microsoft has just done something very special, and the fact that the announcement was a key part of the RSA Conference Keynote is itself important.

Further, Charney explained that identity solutions that provide more secure and private access to both on-site and cloud applications are key to enabling a safer, more trusted enterprise and Internet. As part of that effort, Microsoft today released a community technology preview of the U-Prove technology, which enables online providers to better protect privacy and enhance security through the minimal disclosure of information in online transactions. To encourage broad community evaluation and input, Microsoft announced it is providing core portions of the U-Prove intellectual property under the Open Specification Promise, as well as releasing open source software development kits in C# and Java editions. Charney encouraged the industry, developers and IT professionals to develop identity solutions that help protect individual privacy.

Kim then goes on to analyze the announcement, which is a heck of an important one.

Disclaimer: I work for Microsoft, and am friends with many of the people involved. I still think this is tremendously important.

Pay for your own dog food

At Microsoft, there’s a very long history of ‘eating your own dogfood,’ or using the latest and greatest daily builds. Today, though, people seem to use the term “self-host,” which seems like evidence that they do neither.

Eating your own dogfood gives you a decent idea of when it starts to taste ok, which is to say, ready for customers to see in some preview form.

Apropos of which, there’s a really interesting post at the Inkling blog, “Pay for your own dog food:”

Using your own product comes with a ton of benefits, because you become your own customer. The quality of your product likely increases because you can’t ignore its problems. They aren’t just your customers’ problems. They are your problems.

We’ve gotten in the habit of actually taking out our own credit card and using it on our own account sign up page. Yes, it’s a bit silly when the credit card processing takes some money off the top. But it makes the feeling very real that you are paying for this, and now it’s an expense just like it’s going to be an expense for your clients.

On the Assimilation Process

Three years and three days ago I announced that “I’m Joining Microsoft.” While I was interviewing, my final interviewer asked me “how long do you plan to stay?” I told him that I’d make a three year commitment, but I really didn’t know. We both knew that a lot of senior industry people have trouble finding a way to be effective in Microsoft’s culture.

So I wanted to pipe up and say I’m having a heck of a lot of fun, and have found places and ways to be effective. I’m getting to develop and share things like our SDL Threat Modeling Tool, and I get to be very transparent about the drivers and decisions that shape it. I’ve got some even cooler stuff in the pipeline, which I’m hoping will be public in the next year or so. My management (which has shifted a little) is supportive of me having two external blogs.

It’s been a heck of a ride so far. Dennis Fisher asked a great question to close this Hearsay Podcast, which is what surprised me the most? I was a little surprised by the question, but I’m going to stand by my answer, which is the intensity and openness of internal debate, and how it helps shape the perception that we’re all reading from the same script. It’s because we’ve seen the debate play out, with really well-informed participants, and remember which points were effective.

I can’t wait to see what happens in the next three years.

Building Security In, Maturely

While I was running around between the Berkeley Data Breaches conference and SOURCE Boston, Gary McGraw and Brian Chess were releasing the Building Security In Maturity Model.

Lots has been said, so I’d just like to quote one little bit:

One could build a maturity model for software security theoretically (by pondering what organizations should do) or one could build a maturity model by understanding what a set of distinct organizations have already done successfully. The latter approach is both scientific and grounded in the real world, and is the one we followed.

It’s long, but an easy and worthwhile read if you’re thinking of putting together or improving your software security practice.

Incidentally, my boss also commented on our work blog “Building Security In Maturity Model on the SDL Blog.”

Understanding Users

Paul Graham has a great article in “Startups in 13 Sentences:”

Having gotten it down to 13 sentences, I asked myself which I’d choose if I could only keep one.

Understand your users. That’s the key. The essential task in a startup is to create wealth; the dimension of wealth you have most control over is how much you improve users’ lives; and the hardest part of that is knowing what to make for them. Once you know what to make, it’s mere effort to make it, and most decent hackers are capable of that.

Then in “Geeks and Anti-Geeks,” Adam Barr writes:

You notice this if you listen to the chatter before a meeting. Half the time people are talking about World of Warcraft; those are the geeks. The other half they are talking about pinot noir; those are the anti-geeks. In either case, the group then proceeds to discuss a pattern-based approach to refactoring your C# class design in order to increase cohesion and leverage mock objects to achieve high code coverage while minimizing your unit test execution time.

The reason this matters is because Microsoft has recently been pushing engineers to realize that they are not the customer, the customers are not geeks, and therefore engineers can’t design properly for our customers. What I think happens, however, is that the anti-geeks hear this and think, “They’re not talking about me; I know that those beer-swilling geeks don’t understand the customer, but I’m a cultured sort, not a geek–I’m just like our customers!” And so they go out and design software for themselves…and of course they mess it up…because our customers may not spend their spare time playing Dungeons & Dragons, but neither do they spend it tramping across the Burgess Shale.

So I don’t disagree with Mr. Barr, but I do want to expand a little. The fundamental job of the program manager is to understand the market, come up with a solution that will delight the customer, sell that vision to the team, and create and drive the product to ship to those customers. The market only matters in understanding whether a product is worth building, and in helping to shape our understanding of the customer by understanding their economic context.

I don’t think I’m anything like most of my customers. Those customers are, first and foremost, the 35,000 or so software engineers inside of Microsoft; second, security experts helping them or reviewing their work; and third, software engineers at other vendors who build on our platform. I’m most like the second set, but they’re a distant second, and (as several of them will tell you) I have a tendency to reject their first attempt at getting a feature out of hand, because our previous tools were so expert-centric.

More importantly, I don’t need to be like our customers to delight them. I am nothing like a professional chef, but I am frequently delighted by them. What I need to do is actively listen to those customers, and fairly and effectively advocate for their attitudes and words to my team.

As I was working on this Joel Spolsky posted “How to be a program manager,” which covers some similar ideas.

SDL Threat Modeling Tool 3.1.4 ships!

On my work blog, I wrote:

We’re pleased to announce version 3.1.4 of the SDL Threat Modeling Tool. A big thanks to all our beta testers who reported issues in the forum!

In this release, we fixed many bugs, learned that we needed a little more flexibility in how we handled bug tracking systems (we’ve added an “issue type” at the bug tracking system level) and updated the template format. You can read more about the tool at the Microsoft SDL Threat Modeling Tool page, or just download 3.1.4.

Unfortunately, we have no effective mitigation for the threat of bad π jokes.

I’m really excited about this release. This is solid software that you can use to analyze all sorts of designs.

Boundary Objects and Threat Modeling

[Image: a data flow diagram of the threat modeling process]
Ethnomethodologists talk a lot about communities of practice: groups of people who share some set of work that they do similarly, and who co-evolve ways of working and communicating.

When everyone is part of a given community, this works really well. When we talk about “think like an attacker” within a community of security practice, it works well. When we tell developers to do that, they look like a deer in the headlights. (Sorry, couldn’t resist.)

One of the tools which different communities of practice can use to communicate is a boundary object. Boundary objects include things like ISBNs. Books have ISBNs in large part to track payments. They differ from Library of Congress catalog numbers. 0321502787, HD30.2.S563 and “The New School of Information Security” all refer to the same book in different contexts.

In STRIDE/Element threat modeling, there are two accidental boundary objects. (I learned about the theory after developing the approach.) They are data flow diagrams (DFDs) and bugs. The picture is a DFD, showing the process of threat modeling, along with boundaries. The boundaries are doing double duty as trust boundaries, and bisecting the boundary objects.

The DFD acts as a boundary object because it’s simple. It takes about 30 seconds to learn (except for trust boundaries). It looks a lot like most whiteboard diagrams. Developers can draw the diagram, and security experts can analyze it.
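The analysis the security expert performs on that diagram can itself be sketched: in STRIDE-per-element, each kind of DFD element conventionally gets a fixed subset of the six STRIDE categories. The mapping below follows the standard STRIDE-per-element chart; the tiny three-element DFD is a made-up example.

```python
# Sketch of STRIDE-per-element enumeration over a DFD. The element-type ->
# threat-category mapping follows the conventional STRIDE-per-element chart;
# the example diagram itself is hypothetical.

STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which categories apply to which DFD element type.
APPLICABLE = {
    "external_entity": "SR",      # people/systems outside your control
    "process":         "STRIDE",  # processes get all six
    "data_store":      "TRID",    # R applies when the store acts as a log
    "data_flow":       "TID",
}

def enumerate_threats(dfd):
    """Yield (element, threat) pairs: one candidate threat per
    applicable STRIDE category per DFD element."""
    for name, kind in dfd:
        for letter in APPLICABLE[kind]:
            yield name, STRIDE[letter]

dfd = [("browser", "external_entity"),
       ("web app", "process"),
       ("orders DB", "data_store")]

for element, threat in enumerate_threats(dfd):
    print(f"{element}: {threat}")
```

Each emitted pair is a candidate bug to investigate and file, which is exactly why the bug database makes a natural exit point for the activity.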

The second boundary object is the bug database. Everyone in software development understands bug databases. And though the practices which surround them differ pretty markedly, almost no one would ship a product without reviewing their bugs, which is why security people like putting the output of a threat modeling session into the database.

There are other possible boundaries, such as the interface between the business and the software. This is where assets come into some threat modeling approaches.

So what’s the takeaway here? If you’re watching groups of people frustratedly talk past each other — or wishing they’d be that communicative — look to see if you can find boundary objects which they can use to help organize conversation.