Category: threat modeling

Promoting Threat Modeling Work

Quick: are all the flowers the same species?

People regularly ask me to promote their threat modeling work, and I’m often happy to do so, even when I have questions about it. Before I do, there are a few things I look at, and I want to share some of them, because I want to promote work that moves the field forward so we all benefit. Some of the things I look for include:

  • Specifics. If you have a new threat modeling approach, that’s great. Describe the steps concisely and crisply. (If I can’t find a list in your slide deck or paper, it’s not concise and crisp.) If you have a new variant on a building block or a new way to answer one of the four questions, be clear about that, so that those seeing your work can easily put it into context, and know what’s different. The four question framework makes this easy. For example, “this is an extension of ‘what are we working on,’ and you can use any method to answer the other questions.” Such a sentence makes it easy for those thinking of picking up your tool to put it immediately in context.
  • Names. Name your work. We don’t discuss Guido’s programming language with a strange dependence on whitespace, we discuss Python. For others to understand it, your work needs a name, not an adjective. There are at least half a dozen distinct ‘awesome’ ways to threat model being promoted today. Their promoters don’t make it easy to figure out what’s different from the many other awesome approaches. These descriptors also carry an implication that only they are awesome, and the rest, by elimination, must suck. Lastly, I don’t believe that anyone is promoting The Awesome Threat Modeling Method — if you are, I apologize, I was looking for an illustrative name that avoids calling anyone out.

    (Microsoft cast a pall over the development of threat modeling by having at least four different things labeled ‘the Microsoft approach to threat modeling.’ Those included DFD+STRIDE, Asset-entry, patterns and practices, and TAM, plus variations on each.) Also, we discuss Python 2 versus Python 3, not ‘the way Guido talked about Python in 2014 in that video that got taken off YouTube because it used walk-on music.’

  • Respect. Be respectful of the work others have done, and the approaches they use. Threat modeling is a very big tent, and what doesn’t work for you may well work for others. This doesn’t mean ‘never criticize,’ but it does mean don’t throw shade. It’s fine to say ‘Threat modeling an entire system at once doesn’t work in agile teams at west coast software companies.’ It’s even better to say ‘Writing misuse cases got an NPS of -50 and Elevation of Privilege scored 15 at the same 6 west coast companies founded in the last 5 years.’
    I won’t promote work that tears down other work for the sake of tearing it down, or that does so by saying ‘this doesn’t work’ without specifics of the situation in which it didn’t work. Similarly, it’s fine to say “it took too long” if you say how long it took to do what steps, and, ideally, quantify ‘too long.’

I admit that I have failed at each of these in the past, and endeavor to do better. Specifics, labels, and respectful conversation help us understand the field of flowers.

What else should we do better as we improve the ways we tackle threat modeling?

Photo by Stephanie Krist on Unsplash.

Testing Building Blocks

There are a couple of new, short (4-page), interesting papers from a team at KU Leuven including:

What makes these interesting is that they are digging into better-formed building blocks of threat modeling, comparing them to requirements, and analyzing how they stack up.

The work is centered on threat modeling for privacy and data protection, but what they look at includes STRIDE, CAPEC and CWE. What makes this interesting is not just the results of the comparison, but that they compare and contrast between techniques (DFD variants vs CARiSMA extended; STRIDE vs CAPEC or OWASP). Comparing building blocks at a granular level allows us to ask the question “what went wrong in that threat modeling project” and tweak one part of it, rather than throwing out threat modeling, or trying to train people in an entire method.

20 Years of STRIDE: Looking Back, Looking Forward

“Today, let me contrast two 20-year-old papers on threat modeling. My first paper on this topic, “Breaking Up Is Hard to Do,” written with Bruce Schneier, analyzed smart-card security. We talked about categories of threats, threat actors, assets — all the usual stuff for a paper of that era. We took the stance that “we experts have thought hard about these problems, and would like to share our results.”

Around the same time, on April 1, 1999, Loren Kohnfelder and Praerit Garg published a paper in Microsoft’s internal “Interface” journal called “The Threats to our Products.” It was revolutionary, despite not being publicly available for over a decade. What made the Kohnfelder and Garg paper revolutionary is that it was the first to structure the process of how to find threats. It organized attacks into a model (STRIDE), and that model was intended to help people find problems, as noted…”

Read the full version of “20 Years of STRIDE: Looking Back, Looking Forward” on Dark Reading.

Spoofing in Depth

I’m quite happy to say that my next Linkedin Learning course has launched! This one is all about spoofing.

It’s titled “Threat Modeling: Spoofing in Depth.” It’s free until at least a week after RSA.

Also, I’m exploring the idea that security professionals lack a shared body of knowledge about attacks, and that an entertaining and engaging presentation of such a BoK could be a useful contribution. One way to test this is to ask how often you hear attacks discussed at a level of abstraction that puts them into a category other than “OMG the sky is falling, patch now.” Another is to watch for fluidity in moving from one type of spoofing attack to another.

Part of my goal for the course is to help people see that attacks cluster and have similarities, and that STRIDE can act as a framework for chunking knowledge.

What Should Training Cover?

Chris Eng said “Someone should set up a GoFundMe to send whoever wrote the hit piece on password managers to a threat modeling class.”

And while it’s pretty amusing, you know, I teach threat modeling classes. I spend a lot of time crafting explicit learning goals, considering and refining instructional methods, and so when a smart fellow like Chris says this, my question is why?

Is this “threat modeling as our only hope?” That’s when we take a hard security problem and sagely say “better threat modeling.” Then we wander off. It’s even better with hindsight.

Or is there a particular thing that a student should be learning in a threat modeling class? There was a set of flaws where master passwords were accessible in memory, and thus an attacker with a debugger could get your master password and decrypt all your passwords.

I’m not going to link the hit piece because they deserve to not have your clicks, impressions, or ad displays. It asserted that these flaws mean that a password manager is no better than a text file full of your passwords.

Chris’ point is that we should not tell people that using a password manager is bad, and I agree. It’s an essential part of defending against your passwords being leaked by a third party site. An attacker who can read memory can read memory, which includes backing stores like disk; in fact, reading disk is easier than reading RAM.

So to loop this around to threat modeling, we can consider a bunch of skills or knowledge that could be delivered via training:

  1. Enumerate attacker capabilities. “An attacker who can run code as Alice can do everything Alice’s account can do.” (I am, somewhat famously, not a fan of “think like an attacker”, and while I remain skeptical of enumerating attacker motivations, this is about attacker capabilities.)
  2. Understand how attacks like spoofing can take place. Details like credential stuffing and how modern brute force attacks work are a set of facts that a student could learn.
  3. Perform multiple analyses, and compare the result. If “what can go wrong” is “someone accesses your passwords by X or Y,” what are the steps to do that? What part of the defenses are in common? Which are unique? This is a set of tasks that someone could learn.

I structure classes around the four-question frame of “what are we working on, what can go wrong, what are we going to do, did we do a good job.” I work to build up skills in each of those, show how they interact, and how they interact with other engineering work. I think asking ‘what else could that attacker do with that access’ is an interesting sub-question of question 2. How attacks work, with a selection of real-world attacks, is something I’ve done for non-security audiences (it feels like review for security folks). The third, comparing between models, I don’t feel is a basic skill.

I’m curious: are there other ways in which a threat modeling class could or should help its students see that ‘password managers are no better than text files’ is bad threat modeling?

Image (model) from Flinders University, Key elements and relationships in curriculum

Nature and Nurture in Threat Modeling

Josh Corman opened a bit of a can of worms a day or two ago, asking on Twitter: “pls RT: who are the 3-5 best, most natural Threat Modeling minds? Esp for NonSecurity people. @adamshostack is a given.” (Thanks!)

What I normally say to this is I don’t think I’m naturally good at finding replay attacks in network protocols — my farming ancestors got no chance to exercise such talents, and so it’s a skill I acquired. Similarly, whatever leads me to be able to spot such problems doesn’t help me spot lions on the savannah or detect food that’s slightly off.

If we’re going to scale threat modeling, to be systematic and structured, we need to work from a body of knowledge that we can teach and test. We need structures like my four-question framework (what are we working on, what can go wrong, what do we do, did we do a good job), and we need structures like STRIDE and Kill Chains to help us be systematic in our approaches to discovering what can go wrong. Part of the reason the framework works is it allows us to have many ways to threat model, instead of “the one true way.”

But that’s not a sufficient answer: from Rembrandt to Da Vinci, artists of great talent appear from nowhere. And they were identified and taught. The existence of schools, with curricula and codification of knowledge, is important.

Even with brilliant artists (and I have no idea how to identify them consistently), we need more people to paint walls than we need people to paint murals. We need to scale the basic skills, and as we do so we’ll learn how to identify the “naturals.”

Photo: Max Pixel.

This is a really interesting post* about how many simple solutions to border security fail in the real world.

  • Not everywhere has the infrastructure necessary to upload large datasets to the cloud
  • Most cloud providers are in not-great jurisdictions for some threat models.
  • Lying to border authorities, even by omission, ends badly.

Fact is, the majority of “but why don’t you just…” solutions in this space either require lying, rely on infrastructure that may be non-existent or jurisdictionally compromised, or fail open.

The “post” was originally a long Twitter thread, which is archived, for the moment, at ThreadReader App, which is a far, far better UI than Twitter.
