Category: threat modeling

20 Years of STRIDE: Looking Back, Looking Forward

“Today, let me contrast two 20-year-old papers on threat modeling. My first paper on this topic, “Breaking Up Is Hard to Do,” written with Bruce Schneier, analyzed smart-card security. We talked about categories of threats, threat actors, assets — all the usual stuff for a paper of that era. We took the stance that “we experts have thought hard about these problems, and would like to share our results.”

Around the same time, on April 1, 1999, Loren Kohnfelder and Praerit Garg published a paper in Microsoft’s internal “Interface” journal called “The Threats to our Products.” It was revolutionary, despite not being publicly available for over a decade. What made the Kohnfelder and Garg paper revolutionary is that it was the first to structure the process of how to find threats. It organized attacks into a model (STRIDE), and that model was intended to help people find problems, as noted…”

Read the full version of “20 Years of STRIDE: Looking Back, Looking Forward” on Dark Reading.

Spoofing in Depth

I’m quite happy to say that my next LinkedIn Learning course has launched! This one is all about spoofing.

It’s titled “Threat Modeling: Spoofing in Depth.” It’s free until at least a week after RSA.

Also, I’m exploring the idea that security professionals lack a shared body of knowledge about attacks, and that an entertaining and engaging presentation of such a BoK could be a useful contribution. One way to test this is to ask how often you hear attacks discussed at a level of abstraction that puts them into a category other than “OMG the sky is falling, patch now.” Another is to watch for fluidity in moving from one type of spoofing attack to another.

Part of my goal for the course is to help people see that attacks cluster and have similarities, and that STRIDE can act as a framework for chunking knowledge.

What Should Training Cover?

Chris Eng said “Someone should set up a GoFundMe to send whoever wrote the hit piece on password managers to a threat modeling class.”

And while it’s pretty amusing, I do, you know, teach threat modeling classes. I spend a lot of time crafting explicit learning goals and considering and refining instructional methods, so when a smart fellow like Chris says this, my question is: why?

Is this “threat modeling as our only hope?” That’s when we take a hard security problem and sagely say “better threat modeling.” Then we wander off. It’s even better with hindsight.

Or is there a particular thing that a student should be learning in a threat modeling class? There was a set of flaws where master passwords were accessible in memory, and thus an attacker with a debugger could get your master password and decrypt all your passwords.

I’m not going to link the hit piece because they deserve to not have your clicks, impressions, or ad displays. It asserted that these flaws mean that a password manager is no better than a text file full of your passwords.

Chris’ point is that we should not tell people that using a password manager is bad, and I agree. It’s an essential part of defending against your passwords being leaked by a third-party site. And an attacker who can read the password manager’s memory can also read backing stores like disk; in fact, reading disk is easier than reading RAM.

So to loop this around to threat modeling, we can consider a bunch of skills or knowledge that could be delivered via training:

  1. Enumerate attacker capabilities. “An attacker who can run code as Alice can do everything Alice’s account can do.” (I am, somewhat famously, not a fan of “think like an attacker”, and while I remain skeptical of enumerating attacker motivations, this is about attacker capabilities.)
  2. Understand how attacks like spoofing can take place. Details like credential stuffing and how modern brute-force attacks work are a set of facts that a student could learn.
  3. Perform multiple analyses, and compare the results. If “what can go wrong” is “someone accesses your passwords by X or Y,” what are the steps to do that? Which parts of the defenses are in common? Which are unique? This is a set of tasks that someone could learn (a minimal sketch follows this list).
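
To make that third skill concrete, here’s a minimal sketch in Python of laying out two ways someone might get at your passwords and comparing the defenses they have in common. The steps and defenses named here are hypothetical illustrations, not an analysis of any particular product.

    # Two hypothetical ways "someone accesses your passwords" could happen,
    # each broken into the steps an attacker would need to take.
    attack_paths = {
        "debugger reads the master password from memory": [
            "run code on the victim's machine",
            "attach a debugger to the password manager process",
        ],
        "plaintext file copied off disk": [
            "run code on the victim's machine",
            "read the unencrypted file",
        ],
    }

    # Illustrative defenses that interfere with each step.
    defenses = {
        "run code on the victim's machine": {"endpoint hardening", "least privilege"},
        "attach a debugger to the password manager process": {"OS debugging protections", "memory hygiene"},
        "read the unencrypted file": {"disk encryption", "file permissions"},
    }

    # Which defenses apply to both paths, and which are unique to one?
    per_path = [set().union(*(defenses[s] for s in steps)) for steps in attack_paths.values()]
    shared = set.intersection(*per_path)
    unique = set.union(*per_path) - shared
    print("defenses in common:", sorted(shared))
    print("path-specific defenses:", sorted(unique))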

I structure classes around the four-question frame of “what are we working on, what can go wrong, what are we going to do, did we do a good job.” I work to build up skills in each of those, show how they interact, and show how they interact with other engineering work. I think asking ‘what else could that attacker do with that access’ is an interesting sub-question of question 2. How attacks work, with a selection of real-world attacks, is something I’ve covered for non-security audiences (it feels like review for security folks). The third, comparing between models, I don’t feel is a basic skill.

I’m curious: are there other ways in which a threat modeling class could or should help its students see that ‘password managers are no better than text files’ is bad threat modeling?

Image (model) from Flinders University, Key elements and relationships in curriculum

Nature and Nurture in Threat Modeling

Josh Corman opened a bit of a can of worms a day or two ago, asking on Twitter: “pls RT: who are the 3-5 best, most natural Threat Modeling minds? Esp for NonSecurity people. @adamshostack is a given.” (Thanks!)

What I normally say to this is I don’t think I’m naturally good at finding replay attacks in network protocols — my farming ancestors got no chance to exercise such talents, and so it’s a skill I acquired. Similarly, whatever leads me to be able to spot such problems doesn’t help me spot lions on the savannah or detect food that’s slightly off.

If we’re going to scale threat modeling, to be systematic and structured, we need to work from a body of knowledge that we can teach and test. We need structures like my four-question framework (what are we working on, what can go wrong, what do we do, did we do a good job), and we need structures like STRIDE and Kill Chains to help us be systematic in our approaches to discovering what can go wrong. Part of the reason the framework works is it allows us to have many ways to threat model, instead of “the one true way.”
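
For readers who haven’t used it, the structure STRIDE provides is essentially a checklist: six threat categories, each tied to the security property it violates. Here’s a minimal sketch in Python; the category-to-property pairings are the standard ones, while the prompt wording is my own paraphrase.

    # STRIDE as a checklist: each threat category, the property it violates,
    # and a question to ask about each part of the system.
    STRIDE = {
        "Spoofing": ("Authentication", "Can someone pretend to be another person or system?"),
        "Tampering": ("Integrity", "Can data or code be modified without detection?"),
        "Repudiation": ("Non-repudiation", "Can someone deny having taken an action?"),
        "Information disclosure": ("Confidentiality", "Can data reach someone who shouldn't see it?"),
        "Denial of service": ("Availability", "Can the system be degraded or knocked over?"),
        "Elevation of privilege": ("Authorization", "Can someone do things they're not allowed to do?"),
    }

    def walk_checklist(element: str) -> None:
        """Ask each STRIDE question about one part of 'what we're working on'."""
        for threat, (violated_property, prompt) in STRIDE.items():
            print(f"{element} / {threat} (violates {violated_property}): {prompt}")

    walk_checklist("login service")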

But that’s not a sufficient answer: from Rembrandt to Da Vinci, artists of great talent seem to appear from nowhere. And yet they were identified and taught. The existence of schools, with curricula and codification of knowledge, is important.

Even with brilliant artists (and I have no idea how to identify them consistently), we need more people to paint walls than we need people to paint murals. We need to scale the basic skills, and as we do so we’ll learn how to identify the “naturals.”

Photo: Max Pixel.

This is a really interesting post* about how many simple solutions to border security fail in the real world.

  • Not everywhere has the infrastructure necessary to upload large datasets to the cloud.
  • Most cloud providers are in not-great jurisdictions for some threat models.
  • Lying to border authorities, even by omission, ends badly.

Fact is, the majority of “but why don’t you just…” solutions in this space either require lying, rely on infrastructure that may be non-existent or jurisdictionally compromised, or fail openly.

The “post” was originally a long Twitter thread, which is archived, for the moment, at ThreadReader App, which is a far, far better UI than Twitter.

Threat Modeling as Code

Omer Levi Hevroni has a very interesting post exploring ways to represent threat models as code.

The closer threat modeling practices are to the engineering practices already in place, the more impactful they will be, and the more they will become a standard part of delivery.

There’s interesting work in both transforming threat modeling thinking into code, and using code to reduce the amount of thinking required for a project. These are importantly different. Going from analysis to code is work, and selecting the right code to represent your project is work. Both, like writing tests, are an investment of effort now to increase productivity later.
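
To make the idea concrete, here is one hypothetical shape such a representation could take: threats recorded as data alongside the code, with a test that fails when a threat has neither a mitigation nor an explicit risk acceptance. This is a sketch of the general idea, not Omer’s format, and the threats listed are invented examples.

    # A hypothetical "threat model as code" sketch: threats are data in the
    # repository, and a test enforces that each one is mitigated or accepted.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Threat:
        id: str
        stride: str                       # e.g. "Spoofing", "Tampering"
        description: str
        mitigations: List[str] = field(default_factory=list)
        accepted_risk: bool = False

    THREATS = [
        Threat("T1", "Spoofing", "Stolen API token replayed against the service",
               mitigations=["short-lived tokens", "mutual TLS"]),
        Threat("T2", "Information disclosure", "Secrets written to plaintext logs"),
    ]

    def test_every_threat_is_handled():
        unhandled = [t.id for t in THREATS if not t.mitigations and not t.accepted_risk]
        assert not unhandled, f"threats with no mitigation or risk acceptance: {unhandled}"

Run under a test runner such as pytest, this would fail on T2, which is the point: the threat model gets reviewed, versioned, and enforced through the same pipeline as the rest of the code.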

It’s absolutely worth exploring ways to reduce the unique thinking that a project requires, and I’m glad to see this work being done.

LinkedIn Learning: Producing a Video

My LinkedIn Learning course is getting really strong positive feedback. Today, I want to peel back the covers a bit and talk about how it came to be.

Before I struck a deal with LinkedIn, I talked to some of the other popular training sites. Many of them will buy you a microphone and some screen-recording software, and then you go to town! They even “let” you edit your own videos. Those aren’t my skill sets, and I think the quality often shines through. Just not in a good way.

I had a great team at LinkedIn. From conceptualizing the course and the audience through final production, it’s been a blast. Decisions were made based on what’s best for the student, like doing a video course so we could show me drawing on a whiteboard, rather than showing fancy pictures and implying that that’s what you need to create to threat model like the instructor.

My producer Rae worked with me and taught me how to write for video. It’s a very different form than books or blogs, and to be frank, it took effort to get me there. It took more effort to get me to warm up on camera and make good use of the teleprompter(!), and that’s an ongoing learning process for me. The team I work with there manages to be supportive and directive, and to push without pushing too hard. They should do a masterclass in coaching and feedback.

But the results are, I think, fantastic. The version of me that’s recorded is, in a very real way, better than I ever am. It’s the magic of Hollywood: seven takes of every sentence, with the team giving me feedback on how each sounded and what to improve.

The first course is “Learning Threat Modeling for Security Professionals.”
