
Safety and Security in Automated Driving

“Safety First For Automated Driving” is a big, overarching whitepaper from a dozen automotive manufacturers and suppliers.

One way to read it is that the automotive disciplines have strongly developed safety cultures, which generally do not consider cybersecurity problems. This paper is the cybersecurity specialists making the argument that cyber can fit into safety, and showing how to do so.

In a sense, this whitepaper captures a strategic threat model. What are we working on? Autonomous vehicles. What can go wrong? Security issues of all types. What are we going to do? Integrate with and extend the existing safety discipline, and give specific threat information and mitigation strategies to component designers.

I find some parts of it surprising. (I would find it more surprising if I were to look at a 150 page document and not find anything surprising.)

“Contrary to the commonly used definition of a minimal risk condition (MRC), which describes only a standstill, this publication expands the definition to also include degraded operation and takeovers by the vehicle operator. Final MRCs refer to MRCs that allow complete deactivation of the automated driving system, e.g. standstill or takeover by the vehicle operator.”

One of the “minimal risk” maneuvers listed (table 4) is an emergency stop. And while an emergency stop may certainly be a risk minimizing action in some circumstances, describing it as such is surprising, especially when presented in contrast to a “safe stop” maneuver.
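The expanded definition is easier to see as a fallback policy. Here’s a minimal sketch, assuming hypothetical predicate names (failure_contained and so on); the maneuver set loosely follows the paper’s table 4, but the selection logic is my own illustration, not the paper’s.

```python
from enum import Enum

class Maneuver(Enum):
    """Illustrative minimal risk maneuvers, loosely following the
    whitepaper's expanded MRC definition."""
    DEGRADED_OPERATION = "continue with reduced capability"
    DRIVER_TAKEOVER = "hand control to the vehicle operator"
    SAFE_STOP = "pull over and come to a stop out of traffic"
    EMERGENCY_STOP = "brake to a standstill in the current lane"

def select_maneuver(failure_contained: bool, driver_available: bool,
                    can_reach_shoulder: bool) -> Maneuver:
    # Hypothetical policy: prefer the least disruptive condition that
    # the remaining system capability can still guarantee.
    if failure_contained:
        return Maneuver.DEGRADED_OPERATION
    if driver_available:
        return Maneuver.DRIVER_TAKEOVER   # a "final" MRC
    if can_reach_shoulder:
        return Maneuver.SAFE_STOP         # a "final" MRC
    return Maneuver.EMERGENCY_STOP        # risk-minimizing only sometimes
```

The ordering makes the surprise in the paper concrete: an emergency stop sits at the bottom of the preference list, a last resort rather than a generically “safe” outcome.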

It’s important to remember that driving is incredibly dangerous. In the United States in 2018, an estimated 40,000 people lost their lives in car crashes, and 4.5 million people were seriously injured. (I’ve seen elsewhere that a million of those are hospitalized.) A great many of those injuries are caused by either drunk or distracted drivers, and autonomous vehicles could save many lives, even if imperfect.

Which brings me to a part that I really like, which is the ‘three dimensions of risk treatment’ figure (Figure 8, shown). Words like “risk” and “risk management” encompass a lot, and this figure is a nice side contribution of the paper.

I also like Figures 27 & 28 (shown), which show risks associated with a generic architecture. Having this work available allows systems builders to consider the risks to the components they’re working on, lets us have a conversation about the systemic risks that exist, and allows security experts to ask “is this the right set of risks for systems builders to think about?”

[Figure: a chart of system components in an autonomous vehicle]

Promoting Threat Modeling Work

Quick: are all the flowers the same species?

People regularly ask me to promote their threat modeling work, and I’m often happy to do so, even when I have questions about it. There are a few things I look at before I do, because I want to promote work that moves the field forward so we all benefit. Some of the things I look for include:

  • Specifics. If you have a new threat modeling approach, that’s great. Describe the steps concisely and crisply. (If I can’t find a list in your slide deck or paper, it’s not concise and crisp.) If you have a new variant on a building block or a new way to answer one of the four questions, be clear about that, so that those seeing your work can easily put it into context and know what’s different. The four question framework makes this easy. For example: “this is an extension of ‘what are we working on,’ and you can use any method to answer the other questions.” A sentence like that makes it easy for those thinking of picking up your tool to place it immediately in context (see the sketch after this list).
  • Names. Name your work. We don’t discuss Guido’s programming language with a strange dependence on whitespace, we discuss Python. For others to understand it, your work needs a name, not an adjective. There are at least half a dozen distinct ‘awesome’ ways to threat model being promoted today, and their promoters don’t make it easy to figure out what’s different from the many other awesome approaches. These descriptors also carry an implication that only they are awesome, and the rest, by elimination, must suck. Lastly, I don’t believe that anyone is promoting The Awesome Threat Modeling Method; if you are, I apologize, as I was looking for an illustrative name that avoids calling anyone out.

    (Microsoft cast a pall over the development of threat modeling by having at least four different things labeled ‘the Microsoft approach to threat modeling.’ Those included DFD+STRIDE, asset-entry, Patterns & Practices, and TAM, and variations on each.) Also, we discuss Python 2 versus Python 3, not ‘the way Guido talked about Python in 2014 in that video that got taken off YouTube because it used walk-on music.’

  • Respect. Be respectful of the work others have done, and the approaches they use. Threat modeling is a very big tent, and what doesn’t work for you may well work for others. This doesn’t mean ‘never criticize,’ but it does mean don’t throw shade. It’s fine to say ‘Threat modeling an entire system at once doesn’t work in agile teams at west coast software companies.’ It’s even better to say ‘Writing misuse cases got an NPS of -50 and Elevation of Privilege scored 15 at the same 6 west coast companies founded in the last 5 years.’
    I won’t promote work that tears down other work for the sake of tearing it down, or that does so by saying ‘this doesn’t work’ without specifics of the situation in which it didn’t work. Similarly, it’s fine to say “it took too long” if you say how long it took to do what steps, and, ideally, quantify ‘too long.’
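To illustrate the specifics point, here’s a minimal sketch of what a crisp positioning statement contains. The TechniqueDescription structure and the example method name are hypothetical; only the four questions themselves come from the framework.

```python
from dataclasses import dataclass

FOUR_QUESTIONS = (
    "What are we working on?",
    "What can go wrong?",
    "What are we going to do about it?",
    "Did we do a good job?",
)

@dataclass
class TechniqueDescription:
    """Hypothetical structure for positioning a new technique."""
    name: str                # a real name, not an adjective
    extends: str             # which of the four questions it answers
    differs_from: list[str]  # prior work readers should contrast it with

    def summary(self) -> str:
        # A technique should plug into exactly one of the four questions.
        assert self.extends in FOUR_QUESTIONS
        return (f"{self.name} extends '{self.extends}'; any method can "
                f"answer the other questions. Contrast with: "
                f"{', '.join(self.differs_from)}.")

# Illustrative use; the method name is made up:
print(TechniqueDescription(
    name="Example Flow Sketching",
    extends="What are we working on?",
    differs_from=["DFD+STRIDE"],
).summary())
```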

I admit that I have failed at each of these in the past, and endeavor to do better. Specifics, names, and respectful conversation help us understand the field of flowers.

What else should we do better as we improve the ways we tackle threat modeling?

Photo by Stephanie Krist on Unsplash.

Testing Building Blocks

There are a couple of new, short (4-page), interesting papers from a team at KU Leuven.

What makes these interesting is that they are digging into better-formed building blocks of threat modeling, comparing them to requirements, and analyzing how they stack up.

The work is centered on threat modeling for privacy and data protection, but what they look at includes STRIDE, CAPEC, and CWE. What’s valuable is not just the results of the comparison, but that they compare and contrast between techniques (DFD variants vs CARiSMA extended; STRIDE vs CAPEC or OWASP). Comparing building blocks at a granular level allows us to ask “what went wrong in that threat modeling project?” and tweak one part of it, rather than throwing out threat modeling or trying to retrain people in an entire method.
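As a toy illustration of granular comparison (my sketch, not the KU Leuven team’s methodology): if each catalog entry is tagged with the STRIDE categories it covers, coverage gaps can be traced to a single building block. The STRIDE-to-property mapping below is standard; the catalog tagging format is an assumption.

```python
# Standard mapping of STRIDE categories to the properties they violate.
STRIDE = {
    "Spoofing": "authentication",
    "Tampering": "integrity",
    "Repudiation": "non-repudiation",
    "Information disclosure": "confidentiality",
    "Denial of service": "availability",
    "Elevation of privilege": "authorization",
}

def stride_gaps(catalog: dict[str, set[str]]) -> set[str]:
    """Return the STRIDE categories that no catalog entry addresses.
    `catalog` maps an entry name to the categories it covers; tagging
    entries this way is an illustrative assumption."""
    covered = set().union(*catalog.values()) if catalog else set()
    return set(STRIDE) - covered

# e.g. stride_gaps({"CWE-287 Improper Authentication": {"Spoofing"}})
# -> the five categories this one entry leaves unaddressed
```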

20 Years of STRIDE: Looking Back, Looking Forward

“Today, let me contrast two 20-year-old papers on threat modeling. My first paper on this topic, “Breaking Up Is Hard to Do,” written with Bruce Schneier, analyzed smart-card security. We talked about categories of threats, threat actors, assets — all the usual stuff for a paper of that era. We took the stance that “we experts have thought hard about these problems, and would like to share our results.”

Around the same time, on April 1, 1999, Loren Kohnfelder and Praerit Garg published a paper in Microsoft’s internal “Interface” journal called “The Threats to our Products.” It was revolutionary, despite not being publicly available for over a decade. What made the Kohnfelder and Garg paper revolutionary is that it was the first to structure the process of how to find threats. It organized attacks into a model (STRIDE), and that model was intended to help people find problems, as noted…”

Read the full version of “20 Years of STRIDE: Looking Back, Looking Forward” on Dark Reading.
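The structuring idea outlived the original paper. One later refinement is STRIDE-per-element, which prompts different categories for different DFD element types; here’s a minimal sketch using the commonly cited applicability chart (the element names in the example comment are hypothetical).

```python
# STRIDE-per-element: for each DFD element type, the categories most
# commonly applicable. R on data stores mainly matters for logs.
APPLICABLE = {
    "external entity": {"S", "R"},
    "process":         {"S", "T", "R", "I", "D", "E"},
    "data flow":       {"T", "I", "D"},
    "data store":      {"T", "R", "I", "D"},
}

def enumerate_threats(elements: dict[str, str]) -> list[tuple[str, str]]:
    """Yield (element, category) prompts to consider, one per pairing."""
    return [(name, cat)
            for name, kind in elements.items()
            for cat in sorted(APPLICABLE[kind])]

# e.g. enumerate_threats({"web frontend": "process", "orders db": "data store"})
```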

Spoofing in Depth

I’m quite happy to say that my next LinkedIn Learning course has launched! This one is all about spoofing.

It’s titled “Threat Modeling: Spoofing in Depth.” It’s free until at least a week after RSA.

Also, I’m exploring the idea that security professionals lack a shared body of knowledge about attacks, and that an entertaining and engaging presentation of such a BoK could be a useful contribution. One way to test this is to ask how often you hear attacks discussed at a level of abstraction that puts them into a category other than “OMG the sky is falling, patch now.” Another is to watch for fluidity in moving from one type of spoofing attack to another.

Part of my goal for the course is to help people see that attacks cluster and have similarities, and that STRIDE can act as a framework for chunking knowledge.
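As a sketch of what that chunking could look like (explicitly not the course’s outline), here’s one hypothetical clustering of spoofing attacks by what is being spoofed:

```python
# One hypothetical chunking of spoofing attacks by target; the clusters
# and examples are illustrative, not the course's structure.
SPOOFING_CLUSTERS = {
    "a person":  ["phishing email sender", "CEO-fraud phone call"],
    "a machine": ["ARP spoofing", "DNS cache poisoning", "rogue Wi-Fi AP"],
    "a file":    ["library search-path hijack", "typosquatted package"],
    "a process": ["named-pipe squatting", "fake login prompt"],
}

def fluency_drill() -> None:
    """Walking the clusters is the 'fluidity' test mentioned above."""
    for target, examples in SPOOFING_CLUSTERS.items():
        print(f"Spoofing {target}: {', '.join(examples)}")
```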
