Jonathan Marcil’s Threat Modeling Toolkit talk

There’s a lot of threat modeling content here at AppSec Cali, and sadly, I’m only here today. Jonathan Marcil has been a guest here on Adam & friends, and today is talking about his toolkit: data flow diagrams and attack trees.

His talk is very time-constrained, and it’s standing room only.

  • Threat modeling is an appsec activity: understanding attackers and systems.
  • For security practitioners and software engineers. A tool to help clarify what the system is for reviewers. Highlight mitigations or requirements.
  • Help catch important things despite chaos.
  • Must be collaborative: communication is key.
  • Being wrong is great: people get engaged to correct you!
  • Data flow diagrams vs connection flow diagrams: visual overload. This is not an architectural doc, but an aid to security discussion. He suggests extending the system modeling approach to fit your needs, which is great, and is why I put my definition of a DFD3 on github; let’s treat our tools as artifacts like developers do.
  • An extended example of modeling Electrum.
  • The system model helps organize your own thoughts. Build a visual model of the things that matter to you, leave out the bits that don’t matter.
  • Found a real JSONRPC vuln in the wallet because of investigations driven by system model.
  • His models also have a “controls checklist”: “these are the controls I think we have.” Controls are tied by numbers to parts of the diagram. Green checkmarks are a great motivator.
  • Discussion of one line vs two; would another threat modeling expert be able to read this diagram? What would be a better approach for a SAML-based system? Do you need trust boundaries between the browser and the IDP? What’s going through your head as you build this?
  • Use attack trees to organize threat intelligence: roots are goals, leaves are routes to goals. If the goal is to steal cryptocurrency, one route is to gain wallet access, via stealing the physical wallet or software access. (Sorry, I’m bad at taking photos as I blog.) He shows the attack tree growing in a nice succession of slides.
  • Attack trees are useful because they’re re-usable.
  • Uses PlantUML to draw trees with code, which has a bunch of advantages: version control, automatically balanced trees. (A minimal sketch follows this list.)
  • Questions: How to collaborate with and around threat models? How to roll out to a group of developers? How to sell them on doing something beyond a napkin.
  • Diagrams for architects versus diagrams for developers.
  • If we had an uber-tree, it wouldn’t be useful because you need to scope it and cut it. (Adam adds: perhaps scoping and cutting are easier than creating, if the tree isn’t overwhelming?)
  • Link attack tree to flow diagram; add the same numbered controls to the attack tree.
  • If you can sit in the threat modeling meeting and say nothing, you’ve won!
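
To make the trees-as-code idea concrete, here’s a minimal sketch (mine, not Jonathan’s) that holds the tree from his cryptocurrency example as nested Python tuples and emits PlantUML’s mindmap syntax; the “phish” leaf is a hypothetical addition:

```python
# A tiny attack tree: the root node is the attacker's goal, and
# children are routes to that goal. Node names echo the talk's
# cryptocurrency example; the emitter targets PlantUML's mindmap
# syntax, one way to keep trees as version-controlled code.

tree = ("Steal cryptocurrency", [
    ("Gain wallet access", [
        ("Steal the physical wallet", []),
        ("Gain software access", [
            ("Phish the wallet passphrase", []),  # illustrative, not from the talk
        ]),
    ]),
])

def to_plantuml(node, depth=1, lines=None):
    """Render the tree as a PlantUML mindmap; the '*' count sets the depth."""
    if lines is None:
        lines = ["@startmindmap"]
    label, children = node
    lines.append("*" * depth + " " + label)
    for child in children:
        to_plantuml(child, depth + 1, lines)
    if depth == 1:
        lines.append("@endmindmap")
    return "\n".join(lines)

print(to_plantuml(tree))
```

Keeping the tree in code is what buys the version-control and diffing advantages he mentioned; one could also attach the same numbered controls as node labels to link the tree back to the flow diagram.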

Lastly, Jonathan did a great job of live-tweeting his own talk.

AppSec Cali 2018: Izar Tarandach

I’m at the OWASP AppSec Cali event, and while there’ll be video, I’m taking notes:

Context for the talk

  • What fails during the development process? Incomplete requirements, non-secure design, lack of security mindset, leaky development
  • These failures are threats which can be mitigated. (e.g., compliance and risk requirements address incomplete requirements)
  • We keep failing in the same way. How often are developers required to pass a security interview to get a job?
  • Story of Alice the manager, and Bob the developer who learns about a SQL injection in their legacy code. Bob is overwhelmed by security requirements.
  • “The problem with programmers is that you can never tell what a programmer is doing until it is too late.” — Seymour Cray
  • Security team objective: be informed about product flow; help developers avoid writing and deploying security issues; stop being a bottleneck. So focus secure development on the developer, not the security expert.

Notable Security Events

  • How to integrate security expertise into development in a more fluid way. Does this tie to “the spec”?
  • Developers don’t know that their changes are security relevant
  • Funny example of a training quiz that doesn’t have a learning goal
  • Noel Burch’s hierarchy of competence. From unconscious incompetence through unconscious competence.
  • Learning: step-by-step, instructions, theory; training: repetition, muscle memory; applying: real life doing.
  • Tie domains to notable events, use checklists for those notable events.
  • Specifically phrased as “if you did X, do Y.” Each “Y” must be in the language of the developer, concise, testable, and supported by training. (A hypothetical sketch follows this list.)
  • Ran an experiment, got solid feedback.
  • Short training gets used more.
  • Crisply defined responsibilities by role.
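
Izar didn’t show code, but the “if you did X, do Y” structure maps naturally onto a simple lookup from notable events to checks. Here’s a hypothetical sketch; the events and checks are illustrative, not from the talk:

```python
# Hypothetical sketch: each notable event (the "X") maps to concise,
# testable guidance (the "Y") in the developer's language. Events and
# checks here are illustrative, not from the talk.

CHECKLISTS = {
    "added or changed a SQL query": [
        "Use parameterized queries; never concatenate user input",
        "Add a test that feeds the query a quote character",
    ],
    "added a new HTTP endpoint": [
        "Document who may call it, and enforce that with authorization checks",
        "Validate and bound every input parameter",
    ],
}

def checks_for(event: str) -> list[str]:
    """Return the 'do Y' items for a notable event, if any."""
    return CHECKLISTS.get(event, [])

for item in checks_for("added or changed a SQL query"):
    print("-", item)
```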

Threat Modeling Tooling from 2017

As I reflect back on 2017, I think it was a tremendously exciting year for threat modeling tooling. Some of the highlights for me include:

  • OWASP Threat Dragon is a web-based tool, much like the MS threat modeling tool, and explained in Open Source Threat Modeling; the code is on GitHub. What’s exciting is not that it’s open source, but that it’s web-driven, and that enables modern communication and collaboration in the way that’s rapidly replacing emailing documents around.
  • Tutamen is an exciting tool because its simplicity forced me to re-think what threat modeling tooling could be. Right now, you upload a Visio diagram, and you get back a threat list in Excel, covering OWASP, STRIDE, CWE and CAPEC. If Threat Dragon is an IDE, Tutamen is a compiler.
  • We’re seeing real action in security languages. Fraser Scott is driving an OWASP Cloud Security project to create structured stories about threats and controls. If Tutamen is a compiler, this project lets us think about different include files. (The two are not yet, and may never be, integrated.) And closely related, Continuum Security has a BDD-Security project.
  • Continuum’s also doing interesting work with IriusRisk, which they describe as “a single integrated console to manage application security risk throughout the software development process.” If the tools above are about depth, IriusRisk is about helping large organizations with breadth.

Did you see anything that was exciting that I missed? Please let me know in the comments!

Vulnerabilities Equities Process and Threat Modeling

[Update: More at DarkReading, “The Critical Difference Between Vulnerabilities Equities & Threat Equities.”]

The Vulnerabilities Equities Process (VEP) is how the US Government decides if they’ll disclose a vulnerability to the manufacturer for fixing. The process has come under a great deal of criticism, because it’s never been clear what’s being disclosed, what fraction of vulnerabilities are disclosed, if the process is working, or how anyone without a clearance is supposed to evaluate that beyond “we’re from the government, we’re here to help,” or perhaps “I know people who managed this process, they’re good folks.” Neither of those is satisfactory.

So it’s a very positive step that on Wednesday, White House Cybersecurity Coordinator Rob Joyce published “Improving and Making the Vulnerability Equities Process Transparent is the Right Thing to Do,” along with the process. Schneier says “I am less [pleased]; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes.”

I have two overall questions, and an observation.

The first question is, was the published policy written when we had commitments to international leadership and being a fair dealer, or was it created or revised with an “America First” agenda?

The second question relates to there being four equities to be considered. These are the “major factors” that senior government officials are supposed to consider in exercising their judgement. But, surprisingly, there’s an “additional” consideration. (“At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound.”) Does that imply that those officials are not required to weigh public desire for resilient and safe systems? What does it mean that the “additionally” sentence is not an equity being considered?

Lastly, the observation is that the VEP is all about vulnerabilities, not about flaws or design tradeoffs. From the charter, pages 9-10:

The following will not be considered to be part of the vulnerability evaluation process:

  • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.
  • Misuse of available device features that enables non-standard operation.
  • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.
  • Stating/discovering that a device/system has no inherent security features by design.

Threat Modeling is the umbrella term for security engineering to discover and deal with these issues. It’s what I spend my days on, because I see the tremendous effort in dealing with vulnerabilities is paying off, and we see fewer of them in well-engineered systems.

In October, I wrote about the fact we’re getting better at dealing with vulnerabilities, and need to think about design issues. I closed:

In summary, we’re doing a great job at finding and squishing bugs, and that’s opening up new and exciting opportunities to think more deeply about design issues. (Emergent Design Issues)

Here, I’m going to disagree with Bruce, because I think that this disclosure shows us an important detail that we didn’t previously know. Publication exposes it, and lets us talk about it.

So, I’m going to double-down on what I wrote in October, and say that we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure. As Bill Gates wrote 15 years ago, we need systems that people “will always be able to rely on, [] to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony.”

We cannot achieve that goal with the VEP being narrowly scoped. It must evolve to deal with the sorts of flaws and design tradeoffs that threat modeling helps us find.

Photo by David Clode on Unsplash.

Data Flow Diagrams 3.0

In the Brakesec podcast, I used a new analogy for why we need to name our work. When we talk about cooking, we have very specific recipes that we talk about: Julia Child’s beef bourguignon. Paul Prudhomme’s blackened fish. We hope that new cooks will follow the recipes until they get a feel for them, and that they can then start adapting and modifying them, as they generate mental models of what they’re doing.

But when we talk about threat modeling, we don’t label our recipes. We say “this is how to threat model,” as if that’s not as broad as “this is how to cook.”

And in that podcast, I realized that I’ve been guilty of definition drift in how I talk about data flow diagrams. Data flow diagrams (DFDs) are also called “threat model diagrams” because they’re so closely associated with threat modeling. And as I’ve used them over the course of a decade, there have been many questions:

  • Do you start with a context diagram?
  • What’s a multi-process, and when should I use one?
  • Do I really need to draw single-headed arrows? They make my diagram hard to read!
  • Is this process inside this arc? Is an arc the best way to show a trust boundary?
  • Should I color things?

Those questions led me to initiate changes, such as showing a process as a rounded rectangle (versus a circle), eliminating rules such as “all arrows are uni-directional,” and advocating for trust boundaries as labeled boxes.

What I have not done is been crisp about what these changes are in a way that lets a team say “we use v3 DFDs” the way they might say “we use Python 3.” (ok, no one says either, I know!)

I’m going to retroactively label all of these changes as DFD3.0. DFD1 was a 1970s construct. DFD2 was the critical addition of trust boundaries. And a version 3 DFD is defined as follows (a hypothetical sketch in code follows the list):

  1. It uses 5 symbols. A rectangle represents an external entity, a person or code outside your control. A rounded rectangle represents a process. They’re connected by arrows, which can be single or double headed. Data stores are represented by parallel lines. A trust boundary is a closed shape, usually a box. All lines are solid, except those used for trust boundaries, which are dashed or dotted. (There is no “multi-process” symbol in DFD3.)
  2. It must not* depend on the use of color, but can use color for additional information.
  3. All elements should have a label.
  4. You may have a context diagram if the system is complex. One is not required.

* Must, must not, should, should not are used per IETF norms.
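
DFD3 defines drawing conventions, not a file format, but the five symbols also suggest a data model. Here’s a hypothetical sketch in Python dataclasses, my illustration rather than part of the definition:

```python
# Hypothetical sketch: the five DFD3 symbols as data types. This is an
# illustration of how the pieces relate, not part of the DFD3 definition,
# which is about drawing conventions.

from dataclasses import dataclass, field

@dataclass
class Element:
    label: str  # rule 3: all elements should have a label

@dataclass
class ExternalEntity(Element):  # rectangle: person or code outside your control
    pass

@dataclass
class Process(Element):  # rounded rectangle; no multi-process in DFD3
    pass

@dataclass
class DataStore(Element):  # parallel lines
    pass

@dataclass
class DataFlow(Element):  # solid line, single- or double-headed arrow
    source: Element = None
    dest: Element = None
    double_headed: bool = False

@dataclass
class TrustBoundary(Element):  # closed shape, dashed or dotted
    contains: list = field(default_factory=list)

# A minimal example: a browser talking to a server process.
browser = ExternalEntity("User's browser")
server = Process("Web front end")
flow = DataFlow("HTTPS requests", source=browser, dest=server)
boundary = TrustBoundary("Data center", contains=[server])
```

One could imagine a checker that walks such a model and enforces the labeling and color rules mechanically, which is part of why treating diagrams as artifacts appeals to me.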

This also allows us to talk about what might be in a DFD3.1. I know that I usually draw disks with the “drum” symbol, and I see a lot of people using that. It seems like a reasonable addition.

Using specific naming also allows us to fork. If you want to define a different type of DFD, have at it. If we have a bunch, we can figure out how to keep things clear. Oh, and speaking of forking, I put this on github: DFD3.

Using specific naming allows us to talk about testing and maturity in the sense of “this is in alpha test.” “This has been used for several years, we took feedback, adjusted, and now it’s release quality.” I think that DFD3 is release quality, but it probably needs some beta testing for the definitions.

Similarly, DREAD has a bunch of problems, including a lack of definition. I use mention of DREAD as a way to see if people are threat modeling well. And one challenge there is that people silently redefine DREAD to mean something other than what it meant when Michael Howard and David LeBlanc talked about it in Writing Secure Code (2nd ed, 2003). If you want to build something new, your customers and users need to understand that it’s new, so they don’t get confused by it. Therefore, you need to give your new thing a new name. You could call it DREAD2, DRE4D, or DRECK; I don’t really care. What I care about is that it’s easily distinguished, and the first step towards that is a new name.

[Update: What’s most important is not the choices that I’ve made for what’s in DFD3, but the grouping of those choices into DFD3, so that you can make your own choices and our tools can compete in the market.]

Emergent Design Issues

It seems like these days, we want to talk about everything in security as if it’s a vulnerability. For example:

German researchers have discovered security flaws that could let hackers, spies and criminals listen to private phone calls and intercept text messages on a potentially massive scale – even when cellular networks are using the most advanced encryption now available.

Experts say it’s increasingly clear that SS7, first designed in the 1980s, is riddled with serious vulnerabilities that undermine the privacy of the world’s billions of cellular customers. The flaws discovered by the German researchers are actually functions built into SS7 for other purposes – such as keeping calls connected as users speed down highways, switching from cell tower to cell tower – that hackers can repurpose for surveillance because of the lax security on the network. (“German researchers discover a flaw that could let anyone listen to your cell calls.” Washington Post, 2014).

But these are not vulnerabilities, because we can have endless debate about whether they should be fixed. (Chrome exposing passwords is another example.) If they’re not vulnerabilities, what are they? Perhaps they’re flaws? One definition of flaws reads:

“Flaws are often much more subtle than simply an off-by-one error in an array reference or use of an incorrect system call,” the report notes. “A flaw might be instantiated in software code, but it is the result of a mistake or oversight at the design level.”

An example of such a flaw noted in the report is the failure to separate data and control instructions and the co-mingling of them in a string – a situation that can lead to injection vulnerabilities. (IEEE Report Reveals Top 10 Software Security Design Flaws)

In this sense, the SS7 issues are probably not “flaws,” because the system behavior isn’t unanticipated. But we don’t know. We don’t know what properties we should expect SS7 to have. For most software, the design requirements, the threat model, is not clear or explicit. Even when it’s explicit, it’s often not public. (Larry Loeb makes the same point here.)

For example, someone decided to write code to run a program on mouseover in PowerPoint; that code was tested, dialog text was written and internationalized, and so on. Someone documented it, and it’s worth pointing out that the documentation doesn’t apply to PowerPoint 2016. Was there a debate over the security of that feature when it shipped? I don’t know. When it was removed? Probably.

There’s a set of these, and I’m going to focus on how they manifest in Windows for reasons that I’ll get to. Examples include:

The reason I’m looking at these is because design questions like these emerge when a system is successful. Whatever else you want to say about it, Windows was successful and very widely deployed. As a system becomes more successful, the easily exploitable bugs are fixed, and the hard to fix design tradeoffs become relatively more important. As I wrote in “The Evolution of Secure Things:”

It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

That chaotic real world exposes a set of issues that may or may not have been visible during product design. In threat modeling, identification of issues is the most crucial step. If you fail to identify issues, you will not manage those issues well. Another way to say that is: identifying issues is a necessary but not sufficient step.

The design choices listed above almost all predate threat modeling as a structured practice at Microsoft. But there have been other choices, like Windows Wi-Fi Sense or new telemetry in Windows 10. We can disagree with those design choices, but it’s clear that there were internal discussions of the right business tradeoffs. So we go back to the definition of a flaw, “a mistake or oversight at the design level.” These were not oversights. Were they design mistakes? That’s harder. The designers knew exactly what they were designing, and the software worked as planned. It was not received as planned, and it is certainly being used in unexpected ways.

There are interesting issues of composition, especially in backup authentication. That problem is being exploited in cryptocurrency thefts:

Mr. Perklin and other people who have investigated recent hacks said the assailants generally succeeded by delivering sob stories about an emergency that required the phone number to be moved to a new device — and by trying multiple times until a gullible agent was found.

“These guys will sit and call 600 times before they get through and get an agent on the line that’s an idiot,” Mr. Weeks said.

Coinbase, one of the most widely used Bitcoin wallets, has encouraged customers to disconnect their mobile phones from their Coinbase accounts.

One can imagine a lot of defenses, but “encouraging” customers to not use a feature may not be enough. As online wallet companies grow, they need to threat model better, and perhaps that entails turning off the feature. (I don’t know their businesses well enough to simply assert an answer.)

In summary, we’re doing a great job at finding and squishing bugs, and that’s opening up new and exciting opportunities to think more deeply about design issues.

PowerPoint Screen capture via Casey Smith.

[Update Dec 13: After a conversation with Gary McGraw, I think I may have read the CSD definition of flaw too narrowly.]

Threat Modeling “App Democracy”

“Direct Republican Democracy?” is a fascinating post at Prawfsblog, a collective of law professors. In it, Michael T. Morley describes a candidate for Boulder City Council with a plan to vote “the way voters tell him,” and discusses how that might not be really representative of what people want, and how it differs from (small-r) republican government. Worth a few moments of your time.

Threat Modeling and Architecture

“Threat Modeling and Architecture” is the latest in a series at Infosec Insider.

After I wrote my last article on Rolling out a Threat Modeling Program, Shawn Chowdhury asked (on LinkedIn) for more information on involving threat modeling in the architecture process. It’s a great question, except it involves the words “threat,” “modeling,” and “architecture.” And each of those words, by itself, is enough to get some people twisted around an axle.
