Magical Approaches to Threat Modeling

I was watching a talk recently where the speaker said “STRIDE produces waaaay too many threats! What we really want is a way to quickly get the right threats!”*

He’s right and he’s wrong. There are exactly three ways to get to a short list of the most meaningful threats to a new product, system or service that you’re building. They are:

  • Magically produce the right list
  • Have experts who are so good they never even think about the wrong threat
  • Produce a list that’s too long and prune it

That’s it. (If you see a fourth, please tell me!)

Predictions are hard, especially about the future. It’s hard to know what’s going to go wrong in a system under construction, and it’s harder when that system changes because of your prediction.

So if we don’t want to rely on Harry Potter waving a wand, getting frustrated, and asking Hermione to create the right list, then we’re left with either trusting experts or over-listing and pruning.

Don’t get me wrong. It would be great to be able to wave a magic wand or otherwise rapidly produce the right list without feeling like you’d done too much work. And if you always produce a short list, then your short list is likely to appear to be right.

Now, you may work in an organization with enough security expertise to execute perfect threat models, but I never have, and none of my clients seem to have that abundance either. (Which may also be a Heisenproblem: no organization with that many experts needs to hire a consultant to help them, except to get all their experts aligned.)

Also, I find that when I don’t use a structure, I miss threats. I’ve noticed that I have a recency bias towards attacks I’ve seen lately, and a bias towards “fun” attacks (these days that includes spoofing, because I enjoy solving those). And so I use techniques like STRIDE per element to help structure my analysis.

It may also be that approaches other than STRIDE produce lists that have a higher concentration of interesting threats, for some definition of “interesting.” Fundamentally, there’s a set of tradeoffs you can make. Those tradeoffs include:

  • Time taken
  • Coverage
  • Skill required
  • Consistency
  • Magic pixie dust required

I’m curious, what other tradeoffs have you seen?

Whatever tradeoffs you may make, given a choice between overproduction and underproduction, you probably want to find too many threats, rather than too few. (How do you know what you’re missing?) Some of getting the right number is the skill that comes from experience, and some of it is simply the grindwork of engineering.

(* The quote is not exact, because I aim to follow Warren Buffett’s excellent advice of praise specifically, criticize generally.)

Photo: Magician, by ThaQeLa.

Threat Modeling Password Managers

There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman’s “AgileBits Isn’t Forcing 1Password Data to Live in the Cloud,” but also worth reading are Ken White’s “Who moved my cheese, 1Password?,” and “Why We Love 1Password Memberships,” by 1Password maker AgileBits. I’ve recommended 1Password in the past, and I’m not sure if I agree with AgileBits that “1Password memberships are… the best way to use 1Password.” This post isn’t intended to attack anyone, but to try to sort out what’s at play.

This is a complex situation, and you’ll be shocked, shocked to discover that I think a bit of threat modeling can help. Here’s my model of what we’re working on:

[Diagram: password manager]

Let me walk you through this: There’s a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I’m not drawing those boundaries. The two boundaries displayed are where the data and the “password manager.exe” live. Of course, this might not be an exe; it might be a .app, it might be JavaScript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there’s a boundary around that.

What can go wrong?

If password storage is local, there is no fat target at AgileBits. Even assuming the passwords are stored well (say, with 10K iterations of PBKDF2), they’re more vulnerable once stolen, and they’re easier to steal en masse from a central service than from your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than AgileBits. That might be true, but detection comes second; the first question is how likely an attacker is to break in. They’ll succeed against you and they’ll succeed against AgileBits, and they’ll get a boatload more from breaking into AgileBits. This is not intended as a slam of AgileBits; it’s an outgrowth of ‘assume breach.’) I believe AgileBits has a simpler operation than Dropbox, and fewer skilled staff in security operations than Dropbox. The simpler operation probably means there are fewer use cases, plugins, partners, etc, and means AgileBits is more likely to notice some attacks. To me, this nets out as neutral.

Fleishman promises to explain “how AgileBits’s approach to zero-knowledge encryption… may be less risky and less exposed in some ways than using Dropbox to sync vaults.” I literally don’t see his argument; perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg’s comment about how they encrypt the passwords. I think of what they’ve done as a very strong mitigation, with the probably reasonable assumption that they haven’t bolluxed their key generation. See the 1Password Security Design white paper.]

To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you’re safe. Not so if your passwords are somewhere else and that somewhere else is compromised.
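As a side note on the “stored well” parenthetical above, here is a minimal sketch of the kind of PBKDF2 stretching a password manager might apply to a master password before using it as a vault key. The function name, salt size, and iteration count are illustrative assumptions, not a description of 1Password’s actual design.

    import hashlib
    import os

    def derive_vault_key(master_password: str, salt: bytes, iterations: int = 10_000) -> bytes:
        """Stretch a master password into a vault key using PBKDF2-HMAC-SHA256."""
        return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"), salt, iterations)

    # Example: derive a key for a new local vault.
    salt = os.urandom(16)  # random per-vault salt; stored with the vault, not secret
    key = derive_vault_key("correct horse battery staple", salt)
    print(key.hex())

Stretching only slows down offline guessing once a vault has been stolen; it doesn’t change the fat-target analysis above.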

The next issue is where’s the code? If the password manager executable is stored on your device, then to replace it, the attacker either needs to compromise your device, or to install new code on it. An attacker who can install new code on your computer wins, which is why secure updates matter so much. An attacker who can’t get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, then the ease of replacing it goes way up. There are two modes of attack. Either you can break into one of the web server(s) and replace the .js files with new ones, or you can MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I’ll assume that 1Password uses certificate pinning, but did not chase down where their JS is served.)

Netted out, getting code from a website each time you run is a substantial drop in security.

What should we do about it?

So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20-character random password from your phone into something else is painful.) In their blog post, AgileBits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords to a password manager going out of business is important.

Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby move your retirement account tables to the cloud for you? Can a manager control where you store a team vault?

This decision is complicated by wall-of-text descriptions. I wish that AgileBits would do a better job of crisply and cleanly laying out the choice that their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like this one as a good form, and the data should also be in each app as you set things up.) That’s not to say that AgileBits can’t continue to choose and recommend a default.

Does this help?

After years of working in these forms, I think it’s helpful as a way to break out these issues. I’m curious: does it help you? If not, where could it be better?

Umbrella Sharing and Threat Modeling

[Photo: shared umbrellas]

A month or so ago, I wrote “Bicycling and Threat Modeling,” about new approaches to bike sharing in China. Now I want to share with you “Umbrella-sharing startup loses nearly all of its 300,000 umbrellas in a matter of weeks.”

The Shenzhen-based company was launched earlier this year with a 10 million yuan investment. The concept was similar to those that bike-sharing startups have used to (mostly) great success. Customers use an app on their smartphone to pay a 19 yuan deposit fee for an umbrella, which costs just 50 jiao for every half hour of use.

According to the South China Morning Post, company CEO Zhao Shuping said that the idea came to him after watching bike-sharing schemes take off across China, making him realize that “everything on the street can now be shared.”

I don’t know anything about the Shanghaiist, but it’s quoting a story in the South China Morning Post, which closes:

Last month, a bicycle loan company had to close after 90 per cent of its bikes were stolen.

Secure updates: A threat model

[Image: software updates]

Post-Petya there have been a number of alarming articles on insecure update practices. The essence of these stories is that tax software, mandated by the government of Ukraine, was used to distribute the first Petya, and that this can happen elsewhere. Some of these stories are a little alarmist, with claims that unnamed “other” software has also been used in this way. Sometimes the attack is easy because updates are unsigned, other times it’s because they’re also sent over a channel with no security.

The right answer to these stories is to fix the damned update software before people get more scared of updating. That fear will survive long after the threat is addressed. So let me tell you, [as a software publisher,] how to do secure updates, in a nutshell.

The goals of an update system are to:

  1. Know what updates are available
  2. Install authentic updates that haven’t been tampered with
  3. Strongly tie updates to the organization whose software is being updated. (Done right, this can also enable whitelisting software.)

Let me elaborate on those requirements. First, know what updates are available — the threat here is that an attacker stores your message “Version 3.1 is the latest revision, get it here” and sends it to a target after you’ve shipped version 3.2. Second, the attacker may try to replace your update package with a new one, possibly using your keys to sign it. If you’re using TLS for channel security, your TLS keys are only as secure as your web server, which is to say, not very. You want to have a signing key that you protect.

So that’s a basic threat model, which leads to a system like this:

  1. Update messages are signed, dated, and sequenced. The code which parses them carefully verifies the signatures on both messages, checks that the date is later than the previous message and the sequence number is higher. If and only if all are true does it…
  2. Get the software package. I like doing this over torrents. Not only does that save you money and improve availability, but it protects you against the “Oh hello there Mr. Snowden” attack. Of course, sometimes a belief that torrents have the “evil bit” set leads to blockages, and so you need a fallback. [Note this originally called the belief “foolish,” but Francois politely pointed out that that was me being foolish.]
  3. Once you have the software package, you need to check that it’s signed with the same key as before.
    Better to sign the update and the update message with a key you keep offline on a machine that has no internet connectivity.

  4. Since all of the verification can be done by software, and the signing can be done with a checklist, PGP/GPG are a fine choice. It’s standard, which means people can run additional checks outside your software, and it’s been analyzed heavily by cryptographers.
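To make steps 1 through 3 concrete, here is a minimal sketch of the client-side checks, using GPG for the signature verification. The manifest fields, file names, and stored “last seen” state are assumptions for illustration, not a description of any particular update system.

    import json
    import subprocess
    from datetime import datetime

    def gpg_verify(signature_path: str, data_path: str) -> bool:
        """Return True only if gpg confirms the detached signature over the data file."""
        result = subprocess.run(["gpg", "--verify", signature_path, data_path],
                                capture_output=True)
        return result.returncode == 0

    def check_update_message(manifest_path: str, sig_path: str, last_seen: dict) -> dict:
        """Steps 1-3: verify the signature, then insist on a later date and a higher
        sequence number than the last update message we accepted."""
        if not gpg_verify(sig_path, manifest_path):
            raise ValueError("bad signature on update message")
        with open(manifest_path) as f:
            manifest = json.load(f)  # e.g. {"version": "3.2", "date": "2017-07-09", "sequence": 42}
        if datetime.fromisoformat(manifest["date"]) <= last_seen["date"]:
            raise ValueError("update message is not newer than the last one seen")
        if manifest["sequence"] <= last_seen["sequence"]:
            raise ValueError("sequence number did not increase")
        return manifest  # only now go fetch and verify the package itself

A real client would also confirm that the package is signed by the same (pinned) key as previous releases, per step 3, rather than accepting anything in the local keyring.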

What’s above follows the four-question framework for threat modeling: what are we working on? (Delivering updates securely); what can go wrong? (spoofing, tampering, denial of service); what are we going to do about it? (signatures and torrents). The remaining question is “did we do a good job?” Please help us assess that! (I wrote this quickly on a Sunday morning. Are there attacks that this design misses? Defenses that should be in place?)

Threat Modeling Encrypted Databases

Adrian Colyer has an interesting summary of a recent paper, “Why your encrypted database is not secure,” in his excellent “morning paper” blog.

If we can’t offer protection against active attackers, nor against persistent passive attackers who are able to simply observe enough queries and their responses, the fallback is to focus on weaker guarantees around snapshot attackers, who can only obtain a single static observation of the compromised system (e.g., an attacker that does a one-off exfiltration). Today’s paper pokes holes in the security guarantees offered in the face of snapshot attacks too.


Many recent encrypted databases make strong claims of “provable security” against snapshot attacks. The theoretical models used to support these claims are abstractions. They are not based on analyzing the actual information revealed by a compromised database system and how it can be used to infer the plaintext data.

I take away two things: first, there’s a coalescence towards a standard academic model for database security, and it turns out to be a grounded model. (In contrast to models like the random oracle in crypto.) Second, all models are wrong, and it turns out that the model of a snapshot attacker seems…not all that useful.

Bicycling and Threat Modeling

[Photo: bikeshare]

The Economist reports on the rise of dockless bike sharing systems in China, along with the low tech ways that the system is getting hacked:

The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.

Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?

Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor one or the other. One of those characteristics is useful integration into business.

You can look at thoroughness by comparing bikes to the BMW carshare program I discussed in “The Ultimate Stopping Machine.” I think the fact that ferries trigger an anti-theft mechanism is somewhat surprising, and I wouldn’t dismiss a threat modeling technique, or criticize a team too fiercely, for missing it. That is, there’s nuance. (I’d be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.)

In the case of the dockless bikes, however, I would be skeptical of a technique that missed “reserving” a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labelled “dockless,” so you have an obvious contrast with a docked system.

When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration here and here.)

For these bikes, perhaps the most useful answer is not to focus on misbehavior, but to reward good behavior. The system wants bikes to be used, so why not reward people for leaving bikes in places where they’re picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)
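As a sketch of that second idea, here is a toy pricing rule that makes repeatedly checking out the same bike progressively more expensive. The threshold and the doubling factor are made-up numbers, not anything a bike-share operator actually uses.

    def ride_price(base_price: float, consecutive_checkouts: int, n: int = 10) -> float:
        """Charge the normal price for up to N consecutive checkouts of the same bike
        by the same rider, then escalate to discourage de facto reservation."""
        if consecutive_checkouts <= n:
            return base_price
        return base_price * (2 ** (consecutive_checkouts - n))  # doubles with each extra checkout

    # e.g. the 13th consecutive checkout of the same bike costs 8x the base price
    print(ride_price(1.0, 13))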

Photo by Viktor Kern.

Certificate pinning is great in stone soup

In his “ground rules” article, Mordaxus gives us the phrase “stone soup security,” where everyone brings a little bit and throws it into the pot. I always try to follow Warren Buffett’s advice, to praise specifically and criticize in general.

So I’m not going to point to a specific talk I saw recently, in which someone talked about pen testing IoT devices, and stated, repeatedly, that the devices, and device manufacturers, should implement certificate pinning. They repeatedly discussed how easy it was to add a self-signed certificate and intercept communication, and suggested that the right way to mitigate this was certificate pinning.

They were wrong.

If I own the device and can write files to it, I can not only replace the certificate, but I can change a binary to replace a ‘Jump if Equal’ with a ‘Jump if Not Equal,’ and bypass your pinning. If you want to prevent certificate replacement by the device owner, you need a trusted platform which only loads signed binaries. (The interplay of mitigations and bypasses that gets you there is a fine exercise if you’ve never worked through it.)
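For readers who haven’t implemented it, here is roughly what the pinning the speaker recommended amounts to; a minimal sketch with a hypothetical host and fingerprint. The whole defense reduces to a single comparison, which anyone who can modify the binary can patch out, exactly as described above.

    import hashlib
    import socket
    import ssl

    # Hypothetical values, for illustration only.
    PINNED_SHA256 = "00" * 32
    HOST = "api.example.com"

    def connect_with_pinning(host: str = HOST, port: int = 443) -> ssl.SSLSocket:
        """Open a TLS connection and refuse it unless the server's leaf certificate
        matches the pinned SHA-256 fingerprint."""
        context = ssl.create_default_context()
        sock = context.wrap_socket(socket.create_connection((host, port)),
                                   server_hostname=host)
        leaf = sock.getpeercert(binary_form=True)  # DER-encoded certificate
        if hashlib.sha256(leaf).hexdigest() != PINNED_SHA256:  # the one check an owner can patch out
            sock.close()
            raise ssl.SSLError("server certificate does not match the pin")
        return sock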

When I train people to threat model, I use this diagram to talk about the interaction between threats, mitigations, and requirements:

[Diagram: threats, mitigations, and requirements]

Is it a requirement that the device protect itself from the owner? If you’re threat modeling well, you can answer this question. You work through these interplaying factors. You might start from a threat of certificate replacement and work through a set of difficult to mitigate threats, and change your requirements. You might start from a requirements question of “can we afford a trusted bootloader?” and discover that the cost is too high for the expected sales price, leading to a set of threats that you choose not to address. This goes to the core of “what’s your threat model?” Does it include the device owner?

Is it a requirement that the device protect itself from the owner? This question frustrates techies: we believe that we bought it, we should have the right to tinker with it. But we should also look at the difference between the iPhone and a PC. The iPhone is more secure. I can restore it to a reasonable state easily. That is a function of the device protecting itself from its owner. And it frustrates me that there’s a Control Center button to lock orientation, but not one to turn location on or off. But I no longer jailbreak to address that. In contrast, a PC that’s been infected with malware is hard to clean to a demonstrably good state.

Is it a requirement that the device protect itself from the owner? It’s a yes or no question. Saying yes has an impact on the physical cost of goods. You need a more sophisticated, more expensive boot loader. You have to do a bunch of engineering work which is both straightforward and exacting. If you don’t have a requirement to protect the device from its owner, then you don’t need to pin the certificate. You can take the money you’d spend on protecting it from its owner, and spend it on other features.

Is it a requirement that the device protect itself from the owner? Engineering teams deserve a crisp answer to this question. Without a crisp answer, security risks running them around in circles. (That crisp answer might be, “we’re building towards it in version 3.”)

Is it a requirement that the device protect itself from the owner? Sometimes when I deliver training, I’m asked if we can fudge, or otherwise avoid answering. My answer is that if security folks want to own security decisions, they must own the hard ones. Kicking them back, not making tradeoffs, not balancing with other engineering needs, all of these reduce leadership, influence, and eventually, responsibility.

Is it a requirement that the device protect itself from the owner?

Well, is it?

A Privacy Threat Model for The People of Seattle

Some of us in the Seattle Privacy Coalition have been talking about creating a model of a day in the life of a citizen or resident of Seattle, and the way data about them is collected and used; that is, the potential threats to their privacy. In a typical threat modeling approach, we focus on a system that we’re building, analyzing or testing. In this model, I think we need to focus on the people, the ‘data subjects.’

I also want to get away from the one by one issues, and help us look at the problems we face more holistically.

[Image: news headline, “Feds Sue Seattle over FBI Surveillance”]

The general approach I use to threat model is based on 4 questions:

  1. What are you working on? (building, deploying, breaking, etc)
  2. What can go wrong?
  3. What are you going to do about it?
  4. Did you do a good job?

I think that we can address the first by building a model of a day, and driving into specifics in each area. For example, get up, check the internet, go to work (by bus, by car, by bike, walking), have a meal out…

One question that we’ll probably have to work on is how to address “what can go wrong?” in a model this general. Usually I threat model specific systems or technologies, where the answers are more crisp. Perhaps a way to break it out would be:

  1. What is a Seattleite’s day?
  2. What data is collected, how, and by whom? What models can we create to help us understand? Is there a good balance between specificity and generality?
  3. What can go wrong? (There are interesting variations in the answer based on who the data is about)
  4. What could we do about it? (The answers here vary based on who’s collecting the data.)
  5. Did we do a good job?

My main goal is to come away from the exercise with a useful model of the privacy threats to Seattleites. If we can, I’d also like to understand how well this “flipped” approach works.

[As I’ve discussed this, there’s a lot of interest in what comes out and what it means, but I don’t expect that to be the main focus of discussion on Saturday. For example,] there are also policy questions like, “as the city takes action to collect data, how does that interact with its official goal to be a welcoming city?” I suspect that the answer is ‘not very well,’ and that there’s an opportunity for collaboration here across the political spectrum. Those who want to run a ‘welcoming city’ and those who distrust government data collection can all ask how Seattle’s new privacy program will help us.

In any event, a bunch of us will be getting together at the Delridge Library this Saturday, May 13, at 1PM to discuss for about 2 hours, and anyone interested is welcome to join us. We’ll just need two forms of ID and your consent to our outrageous terms of service. (Just kidding. We do not check ID, and I simply ask that you show up with a goal of respectful collaboration, and a belief that everyone else is there with the same good intent.)