Threat Modeling “App Democracy”

"Direct Republican Democracy?" is a fascinating post at PrawfsBlawg, a collective of law professors. In it, Michael T. Morley describes a candidate for Boulder City Council with a plan to vote "the way voters tell him," and discusses how that might not really represent what people want, and how it differs from (small-r) republican government. Worth a few moments of your time.

Threat Modeling and Architecture

"Threat Modeling and Architecture" is the latest in a series at Infosec Insider.

After I wrote my last article on Rolling out a Threat Modeling Program, Shawn Chowdhury asked (on LinkedIn) for more information on involving threat modeling in the architecture process. It’s a great question, except it involves the words “threat,” “modeling,” and “architecture.” And each of those words, by itself, is enough to get some people twisted around an axle.

Continue reading “Threat Modeling and Architecture”

Open for Business

Recently, I was talking to a friend who wasn’t aware that I’m consulting, so I wanted to share a bit about my new consulting life!

I’m consulting for companies of all sizes and in many sectors. The services I’m providing include threat modeling training, engineering and strategy work, often around risk analysis or product management.

Some of the projects I’ve completed recently include:

  • Threat modeling training – Engineers learn how to threat model, and how to make threat modeling part of their delivery. Classes range from 1 to 5 days, and are customized to your needs.
  • Process re-engineering for a bank – Rebuilt their approach to a class of risks, increasing security consistently and productively across the org.
  • Feature analysis for a security company – Identified market needs, determined which features fit those needs, and created a compelling and grounded story to bring the team together.

If you have needs like these, or other issues where you think my skills and experience could help, I’d love to hear from you. And if you know someone who might, I’m happy to talk to them.

I have a to-the-point website at associates.shostack.org and some details of my threat modeling services are at associates.shostack.org/threatmodeling.

Star Wars, Star Trek and Getting Root on a Star Ship

It’s time for some Friday Star Wars blogging!

Reverend Robert Ballecer, SJ tweeted: “as a child I learned a few switches & 4 numbers gives you remote code ex on a 23rd century starship.” I responded, asking “When attackers are on the bridge and can flip switches, how long a password do you think is appropriate?”

It went from there, but I’d like to take this opportunity to propose a partial threat model for 23rd century starships.

First, a few assumptions:

  • Sometimes, officers and crewmembers of starships die, are taken prisoner, or are otherwise unable to complete their duties.
  • It is important that the crew can control the spaceship, including software and computer hardware.
  • Unrestricted physical access to the bridge means you control the ship (with possible special cases, and of course the Holodeck, because lord forgive me, they need to shoot a show every week. Scalzi managed to get a surprisingly large amount from this line of inquiry in Redshirts. But I digress.)

I’ll also go so far as to say that as a derivative of the assumptions, the crew may need a rapid way to assign “Captain” privileges to someone else, and starship designers should be careful to design for that use case.

So the competing threats here are denial of service (and possibly denial of future service) and elevation of privilege. There’s a tension between designing for availability (anyone on the bridge can assume command relatively easily) and proper authorization. My take was that the attackers on the bridge are already close to winning, and so defenses which impede replacing command authority are a mistake.
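To make that tension concrete, here’s a toy sketch of a transfer-of-command check. It’s entirely hypothetical (Python, invented names and rules, nothing from canon): anyone physically on the bridge can be granted command quickly, but it takes two other officers present to agree, and remote requests are refused outright.

from dataclasses import dataclass

# Toy model only: favors availability for people on the bridge,
# while refusing any remote path to command authority.
@dataclass
class Officer:
    name: str
    on_bridge: bool

def transfer_command(candidate: Officer, crew: list[Officer]) -> bool:
    """Grant command if the candidate and at least two other officers
    are physically present on the bridge; remote requests always fail."""
    if not candidate.on_bridge:
        return False
    approvers = [o for o in crew if o.on_bridge and o.name != candidate.name]
    return len(approvers) >= 2

crew = [Officer("first officer", True), Officer("helm", True), Officer("ops", True)]
print(transfer_command(Officer("acting captain", True), crew))    # True
print(transfer_command(Officer("remote attacker", False), crew))  # False

The point isn’t the specific rule; it’s that the design keeps replacing command authority easy for people who are already trusted to be on the bridge, without opening a remote path.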

Now, in responding, I thought that “flipping switches” meant physically being there, because I don’t recall the episode he’s discussing. But in further conversation, it became clear that the switches can be flipped remotely, which dramatically alters the need for a defense.

It’s not clear what non-dramatic requirement such remote switch flipping serves, and so on balance, it’s easy to declare that the added risk is high and that we should not have remote switch flipping. It is always easy to declare that the risk is high, but here I have the advantage that there’s no real product designer in the room arguing for the feature. If there were, we would clarify the requirement, and then probably engineer some appropriate defenses, such as exponential backoff for remote connections. Of course, in the future, with layers of virtualization, what counts as a remote connection may be tricky to determine in software.
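For instance, here is a minimal sketch of exponential backoff on remote authentication attempts; the parameters and the class are mine, purely illustrative:

import time

# Sketch: each consecutive failed remote attempt doubles the enforced
# delay, up to a cap. A local, on-bridge console would not go through
# this path at all.
class RemoteAuthGate:
    def __init__(self, base_delay: float = 1.0, max_delay: float = 3600.0):
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.failures = 0

    def attempt(self, credentials_ok: bool) -> bool:
        if self.failures:
            delay = min(self.base_delay * 2 ** (self.failures - 1), self.max_delay)
            time.sleep(delay)
        if credentials_ok:
            self.failures = 0
            return True
        self.failures += 1
        return False

After a handful of failures the delay stretches into minutes and then hours, which blunts remote guessing without slowing down anyone legitimately at the console.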

Which brings me to another tweet, by Hongyi Hu, who said he was “disappointed that they still use passwords for authentication in the 23rd century. I hope the long tail isn’t that long! 😛” What can I say but, “we’ll always have passwords.” We’ll just use them for less.

As I’ve discussed, the reason I use Star Wars rather than Star Trek in my teaching and examples is that no one is confused about the story in the core movies. Here, I made precisely that mistake.

Image: The Spaceship Discovery, rendered by Trekkie5000. Alert readers will recall issues that could have been discovered with better threat modeling.

Organizing threat modeling magic

I was inspired to develop and share my thoughts after Adam’s previous post (magical approaches to threat modeling) regarding threat selection and prediction. Since a 140-character limit quickly annoys me, Adam gave me an opportunity to contribute on his blog; thanks to him, I can now explain how I believe in magic during threat modeling.

I have noticed that most of what I do, because it is timeboxed due to the constraints of carbon-based lifeforms, needs to be a finite selection from what appears to me to be an infinite array of possibilities. I also enjoy pulling computer-related magic tricks, or guesses, because it’s amusing and more engaging than reading a checklist. Magic, in this case, is either pure luck or based on some skill the spectators can’t see. I like it when I think I have both.

During the selection phase of what to do, a few tradeoffs have been proposed, such as coverage, time, and skills required. Those are attack-based and come from knowledge of what an attacker can do. While I think those effectively describe the selection of granular technical efforts, I prefer to look at the attacker’s motivations rather than the constraints he’ll face. And for all that, I have a way of organizing it and showing it.

 

Attack Tree

When I think about the actual threats to a system, I don’t see a list, but rather a tree. That tree has the ultimate goals on top, and then descends into sub-goals that break down how you get there. It finally ends in leaves, which are the vulnerabilities to be exploited.

Here’s an unfinished example for an unnamed application:

A fun thing to do with a tree is to apply a weight to each branch. In this case the numbers represent attacker-made tradeoffs and are totally arbitrary.

If you keep it relatively consistent with itself, you end up with a workable weighting system. For this example, let’s say the weight is the amount of effort you estimate an attack takes. You can sum the branches in the tree and get sub-goal weights without having to think about them.

And from that we can get a sum for the root goals:
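Here’s a minimal sketch of that roll-up in Python; the goals and the effort numbers are made up for illustration, not taken from the tree in the figures:

# Leaf values are arbitrary "attacker effort" estimates; sub-goal and
# root-goal weights are just the sums of their children.
attack_tree = {
    "Steal customer data": {
        "Compromise the application": {
            "SQL injection": 3,
            "Broken access control": 5,
        },
        "Compromise an operator": {
            "Phish an admin": 4,
            "Bribe an insider": 8,
        },
    },
}

def weight(node):
    """Leaves carry their own weight; inner nodes sum their children."""
    if isinstance(node, dict):
        return sum(weight(child) for child in node.values())
    return node

for root, subtree in attack_tree.items():
    print(root, "=", weight(subtree))
    for subgoal, children in subtree.items():
        print(" ", subgoal, "=", weight(children))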

But then how do I choose to prioritize or just work on something?

I could just say: well, I’m going to do the easiest things first, maybe because finding a SQLi in the application is easier than finding a slow API request, so better to start looking at that.

But when it comes to the decision, I often fall back on the most common human behavior: just don’t make it myself.

With the help of the tree, I just let the actual business reality do the selection of which root goals to pick. By that I mean the literal definition of reality, although nowadays people seem to forget what it really means:

“reality · noun · 1. the world or the state of things as they actually exist, as opposed to an idealistic or notional idea of them.”
– Google Almighty

I never ask the business line if they think they’ll have SQLi, but rather whether they worry more about denial of service or information stealing.

One advantage of that is that those decisions are made at the root goals. The tree is a hierarchy: the higher level you are, the bigger impact you’ll have. It’s like spinning a big cog wheel versus a smaller one:

Image: three gears.

If you were to pick at each vulnerability one at a time, you’d spin your working wheel a lot while only advancing the root goal a little. Do the selection at the root goals instead, and you’ll see that its impact is far greater for about the same amount of time. That’s efficiency to me.

And that’s how I turn magic into engineering 😀

Of course, in order for it to be proper engineering, the next step would be to QA it. At that point, you can fetch all the checklists or threat repositories you can find, and verify that you covered everything in your tree. Simply add what you have missed, and then bask in the glory of perceived completeness.
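That QA step can be as simple as a set difference, assuming both your tree leaves and your checklist are just named threats (the names below are invented for illustration):

# Sketch: compare the leaves of your tree against a checklist or threat
# repository, and report what the tree missed. Both sets are made up.
tree_leaves = {"SQL injection", "Phish an admin", "Slow API request"}
checklist = {"SQL injection", "Phish an admin", "CSRF", "Default credentials"}

for threat in sorted(checklist - tree_leaves):
    print("not in the tree yet:", threat)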

 

For the curious practitioners, I used PlantUML to generate the tree examples seen above. The tool lets you define the tree textually using simple markup, and auto-balances it for you as you update it. A more detailed example can be found in my Threat Modeling Toolkit presentation.

Magical Approaches to Threat Modeling

I was watching a talk recently where the speaker said “STRIDE produces waaaay too many threats! What we really want is a way to quickly get the right threats!”*

He’s right and he’s wrong. There are exactly three ways to get to a short list of the most meaningful threats to a new product, system or service that you’re building. They are:

  • Magically produce the right list
  • Have experts who are so good they never even think about the wrong threat
  • Produce a list that’s too long and prune it

That’s it. (If you see a fourth, please tell me!)

Predictions are hard, especially about the future. It’s hard to know what’s going to go wrong in a system under construction, and it’s harder when that system changes because of your prediction.

So if we don’t want to rely on Harry Potter waving a wand, getting frustrated, and asking Hermione to create the right list, then we’re left with either trusting experts or over-listing and pruning.

Don’t get me wrong. It would be great to be able to wave a magic wand or otherwise rapidly produce the right list without feeling like you’d done too much work. And if you always produce a short list, then your short list is likely to appear to be right.

Now, you may work in an organization with enough security expertise to execute perfect threat models, but I never have, and none of my clients seem to have that abundance either. (Which may also be a Heisenproblem: no organization with that many experts needs to hire a consultant to help them, except to get all their experts aligned.)

Also, I find that when I don’t use a structure, I miss threats. I’ve noticed that I have a recency bias towards attacks I’ve seen lately, and a bias towards “fun” attacks (these days, spoofing, because I enjoy solving those). And so I use techniques like STRIDE per element to help structure my analysis.
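For what it’s worth, a STRIDE-per-element pass can be structured as a simple table lookup over your diagram. The sketch below is mine; the element-to-threat mapping is an approximation of the usual chart, and the toy diagram is invented:

# Sketch of a STRIDE-per-element pass: for each element in the diagram,
# enumerate the threat categories that usually apply to that kind of
# element. The mapping is approximate; tune it to your own chart.
STRIDE_PER_ELEMENT = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data flow": ["Tampering", "Information disclosure", "Denial of service"],
    "data store": ["Tampering", "Information disclosure", "Denial of service"],
}

diagram = [            # (element name, element type) -- a toy example
    ("browser", "external entity"),
    ("web app", "process"),
    ("web app <-> user db", "data flow"),
    ("user db", "data store"),
]

for name, kind in diagram:
    for threat in STRIDE_PER_ELEMENT[kind]:
        print(f"{name}: consider {threat}")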

It may also be that approaches other than STRIDE produce lists that have a higher concentration of interesting threats, for some definition of “interesting.” Fundamentally, there’s a set of tradeoffs you can make. Those tradeoffs include:

  • Time taken
  • Coverage
  • Skill required
  • Consistency
  • Magic pixie dust required

I’m curious, what other tradeoffs have you seen?

Whatever tradeoffs you may make, given a choice between overproduction and underproduction, you probably want to find too many threats, rather than too few. (How do you know what you’re missing?) Some of getting the right number is the skill that comes from experience, and some of it is simply the grindwork of engineering.

(* The quote is not exact, because I aim to follow Warren Buffett’s excellent advice of praise specifically, criticize generally.)

Photo: Magician, by ThaQeLa.

Threat Modeling Password Managers

There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman’s “AgileBits Isn’t Forcing 1Password Data to Live in the Cloud,” but also worth reading are Ken White’s “Who moved my cheese, 1Password?,” and “Why We Love 1Password Memberships,” by 1Password maker AgileBits. I’ve recommended 1Password in the past, and I’m not sure if I agree with AgileBits that “1Password memberships are… the best way to use 1Password.” This post isn’t intended to attack anyone, but to try to sort out what’s at play.

This is a complex situation, and you’ll be shocked, shocked to discover that I think a bit of threat modeling can help. Here’s my model of what we’re working on:

Image: password manager model.

Let me walk you through this: there’s a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I’m not drawing those boundaries. The two boundaries displayed are where the data and the “password manager.exe” live. Of course, this might not be an exe; it might be a .app, it might be JavaScript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there’s a boundary around that.
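Since the rest of the post compares where the code and the data live, here’s a small sketch enumerating those combinations; the names are mine, just restating the two boundaries from the figure:

from itertools import product

# The two boundaries that matter in this model: where the password
# manager's code lives, and where the password store lives. The rest of
# the post walks through what can go wrong in each combination.
code_locations = ["on your device", "fetched from a website"]
data_locations = ["local only", "synced to a cloud service"]

for code, data in product(code_locations, data_locations):
    print(f"code {code}, passwords {data}")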

What can go wrong?

If password storage is local, there is not a fat target at AgileBits. Even assuming the passwords are stored well (say, 10K iterations of PBKDF2), they’re more vulnerable if they’re stolen, and they’re easier to steal en masse than if they’re on your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than AgileBits is. That might be true, but that’s about detection; the first question is how likely an attacker is to break in. They’ll succeed against you and they’ll succeed against AgileBits, and they’ll get a boatload more from breaking into AgileBits. This is not intended as a slam of AgileBits; it’s an outgrowth of ‘assume breach.’) I believe AgileBits has a simpler operation than Dropbox, and fewer skilled staff in security operations than Dropbox. The simpler operation probably means there are fewer use cases, plugins, partners, etc., and means AgileBits is more likely to notice some attacks. To me, this nets out as neutral. Fleishman promises to explain “how AgileBits’s approach to zero-knowledge encryption… may be less risky and less exposed in some ways than using Dropbox to sync vaults.” I literally don’t see his argument; perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg’s comment about how they encrypt the passwords. I think of what they’ve done as a very strong mitigation, with the probably reasonable assumption that they haven’t bolluxed their key generation. See this 1Password Security Design white paper.]
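As an aside, deriving a key from a master password with PBKDF2 looks roughly like this; the salt, iteration count, and key length below are illustrative, not what 1Password actually uses:

import hashlib, os

# Sketch of password-based key derivation with PBKDF2-HMAC-SHA256.
# Parameters are illustrative only.
def derive_key(master_password: str, salt: bytes, iterations: int = 10_000) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        salt,
        iterations,
        dklen=32,  # 256-bit key
    )

salt = os.urandom(16)   # stored alongside the encrypted vault
key = derive_key("correct horse battery staple", salt)
print(key.hex())

The iteration count is what makes offline guessing expensive, which is why it shows up in the “even assuming they’re stored well” caveat above.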

To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you’re safe. Not so if your passwords are somewhere else and that somewhere else is compromised.

The next issue is: where’s the code? If the password manager executable is stored on your device, then to replace it, the attacker either needs to compromise your device or to install new code on it. An attacker who can install new code on your computer wins, which is why secure updates matter so much. An attacker who can’t get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, the ease of replacing it goes way up. There are two modes of attack: either you break into one of the web servers and replace the .js files with new ones, or you MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I’ll assume that 1Password uses certificate pinning, but I did not chase down where their JS is served.)
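Here is a rough sketch of what certificate pinning looks like from the client side; the hostname and fingerprint are placeholders, and in practice pinning is usually done in the TLS stack or by the platform rather than hand-rolled like this:

import hashlib
import ssl

# Sketch: fetch the server's certificate and compare its SHA-256
# fingerprint against a value baked into the client. Placeholders only.
PINNED_HOST = "vault.example.com"
PINNED_SHA256 = "replace-with-the-known-certificate-fingerprint"

def fingerprint_matches(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256

if not fingerprint_matches(PINNED_HOST):
    raise SystemExit("certificate does not match the pin; refusing to fetch code")

Pinning addresses the MITM path; it does nothing about the other path, where the server itself is compromised and serves malicious .js.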

Netted out, getting code from a website each time you run is a substantial drop in security.

What should we do about it?

So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20-character random password from your phone into something else is painful.) In their blog post, AgileBits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords when a password manager goes out of business is important.

Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby move your retirement account tables to the cloud for you? Can a manager control where you store a team vault?

This decision is complicated by walls-of-text descriptions. I wish AgileBits would do a better job of crisply and cleanly laying out the choices their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like this one as a good form, and the data should also be in each app as you set things up.) That’s not to say that AgileBits can’t continue to choose and recommend a default.

Does this help?

After years of working in these forms, I think it’s helpful as a way to break out these issues. I’m curious: does it help you? If not, where could it be better?

Umbrella Sharing and Threat Modeling

Image: shared umbrellas.

A month or so ago, I wrote “Bicycling and Threat Modeling,” about new approaches to bike sharing in China. Now I want to share with you “Umbrella-sharing startup loses nearly all of its 300,000 umbrellas in a matter of weeks.”

The Shenzhen-based company was launched earlier this year with a 10 million yuan investment. The concept was similar to those that bike-sharing startups have used to (mostly) great success. Customers use an app on their smartphone to pay a 19 yuan deposit fee for an umbrella, which costs just 50 jiao for every half hour of use.

According to the South China Morning Post, company CEO Zhao Shuping said that the idea came to him after watching bike-sharing schemes take off across China, making him realize that “everything on the street can now be shared.”

I don’t know anything about the Shanghaiist, but it’s quoting a story in the South China Morning Post, which closes:

Last month, a bicycle loan company had to close after 90 per cent of its bikes were stolen.