I have a new article up at MISTI, “Threat Modeling: What, Why, and How?” If you’ve read my other work on threat modeling, the opening discussion of threat modeling versus threat intelligence is probably the part that will be most useful to you.
The Economist reports on the rise of dockless bike-sharing systems in China, along with the low-tech ways that the systems are getting hacked:
The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.
Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?
Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor some over others. One of those characteristics is how usefully an approach integrates into the business.
You can look at thoroughness by comparing the bikes to the BMW carshare program I discussed in “The Ultimate Stopping Machine.” Ferries triggering an anti-theft mechanism is somewhat surprising, and I wouldn’t dismiss a threat modeling technique, or criticize a team too fiercely, for missing it. That is, there’s nuance. (I’d be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.)
In the case of the dockless bikes, however, I would be skeptical of a technique that missed “reserving” a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labeled “dockless,” so you have an obvious contrast with a docked system.
When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration here and here.)
For these bikes, perhaps the most useful answer is not to focus on misbehavior, but to reward good behavior. The system wants bikes to be used, so why not reward people for leaving the bikes in a place where they’ll be picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)
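Here’s a minimal sketch of that surcharge idea; the threshold and multiplier are numbers I made up for illustration:

```python
# Sketch: make it expensive to monopolize one bike. The threshold (n) and
# surcharge multiplier are hypothetical, not from any real bikeshare.
def rental_price(base_fare: float, consecutive_checkouts: int,
                 n: int = 10, multiplier: float = 3.0) -> float:
    """Normal fare until a rider checks out the same bike n times in a
    row; after that, the fare goes up sharply."""
    if consecutive_checkouts >= n:
        return base_fare * multiplier
    return base_fare
```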
Photo by Viktor Kern.
In his “ground rules” article, Mordaxus gives us the phrase “stone soup security,” where everyone brings a little bit and throws it into the pot. I always try to follow Warren Buffett’s advice, to praise specifically and criticize in general.
So I’m not going to point to a specific talk I saw recently, in which someone talked about pen testing IoT devices, and stated, repeatedly, that the devices, and device manufacturers, should implement certificate pinning. They repeatedly discussed how easy it was to add a self-signed certificate and intercept communication, and suggested that the right way to mitigate this was certificate pinning.
They were wrong.
If I own the device and can write files to it, I can not only replace the certificate, but I can change a binary to replace a ‘Jump if Equal’ with a ‘Jump if Not Equal,’ and bypass your pinning. If you want to prevent certificate replacement by the device owner, you need a trusted platform which only loads signed binaries. (The interplay of mitigations and bypasses that gets you there is a fine exercise if you’ve never worked through it.)
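To make that concrete, here’s a minimal sketch (mine, not the speaker’s) of why a pin check can’t survive an owner who controls the code:

```python
# A typical pin check reduces to one comparison. In the compiled binary
# that's a single conditional branch, and an owner who can edit the file
# can invert it (the JE-to-JNE patch above) so every certificate "passes."
import hashlib

PINNED_FINGERPRINT = "0" * 64  # placeholder for the expected cert's SHA-256 hex digest

def cert_is_pinned(cert_der: bytes) -> bool:
    return hashlib.sha256(cert_der).hexdigest() == PINNED_FINGERPRINT
```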
When I train people to threat model, I use a diagram to talk about the interaction between threats, mitigations, and requirements.
Is it a requirement that the device protect itself from the owner? If you’re threat modeling well, you can answer this question. You work through these interplaying factors. You might start from a threat of certificate replacement and work through a set of difficult to mitigate threats, and change your requirements. You might start from a requirements question of “can we afford a trusted bootloader?” and discover that the cost is too high for the expected sales price, leading to a set of threats that you choose not to address. This goes to the core of “what’s your threat model?” Does it include the device owner?
Is it a requirement that the device protect itself from the owner? This question frustrates techies: we believe that we bought it, we should have the right to tinker with it. But we should also look at the difference between the iPhone and a PC. The iPhone is more secure. I can restore it to a reasonable state easily. That is a function of the device protecting itself from its owner. And it frustrates me that there’s a Control Center button to lock orientation, but not one to turn location on or off. But I no longer jailbreak to address that. In contrast, a PC that’s been infected with malware is hard to clean to a demonstrably good state.
Is it a requirement that the device protect itself from the owner? It’s a yes or no question. Saying yes has an impact on the physical cost of goods. You need a more sophisticated, more expensive boot loader. You have to do a bunch of engineering work which is both straightforward and exacting. If you don’t have a requirement to protect the device from its owner, then you don’t need to pin the certificate. You can take the money you’d spend on protecting it from its owner, and spend it on other features.
Is it a requirement that the device protect itself from the owner? Engineering teams deserve a crisp answer to this question. Without a crisp answer, security risks running them around in circles. (That crisp answer might be, “we’re building towards it in version 3.”)
Is it a requirement that the device protect itself from the owner? Sometimes when I deliver training, I’m asked if we can fudge, or otherwise avoid answering. My answer is that if security folks want to own security decisions, they must own the hard ones. Kicking them back, not making tradeoffs, not balancing with other engineering needs, all of these reduce leadership, influence, and eventually, responsibility.
Is it a requirement that the device protect itself from the owner?
Well, is it?
Some of us in the Seattle Privacy Coalition have been talking about creating a model of a day in the life of a citizen or resident of Seattle, and the ways data about them is collected and used, that is, the potential threats to their privacy. In a typical approach, we focus on a system that we’re building, analyzing, or testing. In this model, I think we need to focus on the people, the ‘data subjects.’
I also want to get away from the one by one issues, and help us look at the problems we face more holistically.
The general approach I use to threat model is based on four questions:
- What are you working on? (building, deploying, breaking, etc.)
- What can go wrong?
- What are you going to do about it?
- Did you do a good job?
I think that we can address the first by building a model of a day, and diving into specifics in each area. For example: get up, check the internet, go to work (by bus, by car, by bike, walking), have a meal out…
One question that we’ll probably have to work on is how to address “what can go wrong?” in a model this general. Usually I threat model specific systems or technologies, where the answers are more crisp. Perhaps a way to break it out would be:
- What is a Seattleite’s day?
- What data is collected, how, and by whom? What models can we create to help us understand? Is there a good balance between specificity and generality?
- What can go wrong? (There are interesting variations in the answer based on whom the data is about.)
- What could we do about it? (The answers here vary based on who’s collecting the data.)
- Did we do a good job?
My main goal is to come away from the exercise with a useful model of the privacy threats to Seattleites. If we can, I’d also like to understand how well this “flipped” approach works.
[As I’ve discussed this, there’s been a lot of interest in what comes out and what it means, but I don’t expect that to be the main focus of discussion on Saturday.] For example, there are also policy questions like, “as the city takes action to collect data, how does that interact with its official goal to be a welcoming city?” I suspect that the answer is ‘not very well,’ and that there’s an opportunity for collaboration here across the political spectrum. Those who want to run a ‘welcoming city’ and those who distrust government data collection can all ask how Seattle’s new privacy program will help us.
In any event, a bunch of us will be getting together at the Delridge Library this Saturday, May 13, at 1PM to discuss for about 2 hours, and anyone interested is welcome to join us. We’ll just need two forms of ID and your consent to our outrageous terms of service. (Just kidding. We do not check ID, and I simply ask that you show up with a goal of respectful collaboration, and a belief that everyone else is there with the same good intent.)
IANS members should have access today to a new faculty report I wrote, entitled “Threat Modeling in an Agile World.” Because it’s May the Fourth, I thought I’d share the opening:
As Star Wars reaches its climax, an aide approaches Grand Moff Tarkin to say, “We’ve analyzed their attack pattern, and there is a danger.” In one of the final decisions he makes, Tarkin brushes aside those concerns. Likewise, in Rogue One, we hear Galen Erso proclaim that the flaw is too subtle to be found. But that’s bunk. There are clearly no blow-out sections or baffles around the reactor: if there’s a problem, the station is toast. A first-year engineering student could catch it.
You don’t have to be building a Death Star to think about what might go wrong before you complete the project. The way we do that is by “threat modeling,” an umbrella term for anticipating and addressing the problems that a system might experience. Unfortunately, a lot of the advice you’ll hear about threat modeling makes it seem a little bit like the multi-year process of building a Death Star.
Threat modeling internet-enabled things is similar to threat modeling other computers, with a few special tensions that come up over and over again. You can start threat modeling IoT with the four-question framework:
- What are you building?
- What can go wrong?
- What are you going to do about it?
- Did we do a good job?
But there are specifics to IoT, and those specifics influence how you think about each of those questions. I’m helping a number of companies who are shipping devices, and I would love to fully agree that “consumers shouldn’t have to care about the device’s security model. It should just be secure. End of story.” I agree with Don Bailey on the sentiment, but frequently the tensions between requirements mean that what’s secure is not obvious, or that “security” conflicts with “security.” (I model requirements as part of ‘what are you building.’)
When I train people to threat model, I use a diagram to talk about the interaction between threats, mitigations, and requirements.
That interaction takes on a particular flavor when you’re working with internet-enabled things, and the flavor changes from device to device. Still, there are some important commonalities.
When looking at what you’re building, IoT devices typically lack sophisticated input devices like keyboards, or even buttons, and sometimes their local output is a single LED. One solution is to put a listening web server on the device and pay for a sticker with a unique admin password, which then drives customer support costs. Another solution is to have the device not listen, but reach out to your cloud service, and let customers register their devices to their cloud account. This has security, privacy, and COGS (cost of goods sold) tradeoffs. [Update: I said downsides, but it’s more that a different set of attack vectors becomes relevant for security. COGS is an ongoing commitment to operations; privacy depends on what’s sent or stored.]
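As a sketch of that second option, with every endpoint and field name invented for illustration:

```python
# "Call home" bootstrap: the device makes an outbound TLS connection and
# asks to be claimed, instead of listening locally with a sticker password.
# The URL, fields, and claim-code flow are hypothetical.
import requests

def register_device(device_id: str, device_secret: str) -> str:
    resp = requests.post(
        "https://cloud.example.com/v1/devices/register",  # hypothetical vendor service
        json={"device_id": device_id},
        auth=(device_id, device_secret),  # per-device secret installed at manufacture
        timeout=10,
    )
    resp.raise_for_status()
    # The customer types this code into their cloud account to claim the device.
    return resp.json()["claim_code"]
```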
When asking what can go wrong, your answers might include “a dependency has a vulnerability,” or “an attacker installs their own software.” An example of security being in tension with itself is the ability to patch your device yourself. If I want to be able to recompile the code for my device, or put a safe version of zlib on there, I ought to be able to do so. Except that if I can update the device, so can attackers building a botnet, and 99.n% of typical consumers of a smart lightbulb are not going to patch it themselves. So we get companies requiring signed updates. Then we get to the reality that most consumer devices last longer than most Silicon Valley companies. So we want to see a plan to release the key if the company is unable to deliver updates. And that plan runs into the realities of bankruptcy law: the signing key is an asset, it’s hard to value, and bankruptcy trustees are unlikely to give away your assets. There’s a decent pattern (allegedly from the world of GPU overclocking), which is that you can intentionally make your device patchable by downloading special software and moving a jumper. This requires a case that can be opened and reclosed, and a jumper or other DFU hardware input, and can be tricky on inexpensive or margin-strained devices.
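A minimal sketch of that jumper pattern, with the platform-specific checks left as stand-in functions:

```python
# Jumper-gated updates: vendor-signed images always install; unsigned
# images install only if the owner has physically set the jumper.
# verify_signature and jumper_is_set stand in for platform-specific code.
def update_allowed(image: bytes, signature: bytes,
                   verify_signature, jumper_is_set) -> bool:
    if verify_signature(image, signature):  # normal path: signed update
        return True
    return jumper_is_set()  # owner opted in to self-patching via the jumper
```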
That COGS downside is not restricted to security, but it has real security implications, which brings us to the question of what you are going to do about it. Consumers are not used to subscribing to their stoves, nor are farmers used to subscribing to their tractors. Generally, both have better things to do with their time than to understand the new models. But without subscription revenue, it’s very hard to make a case for ongoing security maintenance. And so security comes into conflict with consumer mental models of purchase.
In the IoT world, the question of did we do a good job becomes have we done a good enough job? Companies believe that there is a first-mover advantage, and this ties to points that Ross Anderson made long ago about the tension between security and economics. Good threat modeling helps companies get to those answers faster. Sharing the tensions helps us understand what the tradeoffs look like, and with those tensions understood, organizations can better define their requirements and get to a consistent level of security faster.
I would love to hear your experiences about other issues unique to threat modeling IoT, or where issues weigh differently because of the IoT nature of a system!
(Incidentally, this came from a question on Twitter; threading on Twitter is now worse than it was in 2010 or 2013, and since I have largely abandoned the platform, I can’t figure out who’s responding to what.)
Despite the title, end users are rarely the weak link in security. We often make impossible demands of them. For example, we want them to magically know things which we do not tell them.
Today’s example: in many browsers, this site will display as “Apple.com.” Go ahead. Explore that for a minute, and see if you can find evidence that it’s not. When I visit, the browser tells me it’s a secure site, and clicking on the word “Secure” shows me nothing amiss.
But it’s really www.xn--80ak6aa92e.com, which is a Punycode URL. Punycode is a way to encode other languages so they display properly. That’s good. What’s not good is that there’s no way to know that those are not the letters you think they are. Xudong Zheng explains the problem in more depth, and writes about how to address it in the short term:
A simple way to limit the damage from bugs such as this is to always use a password manager. In general, users must be very careful and pay attention to the URL when entering personal information. I hope Firefox will consider implementing a fix to this problem since this can cause serious confusion even for those who are extremely mindful of phishing.
I appreciate Xudong taking the time to suggest a fix. But I don’t think the right fix is to expect everyone to use a password manager.
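To see the confusion concretely, you can decode the label yourself; a minimal sketch using Python’s built-in punycode codec:

```python
# The "xn--" prefix marks an IDNA-encoded label; the rest is Punycode.
# Decoding it shows what the browser actually renders in the URL bar.
label = "xn--80ak6aa92e"
decoded = label[4:].encode("ascii").decode("punycode")
print(decoded)             # Cyrillic letters that render like 'apple'
print(decoded == "apple")  # False: visually confusable, but not equal
```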
When threat modeling, I talk about this as the interplay between threats and mitigations: threats should be mitigated and there’s a threat that any given mitigation can be bypassed. When dealing with people, there’s a simple test product security engineering can use. If you cannot write down the steps that a person must take to be secure, you have a serious problem. If you cannot write that list on a whiteboard, you have a serious problem. I’m not suggesting that there’s an easy or obvious fix to this. But I am suggesting that as long as browser makers are telling their users that looking at the URL bar is a security measure, they have to make that security measure resist attacks.
There’s a new paper from Mark Thompson and Hassan Takabi of the University of North Texas. The title captures the question:
Effectiveness Of Using Card Games To Teach Threat Modeling For Secure Web Application Developments
Gamification of classroom assignments and online tools has grown significantly in recent years. There have been a number of card games designed for teaching various cybersecurity concepts. However, effectiveness of these card games is unknown for the most part and there is no study on evaluating their effectiveness. In this paper, we evaluate effectiveness of one such game, namely the OWASP Cornucopia card game which is designed to assist software development teams identify security requirements in Agile, conventional and formal development processes. We performed an experiment where sections of graduate students and undergraduate students in a security related course at our university were split into two groups, one of which played the Cornucopia card game, and one of which did not. Quizzes were administered both before and after the activity, and a survey was taken to measure student attitudes toward the exercise. The results show that while students found the activity useful and would like to see this activity and more similar exercises integrated into the classroom, the game was not easy to understand. We need to spend enough time to familiarize the students with the game and prepare them for the exercises using the game to get the best results.
I’m very glad to see games like Cornucopia evaluated. If we’re going to push the use of Cornucopia (or Elevation of Privilege) for teaching, then we ought to be thinking about how well they work in comparison to other techniques. We have anecdotes, but to improve, we must test and measure.
There’s a really interesting podcast with Robert Hurlbut, Chris Romeo, and Tony UcedaVelez on the PASTA approach to threat modeling. The whole podcast is interesting, especially hearing Chris and Tony discuss how an organization went from STRIDE to CAPEC and back again.
There’s a section where they discuss the idea of “think like an attacker,” and Chris brings up some of what I’ve written (“‘Think Like an Attacker’ is an opt-in mistake.”) I think that both Chris and Tony make excellent points, and I want to add some nuance around the frame. I don’t think the opposite of “think like an attacker” is “use a checklist”; I think it’s “reason by analogy to find threats,” or “use a structured approach to finding threats.” Reasoning by analogy is, admittedly, hard for a variety of reasons, which I’ll leave aside for now. But reasoning by analogy requires that you have a group of abstracted threats, and that you consider ‘how does this threat apply to my system?’ You can use a structured approach such as STRIDE or CAPEC or an attack tree, or even an unstructured, unbounded set of threats (we call this brainstorming). That differs from a good checklist, in that the items in a good checklist have clear yes or no answers. For more on my perspective on checklists, take a look at my review of Gawande’s Checklist Manifesto.
In “Threat Modeling Crypto Back Doors,” I wrote:
In the same vein, the requests and implementations for such back-doors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.
It sounds like exactly what I predicted has occurred. As Joseph Menn reports in “Yahoo secretly scanned customer emails for U.S. intelligence:”
When Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users’ security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.
(I should add that I did not see anything like this at Microsoft, but had thought about how it might have unfolded as I wrote what I wrote in the book excerpt above.)
Crypto back doors are a bad idea, and we cannot implement them without breaking the security of the internet.