Bicycling and Threat Modeling

Bikeshare

The Economist reports on the rise of dockless bike sharing systems in China, along with the low tech ways that the system is getting hacked:

The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.

Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?

Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor one or the other. One of those characteristics is useful integration into business.

You can look at thoroughness by comparing the bikes to the BMW carshare program I discussed in “The Ultimate Stopping Machine.” (I think that ferries triggering an anti-theft mechanism is a genuinely surprising interaction, and I wouldn’t dismiss a threat modeling technique, or criticize a team too fiercely, for missing it. That is, there’s nuance. I’d be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.)

In the case of the dockless bikes, however, I would be skeptical of a technique that missed “reserving” a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labelled “dockless,” so you have an obvious contrast with a docked system.

When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration here and here.)

For these bikes, perhaps the most useful answer is not to focus on misbehavior, but to reward good behavior. The system wants bikes to be used, so why not reward people for leaving the bikes in places where they’re picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)
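That surcharge idea can be sketched in a few lines. This is a toy policy, not anything a real bikeshare runs; `BASE_FARE`, `SURCHARGE`, and `N` are made-up parameters:

```python
from collections import defaultdict

BASE_FARE = 1.0
SURCHARGE = 2.0
N = 10  # consecutive checkouts of the same bike before the surcharge kicks in

# history[user] is the list of bike IDs that user has checked out, in order
history = defaultdict(list)

def fare(user, bike):
    """Return the fare for this checkout, discouraging bike 'reservation'."""
    recent = history[user][-N:]
    price = BASE_FARE
    # If the user's last N checkouts were all this same bike, add a surcharge.
    if len(recent) == N and all(b == bike for b in recent):
        price += SURCHARGE
    history[user].append(bike)
    return price
```

The nice property of a rule like this is that ordinary riders never see it, while someone monopolizing a bike pays a rising cost for doing so.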

Photo by Viktor Kern.

The Ultimate Stopping Machine?

A metal immobilizer around a tire.

Security is hard in the real world. There’s an interesting story on Geekwire, “BMW’s ReachNow investigating cases of cars getting stuck on Washington State Ferries.” The story:

a ReachNow customer was forced to spend four hours on the Whidbey Island ferry this weekend because his vehicle’s wheels were locked, making the vehicle immovable unless dragged. The state ferry system won’t let passengers abandon a car on the ferry because of security concerns.

BMW’s response:

We believe that the issue is related to a security feature built into the vehicles that kicks in when the car is moving but the engine is turned off and the doors are closed.

I first encountered these immobilizing devices on a friend’s expensive car in 1999 or so. The threat is thieves equipped with a towtruck. It’s not super-surprising to discover that a service like ReachNow, where “random” people can get into a car and drive it away, will have tracking devices in those cars. It’s a little more surprising that there are immobilizers in them.

Note the competing definitions of security (emphasis added in both quotes above):

  • BMW is worried about theft.
  • The state ferry system is worried about car bombs.
  • Passengers might worry about being detained next to a broken car, or about bugs in the immobilization technology. What if that kicks in on the highway because “a wire gets loose”?

In “The Evolution of Secure Things,” I wrote:

It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Surprise! There’s a way to move a vehicle a long distance with the engine off, and it’s not a tow truck!

Real products, introduced into the real world, will often involve surprises like this. One characteristic of a good security architecture is that there’s the right degree of adjustability in the product, and judging that is still a matter of engineering experience.

Similarly, one of the lessons of entrepreneurship is that the problems you experience are often surprising. Investors look for flexibility in the leaders they back because they know that they’ll be surprised along the way.

Threat Modeling & IoT

Threat modeling internet-enabled things is similar to threat modeling other computers, with a few special tensions that come up over and over again. You can start threat modeling IoT with the four question framework:

  1. What are you building?
  2. What can go wrong?
  3. What are you going to do about it?
  4. Did we do a good job?

But there are specifics to IoT, and those specifics influence how you think about each of those questions. I’m helping a number of companies who are shipping devices, and I would love to fully agree that “consumers shouldn’t have to care about the device’s security model. It should just be secure. End of story.” I agree with Don Bailey on the sentiment, but frequently the tensions between requirements mean that what’s secure is not obvious, or that “security” conflicts with “security.” (I model requirements as part of ‘what are you building.’)

When I train people to threat model, I use this diagram to talk about the interaction between threats, mitigations, and requirements:

Threats mitigations requirements

That interaction takes on a particular flavor when you’re working with internet-enabled things, and it changes from device to device. Still, there are some important commonalities.

When looking at what you’re building, IoT devices typically lack sophisticated input devices like keyboards or even buttons, and sometimes their local output is a single LED. One solution is to put a web server on the device listening, and to pay for a sticker with a unique admin password, which then drives customer support costs. Another solution is to have the device not listen, but reach out to your cloud service, and let customers register their devices to their cloud account. This has security, privacy, and COGS tradeoffs. [Update: I said downsides, but it’s more that a different set of attack vectors becomes relevant in security, COGS becomes an ongoing commitment to operations, and privacy depends on what’s sent or stored.]
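As a sketch of the first option, per-device admin passwords might be generated at manufacturing time something like this (the function and field names here are invented for illustration; a real provisioning line would flash only a hash of the password onto the device):

```python
import secrets
import string

# Uppercase letters and digits are easy to read off a sticker.
ALPHABET = string.ascii_uppercase + string.digits

def provision_device(serial, length=10):
    """Generate a unique per-device admin password at manufacturing time.

    The password goes on a sticker in the box; nothing is shared across
    the fleet, so one leaked password compromises one device.
    """
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return {"serial": serial, "admin_password": password}
```

The point of the sketch is the property, not the code: unique-per-device secrets avoid the fleet-wide default-password problem, at the cost of the sticker, the support calls, and the manufacturing-line step.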

When asking what can go wrong, your answers might include “a dependency has a vulnerability,” or “an attacker installs their own software.” An example of security being in tension with itself is the ability to patch your device yourself. If I want to recompile the code for my device, or put a safe version of zlib on there, I ought to be able to do so. Except that if I can update the device, so can attackers building a botnet, and 99.n% of typical consumers of a smart lightbulb are not going to patch it themselves. So we get companies requiring signed updates. Then we get to the reality that most consumer devices last longer than most Silicon Valley companies. So we want to see a plan to release the signing key if the company is unable to deliver updates. And that plan runs into the realities of bankruptcy law: the signing key is an asset, it’s hard to value, and bankruptcy trustees are unlikely to give away your assets. There’s a decent pattern (allegedly from the world of GPU overclocking), which is to intentionally make your device patchable by downloading special software and moving a jumper. This requires a case that can be opened and reclosed, and a jumper or other DFU hardware input, and can be tricky on inexpensive or margin-strained devices.
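The “signed updates plus a physical jumper” pattern might look roughly like this decision logic. One big caveat, flagged in the comments: a real device would verify an asymmetric signature (say, Ed25519) against a vendor public key; HMAC stands in here only to keep the sketch standard-library-only.

```python
import hmac
import hashlib

# Stand-in for the vendor's signing key. A real device would hold only a
# *public* key and verify an asymmetric signature (e.g., Ed25519); a shared
# HMAC key is used here purely to keep the sketch dependency-free.
VENDOR_KEY = b"vendor-secret"

def sign(firmware: bytes) -> bytes:
    """Vendor-side: produce a tag over the firmware image."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def accept_update(firmware: bytes, signature: bytes, jumper_set: bool) -> bool:
    """Device-side: accept an update if it carries a valid vendor signature,
    OR if the owner has opened the case and moved the DFU jumper."""
    if jumper_set:
        # Physical presence overrides the signature check: an owner who can
        # open the case can load their own build; a remote botnet cannot.
        return True
    return hmac.compare_digest(sign(firmware), signature)
```

The security argument is carried entirely by the `jumper_set` branch: it converts “who may patch this device?” from a key-escrow problem into a physical-access problem.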

That COGS (cost of goods sold) downside is not restricted to security, but has real security implications, which brings us to the question of what are you going to do about it. Consumers are not used to subscribing to their stoves, nor are farmers used to subscribing to their tractors. Generally, both have better things to do with their time than to understand the new models. But without subscription revenue, it’s very hard to make a case for ongoing security maintenance. And so security comes into conflict with consumer mental models of purchase.

In the IoT world, the question of “did we do a good job?” becomes “have we done a good enough job?” Companies believe that there is a first mover advantage, and this ties to points that Ross Anderson made long ago about the tension between security and economics. Good threat modeling helps companies get to those answers faster. Sharing the tensions helps us understand what the tradeoffs look like, and with those tensions in view, organizations can better define their requirements and get to a consistent level of security faster.


I would love to hear your experiences about other issues unique to threat modeling IoT, or where issues weigh differently because of the IoT nature of a system!

(Incidentally, this came from a question on Twitter; threading on Twitter is now worse than it was in 2010 or 2013, and since I have largely abandoned the platform, I can’t figure out who’s responding to what.)

How Not to Design an Error Message


The voice shouts out: “Detector error, please see manual.” Just once, then again a few hours later. And when I did see the manual, I discovered that it means “Alarm has reached its End of Life.”

No, really. That’s how my fire alarm told me that it’s at its end of life: by telling me to read the manual. Why doesn’t it say “device has reached end of life”? That would be direct and to the point. But no. When you press the button, it says “please see manual.” Now, this was a 2009 device, so maybe, just maybe, there was a COGS issue in how much storage was needed.

But sheesh. Warning messages should be actionable, explanatory and tested. At least it was loud and annoying.

2017 and Tidal Forces

There are two great blog posts at Securosis to kick off the new year:

Both are deep and important and worth pondering. I want to riff on something that Rich said:

On the security professional side I have trained hundreds of practitioners on cloud security, while working with dozens of organizations to secure cloud deployments. It can take years to fully update skills, and even longer to re-engineer enterprise operations, even without battling internal friction from large chunks of the workforce…

It’s worse than that. Recently on Emergent Chaos, I talked about Red Queen Races, where you have to work harder and harder just to keep up.

In the pre-cloud world, you could fully update your skills. You could be an expert on Active Directory 2003, or Checkpoint’s Firewall-1. You could generate friction over moving to AD2012. You no longer have that luxury. Just this morning, Amazon launched a new rev of something. Google is pushing a new rev of its G-Suite to 5% of customers. Your skillset with the prior release is now out of date. (I have no idea if either really did this, but they could have.) Your skillset can no longer be a locked-in set of skills and knowledge. You need the meta-skills of modeling and learning. You need to understand what your model of AWS is, and you need to allocate time and energy to consciously learning about it.

That’s not just a change for individuals. It’s a change for how organizations plan for training, and it’s a change for how we should design training, as people will need lots more “what’s new in AWS in Q1 2017” training to augment “intro to AWS.”

Tidal forces, indeed.

The Dope Cycle and the Two Minutes Hate

[Updated with extra links at the bottom.]

There’s a cycle that happens as you engage on the internet. You post something, and wait, hoping, for the likes, the favorites, the shares, the kind comments to come in. You hit reload incessantly even though the site doesn’t need it, hoping to get that hit, that jolt, even a little sooner. That dopamine release.

A Vicious cycle of pain, cravings, more drugs, and guilt

Site designers refer to this by benign names, like engagement or gamification, and it doesn’t just happen on “social media” sites like Twitter or Instagram. It is fundamental to the structure of LinkedIn, of Medium, of StackExchange, of Flickr. We are told how popular the things we observe are, and we are told to want that popularity. Excuse me, I mean that influence. That reach. And that brings me to the point of today’s post: seven tips to increase your social media impactfulness. Just kidding.

Not kidding: even when you know you’re being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They’re caught up in a Red Queen Race, where they must engage faster and faster just to stay in place. And when you’re in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That’s dopamine, that’s adrenaline, that’s every hormone that’s been covered in Popular Psychology. It’s a dope cycle, and you can read that in every sense of the word dope.

This wanting is not innocent or harmless. Outrage, generating a stronger response, wins. Sexy, generating a stronger response, wins. Cuteness, in the form of awwws, wins. We are awash in messages crafted to generate strong emotion. More, we are awash in messages crafted to generate stronger emotion than the preceding or following message. This is not new. What is new is that the analytic tools available to their creators are so strong that the Red Queen Race is accelerating. (By the way, that’s bait for outraged readers to insist I misunderstand the Red Queen Race, generating views for this post.) The tools of 20th century outrage were crude and ineffective. Today’s outrage cycle over the House cancelling its cancellation of its ethics office is over, replaced by outrage over … well, it’s not yet clear what will replace it, but expect it to be replaced.

When Orwell wrote of the Two Minutes Hate, he wrote:

The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in. Within thirty seconds any pretense was always unnecessary. A hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledge hammer, seemed to flow through the whole group of people like an electric current, turning one even against one’s will into a grimacing, screaming lunatic. And yet the rage that one felt was an abstract, undirected emotion which could be switched from one object to another like the flame of a blowlamp.

I am reminded of Hoder’s article, “The Web We Have to Save” (4.4K hearts, 165 balloons, and no easy way to see on Medium how many sites link to it). Also of related interest is Good-bye to All That Twitter and “Seattle author Lindy West leaves Twitter, calls it unusable for ‘anyone but trolls, robots and dictators’” but I don’t think Twitter, per se, is the problem. Twitter has a number of aspects which make trolling (especially around gender and race issues, but not limited to them) especially emotionally challenging. Those are likely closely tied to the anticipation of positivity in “mentions”, fulfilled by hate. But the issues are made worse by site design that successfully increases engagement.

I don’t know what to do with this observation. I have tried to reduce use of sites that use the structures of engagement: removing them from my reading in the morning, taking their apps off my phone. But I find myself typing their URLs when I’m task switching. I am reluctant to orient around addiction, as it drags with it a great deal of baggage around free will and ineffective regulation.

But removing myself from Twitter doesn’t really address the problem of the two minutes hate, nor of the red queen race of dope cycles. I’d love to hear your thoughts on what to do about them.


[Update: Related, “Hacking the Attention Economy,” by danah boyd.]

[Update (8 Feb): Hunter Walk writes “Why Many Companies Mistakingly Think Trolls & Harassment Are Good for Business,” and I’d missed Tim Wu writing on “The Attention Merchants.”]

Diagrams in Threat Modeling

When I think about how to threat model well, one of the most important elements is how much people need to keep in their heads: the cognitive load, if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons that I’m fond of diagrams is that they allow the threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good; that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, whether it’s the pleasure of a hobby like knitting or the care of a chef preparing a fine meal.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboard

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work, it may be a conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone around.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?”
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
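The automatic-analysis bullet can be illustrated with a toy STRIDE-per-element pass: given the element types from a diagram, emit a baseline list of threats to investigate. The mapping below is the conventional STRIDE-per-element table; the element names in the test are made up, and a real tool (like the Microsoft Threat Modeling tool) does far more.

```python
# Toy STRIDE-per-element pass: map each diagram element type to the STRIDE
# categories conventionally applied to it, producing a baseline threat list.
STRIDE_PER_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_store": ["Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
}

def baseline_threats(elements):
    """elements: list of (name, type) pairs taken from a diagram.

    Returns (element name, threat category) pairs to which an analyst
    then adds detail.
    """
    return [(name, threat)
            for name, etype in elements
            for threat in STRIDE_PER_ELEMENT[etype]]
```

Structure like this is exactly the “tremendous aid to getting things done” mentioned above: nobody stares at a blank page asking “what could go wrong?”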

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?

The Evolution of Apple’s Differential Privacy

Bruce Schneier comments on “Apple’s Differential Privacy:”

So while I applaud Apple for trying to improve privacy within its business models, I would like some more transparency and some more public scrutiny.

Do we know enough about what’s being done? No, and my bet is that Apple doesn’t know precisely what they’ll ship, and aren’t answering deep technical questions so that they don’t mis-speak. I know that when I was at Microsoft, details like that got adjusted as a bigger pile of real data from real customer use informed things. I saw some really interesting shifts surprisingly late in the dev cycle of various products.

I also want to challenge the way Matthew Green closes: “If Apple is going to collect significant amounts of new data from the devices that we depend on so much, we should really make sure they’re doing it right — rather than cheering them for Using Such Cool Ideas.”

But that is a false dichotomy, and it would be silly even if it were not. It’s silly because we can’t be sure they’re doing it right until after they ship it and we can see the details. (And perhaps not even then.)

But even more important, the dichotomy is not “are they going to collect substantial data or not?” They are. The value organizations get from being able to observe their users is enormous. As product managers observe what A/B testing in their web properties means to the speed of product improvement, they want to bring that same ability to other platforms. Those that learn fastest will win, for the same reasons that first to market used to win.

Next, are they going to get it right on the first try? No. Almost guaranteed. Software, as we learned a long time ago, has bugs. As I discussed in “The Evolution of Secure Things:”

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Green (and Schneier) are right to be skeptical, and may even be right to be cynical. But we should not lose sight of the fact that Apple is spending rare privacy engineering resources to do better than Microsoft. Near as I can tell, this is an impressive delivery on the commitment to be the company that respects your privacy, and I say that believing there will be both bugs and design flaws in the implementation. Green has an impressive record of finding such flaws and calling Apple (and others) out on them, and I’m optimistic he’ll have happy hunting.

In the meantime, we can, and should, cheer Apple for trying.

Sneak peeks at my new startup at RSA

Confusion

Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long term (like a home) or liquid investments (like stocks and bonds) and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of control investments, and we’re building the first.
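To make the portfolio analogy concrete, here is a minimal, purely hypothetical data model (these names and categories are invented for illustration, not our actual product): control investments grouped into categories with costs, summarized the way an advisor groups assets into classes.

```python
from dataclasses import dataclass

@dataclass
class ControlInvestment:
    name: str
    category: str        # e.g., "preventive", "detective", "responsive"
    annual_cost: float   # licensing plus operations, in dollars

def portfolio_summary(controls):
    """Total annual spend per control category, the way a financial
    advisor breaks a portfolio into asset classes."""
    summary = {}
    for c in controls:
        summary[c.category] = summary.get(c.category, 0.0) + c.annual_cost
    return summary
```

Even a trivial rollup like this starts conversations: is all the spend detective and none responsive? Which category is the next dollar best spent in?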

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, in a comment or reach out over linkedin.

Kale Caesar

According to the CBC: “McDonald’s kale salad has more calories than a Double Big Mac.”


In a quest to reinvent its image, McDonald’s is on a health kick. But some of its nutrient-enhanced meals are actually comparable to junk food, say some health experts.

One of its new kale salads has more calories, fat and sodium than a Double Big Mac.

Apparently, McDonald’s is there not to braise kale, but to bury it in cheese and mayonnaise. And while that’s likely mighty tasty, it’s not healthy.

At a short-term level, this looks like good product management. Execs want salads on the menu? Someone’s being measured on sales of new salads, so they load them up with tasty, tasty fats. It’s effective at associating a desirable property of salad with the product.

Longer term, not so much. It breeds cynicism. It undercuts the ability of McDonald’s to ever change its image, or to convince people that its food might be a healthy choice.