IoT Security Workshop (Seattle, August)

Jean Camp and Yoshi Kohno are organizing an interesting upcoming workshop at the University of Washington on “Best Practices In The IoT:”

Our agenda begins with a presentation on the Federal Government initiatives in
the IoT. When collecting the reading materials for emerging standards, we found
nearly a thousand pages once all governmental materials are brought together… The product of the workshop will be a summary document identifying (i) a consensus set of graduated best practices for security and privacy for IoT in the home, and (ii) any gaps where best practices cannot yet be identified.

(I believe that the workshop organizers might agree with my reservations about the term “best practices,” but are driven by funders to use it.)

Also, they are searching for a few more sponsors if you can help in that department.

Bicycling and Threat Modeling

[Photo: bikeshare bicycles]

The Economist reports on the rise of dockless bike-sharing systems in China, along with the low-tech ways that the system is getting hacked:

The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.

Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?

Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor some over others. One of those characteristics is useful integration into the business.

You can look at thoroughness by comparing the bikes to the BMW carshare program I discussed in “The Ultimate Stopping Machine.” I think that ferries triggering an anti-theft mechanism is a genuine surprise, and I wouldn’t dismiss a threat modeling technique, or criticize a team too fiercely, for missing it. That is, there’s nuance. (I’d be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.)

In the case of the dockless bikes, however, I would be skeptical of a technique that missed “reserving” a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labelled “dockless,” which sets up a natural contrast with a docked system.

When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration here and here.)

For these bikes, perhaps the most useful answer is not to focus on misbehavior, but to reward good behavior. The system wants bikes to be used, so why not reward people for leaving the bikes in a place where they’re picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)
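
As a sketch of how such an incentive rule might be wired up (everything here is hypothetical: the function, the numbers, and the 30-minute “picked up soon” threshold are mine, not anything a bikeshare operator has published):

    def ride_price(base_fare, consecutive_same_bike, prior_idle_minutes):
        """Hypothetical incentive pricing for a dockless bikeshare ride."""
        price = base_fare
        # Reward good behavior: discount riders whose previous drop-off
        # spot got the bike picked up again quickly.
        if prior_idle_minutes < 30:
            price *= 0.8
        # Discourage "reserving" a bike: after N consecutive checkouts of
        # the same bike, each further ride gets progressively pricier.
        N = 10
        if consecutive_same_bike > N:
            price *= 1.5 ** (consecutive_same_bike - N)
        return round(price, 2)

    # The 12th consecutive ride on the same bike costs 2.25x the base fare.
    print(ride_price(1.00, consecutive_same_bike=12, prior_idle_minutes=45))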

Photo by Viktor Kern.

Bicycling and Risk

A study found that those who cycle to work have a net 41% lower risk of premature death. Now, when I read that headline my first thought was that it was 100 people over 6 months and a statistical fluke. But no, they followed a quarter million Britons for 5 years.

[Photo: bike commuter]

Now, it’s not obvious that it’s causal. Perhaps those who are healthier choose to ride to work? But it seems reasonable to assume that getting a bunch of exercise, fresh air, and adrenaline rushes as distracted drivers read their timelines while driving could lead to better health.

The paper is “Association between active commuting and incident cardiovascular disease, cancer, and mortality: prospective cohort study,” and a press discussion is at “Cycling to work may cut your risk of premature death by 40%.”

Photo by Jack Alexander.

Maintaining & Updating Software

In the aftermath of WannaCry, there’s a lot of discussion of organizations not updating their systems. There are two main reasons organizations don’t update the operating systems they run: compatibility and training. Training is simpler — you have to train people about the changes to the Start Menu to move them to Windows 8, and that’s expensive. (I sometimes worked with salespeople when I was at Microsoft; the company could have managed these transitions much better than it has.)

Compatibility is harder. In his excellent blog post on “Who Pays?,” Steve Bellovin discusses how “achieving a significant improvement in a product’s security generally requires a new architecture and a lot of changed code. It’s not a patch, it’s a new release.” There are substantial changes to the ways memory is managed and laid out between versions, including ASLR, DEP, CFG, etc. There are many such changes, and seeing how they impact real programs is hard. That’s part of the reason Microsoft released the Enhanced Mitigation Experience Toolkit (EMET).
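
As a tiny illustration of the kind of change involved, here’s a minimal sketch (assuming a system with ASLR enabled, and using Python’s ctypes only as a convenient way to see a raw address):

    import ctypes

    # Allocate a buffer and print its address. Under ASLR the address
    # changes from run to run, so any code that caches or hard-codes
    # addresses (as some older programs effectively did) breaks.
    buf = ctypes.create_string_buffer(16)
    print(hex(ctypes.addressof(buf)))

Run it twice and compare the output: the addresses differ, and that variability is exactly what some legacy code was never written to tolerate.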

[Photo: rusting car]

This doesn’t just apply to platforms; it also applies to libraries. (For example, see Jon Callas, “Apple and OpenSSL.”)

Even when compatibility is generally very high, someone needs to test the code to see if it works, and that costs money. It costs a lot more money if you don’t have test code, test documentation (YAGNI!), or if, umm, your test code has dependencies on libraries that don’t work on the new platform… It is unlikely that re-certifying on a new platform is less than weeks of work, and for larger products, it could easily extend to person-years of work, to maintain software that’s already been sold. The costs are non-trivial, which brings me back to Steve Bellovin’s post:

There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we’ll pay, as a society, for security failures.

This is a fair summary, and I want to add two points.

First, while it remains fashionable to bash Microsoft for all the world’s security woes, there is a far bigger problem: open source, which is usually released without any maintenance plan. (My friends at the Core Infrastructure Initiative are working on this problem.) Open source is also hard to fit into any of Bellovin’s payment models, for several reasons:

  • Code is speech. The United States rarely imposes liability on people for speaking, and it seems downright perverse to impose more liability on them because they let others use their words.
  • There may not be an organization, or the author of the code may have explicitly disclaimed that they’re responsible. If there is one, and we as a society suddenly impose unexpected costs on it, that might inhibit future funding of open source. (As an example, the author of Postfix was paid by IBM for a while. Does IBM have responsibility for Postfix, now that he’s left and gone to Google?) How does the “releasing code” calculus change if you’re required to maintain it forever?
  • The Open Source Definition prohibits discrimination against fields of endeavor, and requires licenses be technology neutral. So it seems hard to release an open source library and forbid the use of code in long-lived consumer goods.
  • What if Bob makes a change to Alice’s code, and introduces a security bug in a subtle way? What if Alice didn’t document that the code was managing a security issue? Does she need to fix it?

Second, the costs to society will not be evenly distributed: they’re going to fall on sectors with less software acumen, and places where products are repaired more than they’re replaced, which tend to be the poorer places and countries.

[Update: Ross Anderson blogs about a new paper that he wrote with Éireann Leverett and Richard Clayton. The paper is focused more on the regulatory challenge that maintaining and updating software provokes than on the economics.]

Photo by Pawel Kadysz.

The Ultimate Stopping Machine?

[Photo: a metal immobilizer around a tire]

Security is hard in the real world. There’s an interesting story on Geekwire, “BMW’s ReachNow investigating cases of cars getting stuck on Washington State Ferries.” The story:

a ReachNow customer was forced to spend four hours on the Whidbey Island ferry this weekend because his vehicle’s wheels were locked, making the vehicle immovable unless dragged. The state ferry system won’t let passengers abandon a car on the ferry because of security concerns.

BMW’s response:

We believe that the issue is related to a security feature built into the vehicles that kicks in when the car is moving but the engine is turned off and the doors are closed.

I first encountered these immobilizing devices on a friend’s expensive car in 1999 or so. The threat is thieves equipped with a tow truck. It’s not super-surprising to discover that a service like ReachNow, where “random” people can get into a car and drive it away, will have tracking devices in those cars. It’s a little more surprising that there are immobilizers in them.

Note the competing definitions of security at work in the two quotes above:

  • BMW is worried about theft.
  • The state ferry system is worried about car bombs.
  • Passengers might worry about being detained next to a broken car, or about bugs in the immobilization technology. What if that kicks in on the highway because “a wire gets loose”?

In “The Evolution of Secure Things,” I wrote:

It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Surprise! There’s a way to move a vehicle a long distance with the engine off, and it’s not a tow truck!

Real products, introduced into the real world, will often involve surprises like this. One characteristic of a good security architecture is that there’s the right degree of adjustability in the product, and judging that is still a matter of engineering experience.

Similarly, one of the lessons of entrepreneurship is that the problems you experience are often surprising. Investors look for flexibility in the leaders they back because they know that they’ll be surprised along the way.

Adam & Chris Wysopal webcast

(Today) Wednesday, May 24th, 2017 at 1:00 PM EDT (17:00:00 UTC), Chris Wysopal and I are doing a SANS webcast, “Choosing the Right Path to Application Security.” I’m looking forward to it, and hope you can join us!

Update: the webcast is now archived, and the white paper associated with it, “Using Cloud Deployment to Jump-Start Application Security,” is in the SANS reading room.

Certificate pinning is great in stone soup

In his “ground rules” article, Mordaxus gives us the phrase “stone soup security,” where everyone brings a little bit and throws it into the pot. I always try to follow Warren Buffett’s advice, to praise specifically and criticize in general.

So I’m not going to point to a specific talk I saw recently, in which someone talked about pen testing IoT devices, and stated, repeatedly, that the devices, and device manufacturers, should implement certificate pinning. They discussed how easy it was to add a self-signed certificate and intercept communication, and suggested that the right way to mitigate this was certificate pinning.
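
For concreteness, certificate pinning amounts to something like the following sketch (a hypothetical client that pins the SHA-256 fingerprint of the server’s certificate; the names and the pinned value are mine, not from the talk):

    import hashlib, socket, ssl

    # Hypothetical pinned value: the SHA-256 fingerprint of the server
    # certificate we expect, baked into the firmware at build time.
    PINNED_SHA256 = "0f1e2d..."  # placeholder

    def connect_with_pin(host, port=443):
        # Chain validation is disabled here only to isolate the pin
        # check; real code would typically pin in addition to it.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("certificate does not match pinned value")
        return sock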

They were wrong.

If I own the device and can write files to it, I can not only replace the certificate, but I can change a binary to replace a ‘Jump if Equal’ with a ‘Jump if Not Equal,’ and bypass your pinning. If you want to prevent certificate replacement by the device owner, you need a trusted platform which only loads signed binaries. (The interplay of mitigations and bypasses that gets you there is a fine exercise if you’ve never worked through it.)
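
Concretely, that patch can be a single byte. Here’s a sketch (the filename and offset are hypothetical, and would in practice come from disassembling the pin check; on x86, a short JE is opcode 0x74 and JNE is 0x75):

    PATCH_OFFSET = 0x1A2B  # hypothetical: location of the JE in the check

    with open("firmware.bin", "rb") as f:
        data = bytearray(f.read())

    assert data[PATCH_OFFSET] == 0x74  # expect a short JE here
    data[PATCH_OFFSET] = 0x75          # flip it to JNE

    with open("firmware-patched.bin", "wb") as f:
        f.write(data)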

When I train people to threat model, I use this diagram to talk about the interaction between threats, mitigations, and requirements:

[Diagram: threats, mitigations, and requirements]

Is it a requirement that the device protect itself from the owner? If you’re threat modeling well, you can answer this question. You work through these interplaying factors. You might start from a threat of certificate replacement and work through a set of difficult to mitigate threats, and change your requirements. You might start from a requirements question of “can we afford a trusted bootloader?” and discover that the cost is too high for the expected sales price, leading to a set of threats that you choose not to address. This goes to the core of “what’s your threat model?” Does it include the device owner?

Is it a requirement that the device protect itself from the owner? This question frustrates techies: we believe that since we bought it, we should have the right to tinker with it. But we should also look at the difference between the iPhone and a PC. The iPhone is more secure. I can restore it to a reasonable state easily. That is a function of the device protecting itself from its owner. And it frustrates me that there’s a Control Center button to lock orientation, but not one to turn location on or off. But I no longer jailbreak to address that. In contrast, a PC that’s been infected with malware is hard to clean to a demonstrably good state.

Is it a requirement that the device protect itself from the owner? It’s a yes or no question. Saying yes has an impact on the physical cost of goods. You need a more expensive, sophisticated bootloader. You have to do a bunch of engineering work which is both straightforward and exacting. If you don’t have a requirement to protect the device from its owner, then you don’t need to pin the certificate. You can take the money you’d spend on protecting it from its owner, and spend that money on other features.

Is it a requirement that the device protect itself from the owner? Engineering teams deserve a crisp answer to this question. Without a crisp answer, security risks running them around in circles. (That crisp answer might be, “we’re building towards it in version 3.”)

Is it a requirement that the device protect itself from the owner? Sometimes when I deliver training, I’m asked if we can fudge, or otherwise avoid answering. My answer is that if security folks want to own security decisions, they must own the hard ones. Kicking them back, not making tradeoffs, not balancing with other engineering needs, all of these reduce leadership, influence, and eventually, responsibility.

Is it a requirement that the device protect itself from the owner?

Well, is it?

Well-deserved accolades

[Image: line drawing of Parisa Tabriz]

When I saw that Wired had created a list, “20 People Who Are Creating the Future,” I didn’t expect to see anyone in security on it.

I was proven wrong in a wonderful way — #1 on their list is Parisa Tabriz, under the headline “Put Humans First, Code Second.” A great choice, a well-deserved honor for Parisa, and a bit of a rebuke to those who want to focus on code vulnerabilities, and say “you can’t patch human stupidity.”