
Safety and Security in Automated Driving

“Safety First For Automated Driving” is a big, over-arching whitepaper from a dozen automotive manufacturers and suppliers.

One way to read it is that the automotive disciplines have strongly developed safety cultures, which generally do not consider cybersecurity problems. This paper is the cybersecurity specialists making the argument that cyber will fit into safety, and showing how to do so.

In a sense, this white paper captures a strategic threat model. What are we working on? Autonomous vehicles. What can go wrong? Security issues of all types. What are we going to do? Integrate with and extend the existing safety discipline. Give specific threat information and mitigation strategies to component designers.

I find some parts of it surprising. (I would find it more surprising if I were to look at a 150 page document and not find anything surprising.)

Contrary to the commonly used definition of a minimal risk condition (MRC), which describes only a standstill, this publication expands the definition to also include degraded operation and takeovers by the vehicle operator. Final MRCs refer to MRCs that allow complete deactivation of the automated driving system, e.g. standstill or takeover by the vehicle operator.

One of the “minimal risk” maneuvers listed (table 4) is an emergency stop. And while an emergency stop may certainly be a risk minimizing action in some circumstances, describing it as such is surprising, especially when presented in contrast to a “safe stop” maneuver.
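
To make the expanded definition concrete, here’s a minimal sketch of how one might encode it; the class, the names, and the final/non-final assignments are my reading of the quoted text, not a schema from the whitepaper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MinimalRiskCondition:
    """One entry in the expanded MRC taxonomy (names are illustrative)."""
    name: str
    is_final: bool  # Final MRCs allow complete deactivation of the ADS.

# The expanded definition covers more than a standstill: degraded operation
# and takeover by the vehicle operator count as MRCs too.
MRCS = [
    MinimalRiskCondition("degraded operation", is_final=False),
    MinimalRiskCondition("safe stop", is_final=True),       # a standstill
    MinimalRiskCondition("emergency stop", is_final=True),  # also a standstill
    MinimalRiskCondition("operator takeover", is_final=True),
]

print([m.name for m in MRCS if m.is_final])
# ['safe stop', 'emergency stop', 'operator takeover']
```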

It’s important to remember that driving is incredibly dangerous. In the United States in 2018, an estimated 40,000 people lost their lives in car crashes, and 4.5 million people were seriously injured. (I’ve seen elsewhere that a million of those are hospitalized.) A great many of those injuries are caused by either drunk or distracted drivers, and autonomous vehicles could save many lives, even if imperfect.

Which brings me to a part I really like: the ‘three dimensions of risk treatment’ figure (Figure 8, shown). Words like “risk” and “risk management” encompass a lot, and this figure is a nice side contribution of the paper.

I also like Figures 27 & 28 (shown), showing risks associated with a generic architecture. Having this work available allows systems builders to consider the risks to the various components they’re working on. It also lets us have a conversation about the systematic risks that exist, and allows security experts to ask “is this the right set of risks for systems builders to think about?”

A chart of system components in an autonomous vehicle

The Road to Mediocrity

Google Docs has chosen to red-underline the word “feasible” (which, as you can see, is in its dictionary) to suggest “possible.” “Possible,” possibly, was not the word I selected, because it means something different.

Good writing is direct. Good writing respects the reader. Good writing doesn’t tax the reader accidentally. It uses simple words when possible, effectively utilizing, no wait, “utilize” means you’re attempting to make your writing sound fancier than it need be. Never use “utilize” when it’s feasible to say “use.”

Good writing tools are unobtrusive. They don’t randomize the writer away from what they’re working on to try to figure out why in holy hell it’s wrong to be using the word feasible and why it needs to be replaced.

The road to mediocre writing is paved with over-simplification and distraction.

My current go-to is Pinker’s The Sense of Style. What else helps you think about writing?

Passwords Advice

Bruce Marshall has put together a comparison of OWASP ASVS v3 and v4 password requirements: OWASP ASVS 3.0 & 4.0 Comparison. This is useful in and of itself, and is also the sort of thing that more standards bodies should do, by default.

It’s all too common for a new standard to come out without clear diffs. It’s all too common for new standards to build closely on other standards without clearly saying what they’ve altered and why. This leaves the analysis of ‘what’s different’ to each user of the standards, and it increases the probability of errors. Both drive cost and waste effort. We should judge standards on whether they deliver these important contextual documents.
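
To show how cheap the mechanical part of a diff can be once a standards body decides to publish one, here’s a toy sketch using Python’s difflib; the requirement texts are invented stand-ins, not the real ASVS wording:

```python
import difflib

# Invented requirement texts, standing in for two versions of a standard.
v3 = ["Passwords must be at least 8 characters long.",
      "Password complexity rules must be enforced."]
v4 = ["Passwords must be at least 12 characters long.",
      "Password composition rules must not be enforced."]

for line in difflib.unified_diff(v3, v4, fromfile="standard-v3",
                                 tofile="standard-v4", lineterm=""):
    print(line)
```

The mechanical diff is the easy part; the analysis of why each requirement changed is the real work, and that’s exactly the context standards bodies should ship alongside the text.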

The White Box Essays (Book Review)

The White Box and its accompanying book, “The White Box Essays,” are a FANTASTIC resource, and I wish I’d had them available to me as I designed Elevation of Privilege and helped with Control-Alt-Hack.

The book is for people who want to make games, and it does a lovely job of teaching you how, including things like the relationship between story and mechanics, the role of luck, how the physical elements teach the players, and the tradeoffs that you as a designer make as you design, prototype, test, refine, and then get your game to market. On the go-to-market side, there are chapters on self-publishing, crowdfunding, and what needs to be on the box.

The Essays don’t tell you how to create a specific game; they show you how to think about the choices you can make, and their impact on the game. For example (a small simulation sketch follows this list):

Consider these three examples of ways randomness might be used (or not) in a design:

  • Skill without randomness (e.g., chess). With no random elements, skill is critical. The more skilled a player is, the greater their odds of winning. The most skilled player will beat a new player close to 100% of the time.
  • Both skill and randomness (e.g., poker). Poker has many random elements, but a skilled player is better at choosing how to deal with those random elements than an unskilled one. The best poker player can play with new players and win most of the time, but the new players are almost certain to win a few big hands. (This is why there is a larger World Series of Poker than World Chess Championship — new players feel like they have a chance against the pros at poker. Since more players feel they have a shot at winning, more of them play, and the game is more popular.)
  • Randomness without skill (e.g., coin-flipping). There is no way to apply skill to coin-flipping and even the “best” coin flipper in the world can’t do better than 50/50, even against a new player.
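
To see how differently these three regimes behave, here’s a small Monte Carlo sketch; the win-probability models are my own illustrative assumptions, not the book’s:

```python
import random

def pure_skill(skill_a, skill_b):
    """Chess-like: the more skilled player essentially always wins."""
    return skill_a > skill_b

def skill_plus_luck(skill_a, skill_b):
    """Poker-like: skill shifts the odds but never guarantees a win.
    Assumed model: P(A wins) = 0.5 + 0.4 * (skill_a - skill_b)."""
    return random.random() < 0.5 + 0.4 * (skill_a - skill_b)

def pure_luck(skill_a, skill_b):
    """Coin-flip: skill is irrelevant."""
    return random.random() < 0.5

def win_rate(game, trials=100_000, skill_a=0.9, skill_b=0.3):
    return sum(game(skill_a, skill_b) for _ in range(trials)) / trials

for game in (pure_skill, skill_plus_luck, pure_luck):
    print(f"{game.__name__}: expert wins {win_rate(game):.0%} of the time")
# Roughly 100%, 74%, and 50%: in the poker-like game the novice wins
# often enough to keep showing up, which is the book's point.
```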

The chapter goes on to talk about how randomness allows players to both claim credit and avoid blame, when players make choices about die rolls and the impact of those choices on gameplay, and a host of other tradeoffs.

The writing is solid: it’s as long as it needs to be, and then moves along (like a good game). What do you need to do, and why? How do you structure your work? If you’ve ever thought about designing a game, you should buy this book. But more than the book, there’s a boxed set, with meeples, tokens, cubes, and disks for you to use as you prototype. (And in the book is a discussion of how to use them, and the impact of your choices on production costs.)

I cannot say enough good things about this. After I did my first game design work, I went and looked for a collection of knowledge like this, and it didn’t exist. I’m glad it now does.

Image from Atlas Games.

Leave Those Numbers for April 1st

“90% of attacks start with phishing!*” “Cyber attacks will cost the world $6 trillion by 2020!”

We’ve all seen these sorts of numbers from vendors, and in a sense they’re April Fools’ Day numbers: you’d have to be a fool to believe them. But vendors quote insane numbers because there’s no downside and much upside. We need to create more and worse downside, and the road there lies through losing sales.

We need to call vendors on these numbers, and say “I’m sorry, but if you’d lie to me about that, what about the numbers you’re claiming that are hard to verify? The door is to your left.”

If we want to change the behavior, we have to change the impact of the behavior. We need to tell vendors that there’s no place for made-up numbers, debunked numbers, or unsupported numbers in our buying processes. If those numbers are in their sales and marketing material, they’re going to lose business for it.

* This one seems to trace back to analysis that 90% of APT attacks in the Verizon DBIR started with phishing, but APT and non-APT attacks are clearly different.

Incentives and Multifactor Authentication

It’s well known that adoption rates for multi-factor authentication are poor. For example, “Over 90 percent of Gmail users still don’t use two-factor authentication.”

Someone was mentioning to me that some games offer bonuses for enabling it. You get access to special rooms in Star Wars Old Republic. There’s a special emote in Fortnite. (Above)

How well do these incentives work? Are there numbers out there?

Threat Model Thursday: Architectural Review and Threat Modeling

For Threat Model Thursday, I want to use current events here in Seattle as a prism through which we can look at technology architecture review. If you want to take this as an excuse to civilly discuss the political side of this, please feel free.

Seattle has a housing and homelessness crisis. The cost of a house has risen nearly 25% above the 2007 market peak, and has roughly doubled in the 6 years since April 2012. Fundamentally, demand has outstripped supply and continues to do so. As a city, we need more supply, and that means evaluating the value of things that constrain supply. This commentary from the local Libertarian party lists some of them.

The rules on what permits are needed to build a residence, what housing is acceptable, or how many unrelated people can live together (no more than eight) are expressions of values and priorities. We prefer that the developers of housing not build housing, rather than build housing that doesn’t comply with the 32 pages of neighborhood design guidelines from the city’s Office of Planning and Community Development. We prefer to bring developers back after a building is built if the siding is not the agreed color. This is a choice that expresses the values of the city. And because I’m not a housing policy expert, I may miss some of the nuances, but I can see the effect of the policies overall.

Let’s transition from the housing crisis here in Seattle to the architecture crisis that we face in technology.

No, actually, I’m not quite there. The city killed micro-apartments, only to replace them with … artisanal micro-houses. Note the variation in size and shape of the two houses in the foreground. Now, I know very little about construction, but I’m reasonably confident that if you read the previous piece on micro-housing, many of the concerns regulators were trying to address apply to “True Hope Village,” construction pictured above. I want you, dear reader, to read the questions about how we deliver housing in Seattle, and treat them as a mirror into how your organization delivers software. Really, please, go read “How Seattle Killed Micro-Housing” and the “Neighborhood Design Guidelines” carefully. Not because you plan to build a house, but as a mirror of your own security design guidelines.

They may be no prettier.

In some companies, security is valued but has no authority to force decisions. In others, there are mandatory policies and review boards. We in security have fought for these mandatory policies because, without them, products ignored security. And similarly, we have housing rules because of unsafe, unsanitary, or overcrowded housing, and to reduce the blight of slums.

Security has design review boards which want to talk about the color of the siding a developer installed on the now-live product. We have design regulation which kills apodments and tenement housing, and then glorifies tiny houses. From a distance, these rules make no sense. I didn’t find them sensible, myself. I remember a meeting with the Microsoft Crypto board. I went in with some very specific questions regarding parameters and algorithms. Should we use this hash algorithm or that one? The meeting took less than five minutes to go off the rails with suggestions about non-cryptographic architecture. I remember shipping the SDL Threat Modeling Tool, going through the roughly five policy tracking tools we had at the time, and discovering at the very last minute that we had extra rules that were not documented in the documents I found at the start. It drives a product manager nuts!

Worse, rules expand. From the executive suite, if a group isn’t growing, maybe it can shrink? From a security perspective, the rapidly changing threat landscape justifies new rules. So there’s motivation to ship new guidelines that, in passing, spend a page explaining all the changes that are taking place. And then I see “Incorporate or acknowledge the best features of existing early to mid-century buildings in new development.” What does that mean? What are the best features of those buildings? How do I acknowledge them? I just want to ship my peer-to-peer blockchain features! And nothing in the design review guidelines is clearly objectionable. But taken as a whole, they create a complex, unpredictable, and thus expensive path to delivery.

We express values explicitly and implicitly. In Seattle, implicit expression of values has hobbled the market’s ability to address a basic human need. One of the reasons that embedding is effective is that embedded gatekeepers can advise and interpret in relation to real questions. Embedding expresses the value of collaboration, of dialogue over review. Does your security team express that security is more important than product delivery? Perhaps it is. When Microsoft stood down product shipping for security pushes, it was an explicit statement. Making your values explicit and debating prioritization is important.

What side effects do your security rules have? What rule is most expensive to comply with? What initiatives have you killed, accidentally or intentionally?

The DREAD Pirates

Then he explained the name was important for inspiring the necessary fear. You see, no one would surrender to the Dread Pirate Westley.

The DREAD approach was created early in the security pushes at Microsoft as a way to prioritize issues. It’s not a very good way; you see, no one would surrender to the Bug Bar Pirate, Roberts. And so the approach keeps going, despite its many problems.

There are many properties one might want in a bug ranking system for internally found bugs. They include:

  • A cool name
  • A useful mnemonic
  • A reduction in argument about bugs
  • Consistency between raters
  • Alignment with intuition
  • Immutability of ratings: the bug is rated once, and then is unlikely to change
  • Alignment with post-launch/integration/ship rules

DREAD certainly meets the first of these, and perhaps the next two. And it was an early attempt at a multi-factor rating of bugs. But DREAD brings many problems that newer approaches deal with.

The most problematic aspect of DREAD is that there’s little consistency, especially in the middle. What counts as a damage of 6 versus 7, or an exploitability of 6 versus 7? Without calibration, different raters will not be consistent. Each of the scores can be mis-estimated, and there’s a tendency to underestimate things like the discoverability of bugs in your own product.

The second problem is that you set an arbitrary bar for fixes; for example, everything above a 6.5 gets fixed. That means the distinction between a 6 and a 7 sometimes matters a lot. And the score does not relate to what needs to get fixed when a bug is found externally.

This illustrates why Discoverability is an odd thing to bring into the risk equation. You may have a discoverability of 1 on Monday, and 10 on Tuesday. (“Thanks, Full-Disclosure!”) So something could have a 5.5 DREAD score because of low discoverability but require a critical update. Suddenly the DREAD score of the issue is mutable. That makes it hard to use DREAD on an externally discovered bug, or one delivered via a bug bounty. So now you have two bug-ranking systems, and what do you do when they disagree? This happened to Microsoft repeatedly, and led to the switch to a bug bar approach.
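
To make that mutability concrete, here’s a sketch of the classic DREAD calculation, commonly presented as the mean of five 1–10 ratings, with a hypothetical 6.5 fix bar:

```python
from statistics import mean

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Classic DREAD: average five 1-10 ratings into one number."""
    return mean([damage, reproducibility, exploitability,
                 affected_users, discoverability])

FIX_BAR = 6.5  # the hypothetical "everything above this gets fixed" line

# Monday: the bug is obscure, so Discoverability is rated 1.
monday = dread_score(9, 8, 7, 7, discoverability=1)    # 6.4: below the bar
# Tuesday: it's on Full-Disclosure, so Discoverability becomes 10.
tuesday = dread_score(9, 8, 7, 7, discoverability=10)  # 8.2: critical

print(monday > FIX_BAR, tuesday > FIX_BAR)  # False True
# Nothing about the bug changed overnight; only its fame did.
```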

Affected users is also odd: does an RCE in Flight Simulator matter less than one in Word? Perhaps in the grand scheme of things, but I hope the Flight Simulator team is fixing their RCEs.

Stepping beyond the problems internal to DREAD to DREAD within a software organization: it only measures half of what you need to measure. You need to measure both the security severity and the fix cost. Otherwise, you run the risk of finding something with a DREAD of 10, but it’s a key feature (Office Macros), and so it escalates, and you don’t fix it. There are other issues which are easy to fix (S3 bucket permissions), where it doesn’t matter if you thought discoverability was low. This problem is shared by other systems, but DREAD’s focus on a crisp line (everything above a 6.5 gets fixed) exacerbates the issue.
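
Here’s a sketch of what rating on both dimensions might look like; the thresholds, scales, and example bugs are illustrative assumptions, not a standard:

```python
def priority(severity, fix_cost_weeks):
    """Prioritize on severity AND fix cost, not a single crisp line.
    severity: 0-10 rating; fix_cost_weeks: rough engineering estimate."""
    if severity >= 9 and fix_cost_weeks <= 1:
        return "fix immediately"     # critical and cheap: no excuse
    if severity >= 9:
        return "escalate"            # critical but expensive (a key feature?)
    if fix_cost_weeks <= 1:
        return "fix in next sprint"  # cheap hygiene, do it regardless
    return "backlog and revisit"

print(priority(10, 40))   # escalate -- the Office-macros shape of problem
print(priority(5, 0.5))   # fix in next sprint -- the S3-permissions shape
```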

For all these reasons, with regards to DREAD? Fully skeptical, and I have been for over a decade. If you want to fix these things, the thing to do is not create confusion by saying “DREAD can also be a 1-3 system!”, but to version and revise DREAD, for example, by releasing DREAD 2. I’m exploring a similar approach with DFDs.

I’m hopeful that this post can serve as a collection of reasons to not use DREAD v1, or properties that a new system should have. What’d I miss?

Joining the Continuum Team

I’m pleased to share the news that I’ve joined Continuum Security’s advisory board. I am excited about the vision that Continuum is bringing to software security: “We help you design, build and manage the security of your software solutions.” They’re doing so for both happy customers and a growing community. And I’ve come to love their framing: “Security is not special. Performance, quality and availability is everyone’s responsibility and so is security. After all, who understands the code and environment better than the developers and ops teams themselves?” They’re right. Security has to earn our seat at the table. We have to learn to collaborate better, and that requires software that helps the enterprise manage application security risk from the start of, and throughout, the software development process.

Not Bugs, but Features

“[Mukhande Singh] said ‘real water’ should expire after a few months. His does. ‘It stays most fresh within one lunar cycle of delivery,’ he said. ‘If it sits around too long, it’ll turn green. People don’t even realize that because all their water’s dead, so they never see it turn green.’”
(Unfiltered Fervor: The Rush to Get Off the Water Grid, Nellie Bowles, NYTimes, Dec 29, 2017.)

So those things turning the water green? Apparently, not bugs, but features. In unrelated “not understanding food science” news, don’t buy the Mellow sous vide machine. Features.
