Continuum Interview

Continuum has released a video of me and Stuart Winter-Tear in conversation at the Open Security Summit:

“At the recent Open Security Summit we had the great pleasure of interviewing Adam Shostack about his keynote presentation “A seat at the table” and the challenge of getting security involved in product and application design. We covered numerous topics from the benefits brought to business by threat modeling to pooping unicorns.”

Carpenter!

The decision in Carpenter v. United States is an unusually positive one for privacy. The Supreme Court ruled that the government generally can’t access historical cell-site location records without a warrant. (SCOTUS Blog links to the court documents.) The court put limits on the “third party” doctrine, and it will be fascinating to see how those limits play out.


As I said previously, I am thankful to the fine folks at the Knight First Amendment Institute at Columbia University for the opportunity to help with their technologists’ amicus brief in this case, and I’m glad to see that the third party doctrine is under stress. That doctrine has weakened the clear aims of the Fourth Amendment in protecting our daily lives against warrantless searches, as our lives have come to involve storing more of our “papers” outside our homes.

Image via the mobile pc guys, who have advice about how to check your location history on Google, which is one of many places where it may be captured. That advice might still be useful; it’s hard to tell whether the UI has changed, since I had turned off those features.

Threat Model Thursday: Architectural Review and Threat Modeling

For Threat Model Thursday, I want to use current events here in Seattle as a prism through which we can look at technology architecture review. If you want to take this as an excuse to civilly discuss the political side of this, please feel free.

Seattle has a housing and homelessness crisis. The cost of a house has risen nearly 25% above the 2007 market peak, and has roughly doubled in the 6 years since April 2012. Fundamentally, demand has outstripped supply and continues to do so. As a city, we need more supply, and that means evaluating the value of things that constrain supply. This commentary from the local Libertarian party lists some of them.

The rules on what permits are needed to build a residence, what housing is acceptable, or how many unrelated people can live together (no more than eight) are expressions of values and priorities. We prefer that developers build no housing rather than housing that doesn’t comply with the 32 pages of neighborhood design guidelines from the city’s Office of Planning and Community Development. We prefer to bring developers back after a building is built if the siding is not the agreed color. This is a choice that expresses the values of the city. And because I’m not a housing policy expert, I may miss some of the nuances, but I can see the effect of the policies overall.

Let’s transition from the housing crisis here in Seattle to the architecture crisis that we face in technology.

No, actually, I’m not quite there. The city killed micro-apartments, only to replace them with … artisanal micro-houses. Note the variation in size and shape of the two houses in the foreground. Now, I know very little about construction, but I’m reasonably confident that if you read the previous piece on micro-housing, many of the concerns regulators were trying to address apply to “True Hope Village,” construction pictured above. I want you, dear reader, to read the questions about how we deliver housing in Seattle, and treat them as a mirror into how your organization delivers software. Really, please, go read “How Seattle Killed Micro-Housing” and the “Neighborhood Design Guidelines” carefully. Not because you plan to build a house, but as a mirror of your own security design guidelines.

They may be no prettier.

In some companies, security is valued but has no authority to force decisions. In others, there are mandatory policies and review boards. We in security have fought for these mandatory policies because without them, products ignored security. And similarly, we have housing rules because of unsafe, unsanitary, or overcrowded housing, and to reduce the blight of slums.

Security has design review boards which want to talk about the color of the siding a developer installed on the now-live product. We have design regulation which kills apodments and tenement housing, and then glorifies tiny houses. From a distance, these rules make no sense. I didn’t find them sensible myself. I remember a meeting with the Microsoft Crypto board. I went in with some very specific questions about parameters and algorithms: should we use this hash algorithm or that one? The meeting took less than five minutes to go off the rails with suggestions about non-cryptographic architecture. I remember shipping the SDL Threat Modeling Tool: working through the roughly five policy-tracking tools we had at the time, then discovering at the very last minute that we had extra rules that weren’t documented in the documents I found at the start. It drives a product manager nuts!

Worse, rules expand. From the executive suite, if a group isn’t growing, maybe it can shrink? From a security perspective, the rapidly changing threat landscape justifies new rules. So there’s motivation to ship new guidelines that, in passing, spend a page explaining all the changes that are taking place. And then I see “Incorporate or acknowledge the best features of existing early to mid-century buildings in new development.” What does that mean? What are the best features of those buildings? How do I acknowledge them? I just want to ship my peer-to-peer blockchain features! Nothing in the design review guidelines is clearly objectionable, but taken as a whole, they create a complex, unpredictable, and thus expensive path to delivery.

We express values explicitly and implicitly. In Seattle, implicit expression of values has hobbled the market’s ability to address a basic human need. One of the reasons that embedding is effective is that embedded gatekeepers can advise and interpret in relation to real questions. Embedding expresses the value of collaboration, of dialogue over review. Does your security team express that security is more important than product delivery? Perhaps it is. When Microsoft stood down product shipping for security pushes, that was an explicit statement. Making your values explicit and debating prioritization is important.

What side effects do your security rules have? What rule is most expensive to comply with? What initiatives have you killed, accidentally or intentionally?

Threat Model Thursday: Chromium Post-Spectre

Today’s Threat Model Thursday is a look at “Post-Spectre Threat Model Re-Think,” from a dozen or so folks at Google. As always, I’m looking at this from the perspective of what we can learn, and to encourage dialogue around what makes for a good threat model.

What are we working on?

From the title, I’d assume Chromium, but there’s a fascinating comment in the introduction that this is wider: “any software that both (a) runs (native or interpreted) code from more than one source; and (b) attempts to create a security boundary inside a single address space, is potentially affected.” This is important, and in fact it’s why I decided to highlight the model. The intro also states, “we needed to re-think our threat model and defenses for Chrome renderer processes.” In the problem statement, they mention that there are other, out-of-scope variants, such as “a renderer reading the browser’s memory.”
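
To make that scope concrete, here’s a minimal sketch (my illustration, not Google’s) of the pattern they describe: one process running code from two sources, separated only by a software boundary. The names here are hypothetical.

```typescript
// Sketch of the affected pattern: untrusted code runs in the same address
// space as a secret, separated only by a language-level boundary.
const secret = "cross-origin session token"; // lives in the same process

function runUntrusted(source: string): unknown {
  // new Function gives the untrusted code no lexical access to `secret`...
  return new Function(`"use strict"; return (${source});`)();
}

console.log(runUntrusted("2 + 2")); // 4: the language boundary holds
// ...but post-Spectre, we assume speculative side channels let that code
// read `secret` anyway, because it shares the address space.
```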

It would be helpful to me, and probably others, to diagram this, both for the Chrome case (the relationship between browser and renderer) and the broader case of that other software, because the modern web browser is a complex beast. As James Mickens says:

A modern Web page is a catastrophe. It’s like a scene from one of those apocalyptic medieval paintings that depicts what would happen if Galactus arrived: people are tumbling into fiery crevasses and lamenting various lamentable things and hanging from playground equipment that would not pass OSHA safety checks. This kind of stuff is exactly what you’ll see if you look at the HTML, CSS, and JavaScript in a modern Web page. Of course, no human can truly “look” at this content, because a Web page is now like V’Ger from the first “Star Trek” movie, a piece of technology that we once understood but can no longer fathom, a thrashing leviathan of code and markup written by people so untrustworthy that they’re not even third parties, they’re fifth parties who weren’t even INVITED to the party…

What can go wrong?

There is a detailed set of ways that confidentiality breaks current boundaries. Most surprising to me is the claim that clock jitter is not as useful as we’d expect, and that even enumerating all the clocks is tricky! (WebKit seems to have a different perspective: that reducing timer precision is meaningful.)
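
To make the clocks point concrete, here’s a minimal sketch (mine, not from the Google document) of why clamping performance.now() alone doesn’t remove high-resolution timers: any free-running counter is a clock. This is the well-known SharedArrayBuffer counter trick, and it’s part of why browsers later gated SharedArrayBuffer behind cross-origin isolation.

```typescript
// An "implicit clock": a worker spinning on a shared counter. Reading the
// counter before and after an operation times it at roughly the resolution
// of one loop iteration, no matter how coarse performance.now() is.
const sab = new SharedArrayBuffer(4);
const ticks = new Int32Array(sab);

// Inline worker source: increment the shared counter in a tight loop.
const workerSrc = `
  onmessage = (e) => {
    const t = new Int32Array(e.data);
    for (;;) Atomics.add(t, 0, 1);
  };
`;
const worker = new Worker(
  URL.createObjectURL(new Blob([workerSrc], { type: "text/javascript" }))
);
worker.postMessage(sab);

// Time an operation by differencing the counter around it.
function ticksFor(fn: () => void): number {
  const before = Atomics.load(ticks, 0);
  fn();
  return Atomics.load(ticks, 0) - before;
}
```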

There is also an issue of when to pass autofilled data to a renderer, and a goal of “Ensure User Intent When Sending Data To A Renderer.” This is good, but usability may depend on normal people understanding that their renderer and browser are different. That’s mitigated by taking user gestures as evidence of intent, which seems like a decent balancing of usability and security. But as I watch people using devices, I see a lot of gesturing to explore and discover the rapidly changing meanings of gestures, both within applications and across different applications and platforms.
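
As a sketch of what gesture-as-intent gating could look like (the names and the freshness window are my assumptions, not Chrome’s actual implementation), the idea is to release sensitive data only within a short window after an explicit gesture:

```typescript
// Hypothetical sketch: gate autofill release on a recent user gesture.
const GESTURE_FRESHNESS_MS = 1_000; // assumption: how long intent stays fresh
let lastGestureAt = Number.NEGATIVE_INFINITY;

// Capture-phase listener, so the timestamp is set before page handlers run.
document.addEventListener(
  "pointerdown",
  () => { lastGestureAt = performance.now(); },
  { capture: true },
);

// Only hand sensitive autofill data over if the user just acted.
function mayReleaseAutofill(): boolean {
  return performance.now() - lastGestureAt < GESTURE_FRESHNESS_MS;
}
```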

What are we going to do about it?

As a non-expert in browser design, I’m not going to attempt to restate the mitigations. Each of the defensive approaches is presented with a clear discussion of its limitations and the current intent. This is both great to see and hard to follow for those not deep in browser design. That form of writing is probably appropriate, because otherwise the meaning gets lost in verbosity that’s not useful to the people most impacted. I would like to see more out-linking as an aid to those trying to follow along.

Did we do a good job?

I’m very glad to see Google sharing this, because we can see inside the planning of the architects, the known limits, and the demands on the supply chain (changes to compilers to reduce gadgets, changes to platforms to increase inter-process isolation), and, in the end: “we now assume any active code can read any data in the same address space. The plan going forward must be to keep sensitive cross-origin data out of address spaces that run untrustworthy code.” Again, that’s more than just browsers. If your defensive approaches, mitigations, or similar sections are this clear, you’re doing a good job.
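
As one concrete example of keeping data out of untrusted address spaces, a site can mark its responses with the Cross-Origin-Resource-Policy header, which asks the browser not to deliver them to cross-origin documents at all. A minimal sketch (the server and payload are hypothetical):

```typescript
// Minimal Node.js sketch: refuse to let cross-origin documents (and thus
// their processes) receive this response at all.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader("Cross-Origin-Resource-Policy", "same-origin");
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ balance: 1234 })); // hypothetical per-user data
}).listen(8080);
```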

‘EFAIL’ Is Why We Can’t Have Golden Keys

I have a new essay at Dark Reading, “‘EFAIL’ Is Why We Can’t Have Golden Keys.” It starts:

There’s a newly announced set of issues labeled the “EFAIL encryption flaw” that reduces the security of PGP and S/MIME emails. Some of the issues are about HTML email parsing, others are about the use of CBC encryption. All show how hard it is to engineer secure systems, especially when those systems are composed of many components that had disparate design goals.
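
To see why the CBC part matters, here’s a minimal sketch (mine, not from the essay) of CBC malleability: without integrity protection, flipping a bit in one ciphertext block garbles that block’s plaintext and flips exactly the corresponding bit in the next block, which is the kind of splicing EFAIL builds on.

```typescript
// CBC malleability demo with Node's crypto: an attacker who never sees the
// key can still flip chosen plaintext bits in the following block.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(16);
const iv = randomBytes(16);

const cipher = createCipheriv("aes-128-cbc", key, iv);
const ct = Buffer.concat([
  cipher.update("block one here!!block two here!!", "utf8"),
  cipher.final(),
]);

ct[0] ^= 0x01; // flip the low bit of the first ciphertext byte

const decipher = createDecipheriv("aes-128-cbc", key, iv);
const pt = Buffer.concat([decipher.update(ct), decipher.final()]);

// Block one decrypts to garbage; block two survives with exactly that bit
// flipped: "block two here!!" becomes "clock two here!!".
console.log(pt.toString("latin1"));
```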

Conway’s Law and Software Security

In “Conway’s Law: does your organization’s structure make software security even harder?,” Steve Lipner mixes history and wisdom:

As a result, the developers understood pretty quickly that product security was their job rather than ours. And instead of having twenty or thirty security engineers trying to “inspect (or test) security in” to the code, we had 30 or 40 thousand software engineers trying to create secure code. It made a big difference.