Speculative Execution Threat Model

There’s a long and important blog post from Matt Miller, “Mitigating speculative execution side channel hardware vulnerabilities.”

What makes it important is that it’s a model of these flaws, and helps us understand their context and how else they might appear. It’s also nicely organized along threat modeling lines.

What can go wrong? There’s a set of primitives (conditional branch misprediction, indirect branch misprediction, and exception delivery or deferral). These primitives are combined into windowing gadgets and disclosure gadgets.
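To make the first primitive concrete, here is a minimal sketch in C of the classic conditional-branch-misprediction pattern (Spectre variant 1), where a mispredicted bounds check turns an ordinary array access into a disclosure gadget. The names (`victim_function`, `array1`, `array2`) are illustrative, not taken from Miller’s post.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre v1 (bounds-check bypass) gadget.
 * If the branch predictor guesses "in bounds" for an out-of-range x,
 * the CPU may speculatively read array1[x] (a byte the attacker should
 * never see) and then use that secret value to index array2. The
 * resulting cache footprint is the disclosure channel an attacker can
 * later measure with a timing probe. */

#define ARRAY1_SIZE 16

uint8_t array1[ARRAY1_SIZE];
uint8_t array2[256 * 512]; /* one cache line per possible byte value */

uint8_t victim_function(size_t x) {
    if (x < ARRAY1_SIZE) {              /* conditional branch: may be mispredicted */
        return array2[array1[x] * 512]; /* speculative load keyed on a secret byte */
    }
    return 0;
}
```

Architecturally the out-of-bounds call always returns 0; the leak happens only in the microarchitectural state left behind by the speculative load.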

There are also models for the mitigations, including classes of ways to prevent speculative execution, to remove sensitive content from memory, and to remove observation channels.

Citizen Threat Modeling and more data

Last week, in “Threat Modeling: Citizens Versus Systems,” I wrote:

I think that was a right call for the first project, because the secondary data flows are a can of worms, and drawing them would, frankly, look like a can of worms.

Many organizations don’t disclose them beyond saying “we share your data to deliver and improve the service”; those that do go farther disclose little about the specifics of what data is transferred to whom.

PayPal Partnerships
Today, via Bruce Schneier, we see that PayPal has disclosed the list of over 600 companies they might share your data with. He rightly asks if that’s unusual. We don’t know. My instinct is that it’s not unusual for a multinational financial company.

I’m standing by the questions I asked; the first level of categories in the PayPal list may act as a good third level for our analysis. It will be interesting to see if others use the same categories. If they don’t, the analysis effort is multiplied.

Their categories are:

  1. Payment Processors
  2. Audit
  3. Customer Service outsourcing
  4. Credit reference and fraud agencies
  5. Financial products
  6. Commercial partnerships
  7. Marketing and public relations
  8. Operational services
  9. Group companies
  10. Commercial partners
  11. Legal
  12. Agencies

It’s unclear to me how 6 (“Commercial partnerships”) differs from 10 (“Commercial partners”). I say this because I’m curious, not to point and laugh. We should cut PayPal some slack and appreciate that this is a new process to handle a new legal requirement. I’m also curious whether 12 (“Agencies”) means “law enforcement agencies” or something else.

Visualization from How PayPal Shares Your Data.

Threat Modeling: Citizens Versus Systems

Recently, we shared a privacy threat model which was centered on the people of Seattle, rather than on the technologies they use.

Because of that, we had different scoping decisions than I’ve made previously. I’m working through what those scoping decisions mean.

First, we cataloged how data is being gathered. We didn’t get to “what can go wrong?” We didn’t ask about secondary uses or transfers — yet. I think that was a right call for the first project, because the secondary data flows are a can of worms, and drawing them would, frankly, look like a can of worms. We know that most of the data gathered by most of these systems is weakly protected from government agencies. Understanding what secondary data flows can happen will be quite challenging. Many organizations don’t disclose them beyond saying “we share your data to deliver and improve the service”; those that do go farther disclose little about the specifics of what data is transferred to whom. So I’d like advice: how would you tackle secondary data flows?

Second, we didn’t systematically look at the question of what could go wrong. Each of those examinations could be roughly the size and effort of a product threat model. Each requires an understanding of a person’s risk profile: victims of intimate partner violence are at risk differently than immigrants. We suspect there are models there, and working on them is a collaborative task. I’d like advice here. Are there good models of different groups and their concerns on which we could draw?

Threat Modeling Privacy of Seattle Residents

On Tuesday, I spoke at the Seattle Privacy/TechnoActivism 3rd Monday meeting, and shared some initial results from the Seattle Privacy Threat Model project.

Overall, I’m happy to say that the effort has been a success, and opens up a set of possibilities.

  • Every participant learned about threats they hadn’t previously considered. This is surprising in and of itself: there are few better-educated sets of people than those willing to commit hours of their weekends to threat modeling privacy.
  • We have a new way to contextualize the decisions we might make, evidence that we can generate these in a reasonable amount of time, and an example of that form.
  • We learned about how long it would take (a few hours to generate a good list of threats, a few hours per category to understand defenses and tradeoffs), and how to accelerate that. (We spent a while getting really deep into threat scenarios in a way that didn’t help with the all-up models.)
  • We saw how deeply and complexly mobile phones and apps play into privacy.
  • We got to some surprising results about privacy in your commute.

More at the Seattle Privacy Coalition blog, “Threat Modeling the Privacy of Seattle Residents,” including slides, whitepaper and spreadsheets full of data.

BlackHat and Human Factors

As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. The 2018 call for papers is open and closes April 9th. Over the past few years, I think we’ve developed an interesting track with good material year over year.

I wrote a short blog post on what we look for.

The BlackHat CFP calls for work which has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains work you’ve done for the BlackHat audience. Often, BlackHat does not count as “publication” in the view of academic program committees, and so you can present something at BlackHat that you plan to publish later. (You should of course check with the other venue, and disclose that you’re doing so to BlackHat.)

If you’re considering submitting, I encourage you to read all three recommendations posts at https://usa-briefings-cfp.blackhat.com/

Keep the Bombe on the Bletchley Park Estate

There’s a fundraising campaign to “Keep the Bombe on the Bletchley Park Estate.”

The Bombe was a massive intellectual and engineering achievement at the British codebreaking center at Bletchley Park during the Second World War. The Bombes were all disassembled after the war, and the plans destroyed, making the reconstruction of the Bombe at Bletchley a second impressive achievement.

My photo is from the exhibit on the reconstruction.

Doing Science With Near Misses

Last week at Art into Science, I presented “That was Close! Doing Science with Near Misses” (Google, pptx.)

The core idea is that we should borrow from aviation to learn from near misses, and learn to protect ourselves and our systems better. The longer form is in the draft paper, “That Was Close! Reward Reporting of Cybersecurity ‘Near Misses.’”

The talk was super-well received, and I’m grateful to Sounil Yu and the participants in the philosophy track, who juggled so we could collaborate and brainstorm. If you’d like to help, by far the most helpful way would be to tell us about a near miss you’ve experienced using our form, and give us feedback on the form. Since Thursday, I’ve added a space for that feedback, and made a few other suggested adjustments which were easy to implement.

If you’ve had a chance to think about definitions for either near misses or accidents, I’d love to hear about those, in comments, in your blog (trackbacks should work), or whatever works for you. If you were at Art Into Science, there’s a #near-miss channel on the conference Slack, and I’ll be cleaning up the notes.

Image from the EHS Database, who have a set of near miss safety posters.

Jonathan Marcil’s Threat Modeling Toolkit talk

There’s a lot of threat modeling content here at AppSec Cali, and sadly, I’m only here today. Jonathan Marcil has been a guest here on Adam & friends, and today is talking about his toolkit: data flow diagrams and attack trees.

His world is very time constrained, and it’s standing room only.

  • Threat modeling is an appsec activity: understand attackers and systems.
  • For security practitioners and software engineers. A tool to help clarify what the system is for reviewers. Highlight ameliorations or requirements.
  • Help catch important things despite chaos.
  • Must be collaborative: communication is key.
  • Being wrong is great: people get engaged to correct you!
  • Data flow diagrams vs connection flow diagrams: visual overload. This is not an architectural doc, but an aid to security discussion. He suggests extending the system modeling approach to fit your needs, which is great, and is why I put my definition of a DFD3 on GitHub; let’s treat our tools as artifacts like developers do.
  • An extended example of modeling Electrum.
  • The system model helps organize your own thoughts. Build a visual model of the things that matter to you, leave out the bits that don’t matter.
  • Found a real JSONRPC vuln in the wallet because of investigations driven by system model.
  • His models also have a “controls checklist;” “these are the controls I think we have.” Controls tied by numbers to parts of diagram. Green checklists are a great motivator.
  • Discussion of one line vs two; would another threat modeling expert be able to read this diagram? What would be a better approach for a SAML-based system? Do you need trust boundaries between the browser and the IDP? What’s going through your head as you build this?
  • Use attack trees to organize threat intelligence: roots are goals, leaves are routes to goals. If the goal is to steal cryptocurrency, one route is to gain wallet access, via stealing the physical wallet or software access. (Sorry, I’m bad at taking photos as I blog.) He shows the attack tree growing in a nice succession of slides.
  • Attack trees are useful because they’re re-usable.
  • Uses PlantUML to draw trees with code, which brings the advantages of version control and automatic tree layout.
  • Questions: How to collaborate with and around threat models? How to roll out to a group of developers? How to sell them on doing something beyond a napkin sketch?
  • Diagrams for architects versus diagrams for developers.
  • If we had an uber-tree, it wouldn’t be useful because you need to scope it and cut it. (Adam adds: perhaps scoping and cutting are easier than creating, if the tree isn’t overwhelming?)
  • Link attack tree to flow diagram; add the same numbered controls to the attack tree.
  • If you can sit in the threat modeling meeting and say nothing, you’ve won!
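The tree-as-code idea from the talk can be sketched with PlantUML’s mindmap syntax. This fragment encodes the cryptocurrency example above; the node labels are paraphrased from my notes, not Jonathan’s actual source:

```plantuml
@startmindmap
' Root is the attacker's goal; leaves are routes to it.
* Steal cryptocurrency
** Gain wallet access
*** Steal the physical wallet
*** Gain software access to the wallet
@endmindmap
```

Because the tree lives in a text file, it can be diffed, reviewed, and version-controlled like any other code, and PlantUML handles the layout as branches are added.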

Lastly, Jonathan did a great job of live-tweeting his own talk.