What Should Training Cover?

Chris Eng said, “Someone should set up a GoFundMe to send whoever wrote the hit piece on password managers to a threat modeling class.”

And while it’s pretty amusing, you know, I teach threat modeling classes. I spend a lot of time crafting explicit learning goals and considering and refining instructional methods, so when a smart fellow like Chris says this, my question is: why?

Is this “threat modeling as our only hope”? That’s when we take a hard security problem, sagely say “better threat modeling,” and then wander off. It’s even better with hindsight.

Or is there a particular thing that a student should be learning in a threat modeling class? There was a set of flaws where master passwords were accessible in memory, and thus an attacker with a debugger could get your master password and decrypt all your passwords.
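To make the flaw concrete: once a program has held a secret in memory, it is surprisingly hard to guarantee the secret is gone. Here is a minimal Python sketch of the best-effort mitigation (the `unlock_vault` function is an invented placeholder, not any real product’s API):

```python
import getpass

def unlock_vault(master: bytearray) -> None:
    """Invented placeholder: derive the vault key from the master
    password and decrypt the stored entries."""
    ...

# Hold the master password in a mutable buffer so it can be overwritten.
master = bytearray(getpass.getpass("Master password: "), "utf-8")
try:
    unlock_vault(master)
finally:
    # Best-effort scrubbing: zero the buffer so that a later memory dump
    # (or an attached debugger) is less likely to see the cleartext.
    for i in range(len(master)):
        master[i] = 0
```

Note the caveats: `getpass` returned an immutable string we cannot scrub, and the interpreter may have made other copies along the way. Even careful code struggles to keep secrets out of RAM, which is part of why this flaw class keeps recurring.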

I’m not going to link the hit piece, because its authors don’t deserve your clicks, impressions, or ad displays. It asserted that these flaws mean a password manager is no better than a text file full of your passwords.

Chris’ point is that we should not tell people that using a password manager is bad, and I agree. It’s an essential part of defending against your passwords being leaked by a third-party site. And an attacker who can read memory can also read backing stores like disk (in fact, reading disk is easier than reading RAM), so the text file fares no better.

So to loop this around to threat modeling, we can consider a bunch of skills or knowledge that could be delivered via training:

  1. Enumerate attacker capabilities. “An attacker who can run code as Alice can do everything Alice’s account can do.” (I am, somewhat famously, not a fan of “think like an attacker”, and while I remain skeptical of enumerating attacker motivations, this is about attacker capabilities.)
  2. Understand how attacks like spoofing can take place. Details like password stuffing and how modern brute force attacks work are a set of facts that a student could learn. (See the sketch after this list.)
  3. Perform multiple analyses, and compare the results. If “what can go wrong” is “someone accesses your passwords by X or Y,” what are the steps to do that? Which parts of the defenses are in common? Which are unique? This is a set of tasks that someone could learn.
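On point 2, one concrete, teachable fact is that password stuffing works because people reuse passwords that have already leaked. Here is a minimal sketch of checking for that, using the public Pwned Passwords range API; its k-anonymity design means only the first five hex characters of the password’s SHA-1 hash ever leave your machine:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # The API returns every known hash suffix sharing our 5-char prefix.
    with urllib.request.urlopen(
        f"https://api.pwnedpasswords.com/range/{prefix}"
    ) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(breach_count("password123"))  # a very large number, unsurprisingly
```

A student who has run something like this understands, viscerally, why replaying leaked credentials against a login form succeeds so often.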

I structure classes around the four-question frame of “what are we working on, what can go wrong, what are we going to do, did we do a good job.” I work to build up skills in each of those, to show how they interact with each other and with other engineering work. I think asking “what else could that attacker do with that access?” is an interesting sub-question of question 2. How attacks work, with a selection of real-world attacks, is something I’ve taught to non-security audiences (it feels like review for security folks). The third, comparing models, I don’t consider a basic skill.

I’m curious: are there other ways in which a threat modeling class could or should help its students see that ‘password managers are no better than text files’ is bad threat modeling?

Image: “Key elements and relationships in curriculum” model, from Flinders University.

The Queen of the Skies and Innovation

The Seattle Times has a story today about how “50 years ago today, the first 747 took off and changed aviation.” It’s true. The 747 was a marvel of engineering and luxury. Joe Sutter’s book about it is a great story of engineering leadership. For an upcoming flight, I paid extra to reserve an upper-deck seat before the last of the passenger-carrying Queens of the Skies retires.

And in a way, the 747 represents a pinnacle of aviation engineering. It was fast, it was long-range, it was comfortable. There is no arguing that today’s planes are lighter and quieter, with better air, in-seat power, and entertainment, but I’m still happy to be flying on one, and there are still a few left to be delivered as cargo airplanes until 2022. (You can get lost in the Wikipedia article.)

And I want to talk a little not about the amazing aircraft, but about the regulatory tradeoffs made for aircraft and for computers.

As mentioned, the 50-year-old design, with a great many improvements, remains in production. Also pictured is what’s probably a 1960s-era Bell System 500 (note the integrated handset cord). Now, if 747s crashed at the rate of computers running Windows, there wouldn’t be any left. Regulation has made aviation safe, but the rate of innovation is low. (Brad Templeton has some thoughts on this in “Tons of new ideas in aviation. Will regulation stop them?”)

In contrast, innovation in phones, computers, and networks has transformed roughly every aspect of life over the last 25 years. The iPhone alone turned the phone into a computer full of apps.

This has security costs. It is nearly impossible to function in society without a mobile phone. Your location is tracked constantly. A vulnerability in your phone leads to compromise of astounding amounts of personal data. These security costs scale when someone finds a vulnerability. Bruce Schneier has recently written about how this all comes together, leading him to say that even bad regulation is probably better than no regulation.

Image: the iPhone replaces lots of things.

In some ways, we’re already accepting these controls: see “15 Controversial Apps That Were Banned From Apple’s App Store,” or “Google has ‘banned’ these 14 apps from Play Store.” Controls imposed by one of the two companies wealthy enough to compete in mobile phone operating systems are importantly different from government controls, except, of course, when those companies remove apps at the behest of governments.

I don’t know how to write regulation that allows permissionless innovation at the pace we’re used to while balancing it against security and privacy. Something’s likely to give, and we need to think about how to make the societal tradeoffs well. Does anyone?

(Lastly, speaking of that upper-deck reservation, I want to give a shout-out to TProphet’s Award Cat, who drew my attention to the aircraft type and opportunity.)

Nature and Nurture in Threat Modeling

Josh Corman opened a bit of a can of worms a day or two ago, asking on Twitter: “pls RT: who are the 3-5 best, most natural Threat Modeling minds? Esp for NonSecurity people. @adamshostack is a given.” (Thanks!)

What I normally say to this is that I don’t think I’m naturally good at finding replay attacks in network protocols — my farming ancestors got no chance to exercise such talents, and so it’s a skill I acquired. Similarly, whatever lets me spot such problems doesn’t help me spot lions on the savannah or detect food that’s slightly off.

If we’re going to scale threat modeling, to be systematic and structured, we need to work from a body of knowledge that we can teach and test. We need structures like my four-question framework (what are we working on, what can go wrong, what do we do, did we do a good job), and we need structures like STRIDE and Kill Chains to help us be systematic in our approaches to discovering what can go wrong. Part of the reason the framework works is it allows us to have many ways to threat model, instead of “the one true way.”
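To make “systematic” concrete, here is a tiny sketch of STRIDE-per-element brainstorming; the three-element model is invented for illustration:

```python
# Walk each element of a (toy) system model through the STRIDE
# categories, generating one "what can go wrong" prompt per pair.
STRIDE = [
    "spoofing",
    "tampering",
    "repudiation",
    "information disclosure",
    "denial of service",
    "elevation of privilege",
]

elements = ["browser", "web server", "password database"]  # invented model

for element in elements:
    for threat in STRIDE:
        print(f"How could {threat} affect the {element}?")
```

The structure does the remembering, so the analyst’s attention goes to answering the prompts rather than generating them.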

But that’s not a sufficient answer: from Rembrandt to da Vinci, artists of great talent have appeared seemingly from nowhere, and they were identified and taught. The existence of schools, with curricula and codified knowledge, is important.

Even with brilliant artists (and I have no idea how to identify them consistently), we need more people to paint walls than we need people to paint murals. We need to scale the basic skills, and as we do so we’ll learn how to identify the “naturals.”

Photo: Max Pixel.

“Fire Doesn’t Innovate” by Kip Boyle (Book Review)

I hate reviewing books by people I know, because I am a picky reader, and if you can’t say anything nice, don’t say anything at all. I also tend to hate management books, because they often substitute jargon for crisp thinking. So I am surprised, but, here I am, writing a review of Kip Boyle’s “Fire Doesn’t Innovate.”

I’m giving little away by saying the twist is that attackers do innovate, and it’s a surprisingly solid frame on which Kip hangs a readable and actionable book for executives who need to make cybersecurity decisions. And it doesn’t fall into the jargon trap either in security or management.

It is not a book for the CSO. It is a book for executives, including, but not limited to, CEOs. They need to understand why cyber risks aren’t like fire risks, they need to drive action by their companies, and they don’t need, want, or have the time to learn to talk about the difference between Fancy Bear and SQL injection.

In this, it is far less detailed than Peter Singer and Allan Friedman’s “Cybersecurity and Cyberwar.” That book is intended to act as a primer and get people ready for deeper learning. “Fire” is much more for the busy executive who needs to know what questions to ask, what good answers look like, and what to tell their team to go do.

The book is organized into two major parts. Part I is basic cyber ‘hygiene’ for the exec, including actionable steps like turning on updates, backups, and two-factor authentication. (I disagree with his blanket advice to never pay ransoms; getting your business back is probably better than losing it.) Part II covers what to do. It’s organized around the NIST Cybersecurity Framework, and makes it actionable. The action is in three parts: assess, plan, and execute, all on an annual schedule.

Part of me burns with the urge to scream “that’s too simplistic!” But I know that for a lot of executives, that’s what they need as they get started. The nuance and complexity that we can bring to their problem leads to a feeling that cyber is overwhelming and impossible. So they do nothing. There’s an important lesson and model here for those writing ‘how to be safe on the internet’ guidance, and maybe there’s a second book here for normal folks.

There’s another trap that Kip avoids: the book that tells you about the secret sauce but doesn’t reveal it. Those books are essentially ads for the thing the author has to sell, telling you just enough to get you to pick up the phone. “Fire” doesn’t do that. It lays out, specifically: here are the questions to ask. Here’s the email to frame the project. Here’s how to interpret results. It’s a brave move, but one that I think is wise. (My threat modeling book tells you what you need to know, and people still call me looking for help. The coaching, the “here’s the nugget you need,” and the comparisons all make for a good business.)

I don’t know of another book at this level. Buy it for the execs you know.

Disclosure: I bought a copy of the Kindle Edition, and Kip gave me a signed copy of the paperback. He says nice things about me in the acknowledgements.

Incentives and Multifactor Authentication

It’s well known that adoption rates for multi-factor authentication are poor. For example, “Over 90 percent of Gmail users still don’t use two-factor authentication.”
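For readers who have not looked under the hood, the second factor being adopted is usually a time-based one-time password. Here is a minimal sketch of the standard TOTP computation (RFC 6238), with an invented demo secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period  # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Invented demo secret; a real one is provisioned when you scan the
# enrollment QR code, and is shared only between you and the site.
print(totp("JBSWY3DPEHPK3PXP"))
```

Both sides hold the secret and the clock, so the six digits prove possession of the enrolled device; a phished or leaked password alone is no longer enough.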

Someone mentioned to me that some games offer bonuses for turning it on. You get access to special rooms in Star Wars: The Old Republic. There’s a special emote in Fortnite.

How well do these incentives work? Are there numbers out there?

This is a really interesting post* about how many simple solutions to border security fail in the real world:

  • Not everywhere has the infrastructure necessary to upload large datasets to the cloud.
  • Most cloud providers are in not-great jurisdictions for some threat models.
  • Lying to border authorities, even by omission, ends badly.

The fact is, the majority of “but why don’t you just…” solutions in this space either require lying, rely on infrastructure that may be nonexistent or jurisdictionally compromised, or fail openly.

* The “post” was originally a long Twitter thread, archived for the moment at ThreadReader App, a far, far better UI than Twitter.
