“Cybersecurity is not very important” is a new paper by the very smart Andrew Odlyzko. I don’t agree with everything he says, but it’s worth reading, and worth pondering whether and why you disagree with it. I think I agree with it more than I disagree.
RSA has posted a video of my talk, “Threat Modeling in 2019.”
I’ve signed on to Access Now’s letter to the Indian Ministry of Electronics and Information Technology, asking the Government of India to withdraw the draft amendments proposed to the Information Technology (Intermediary Guidelines) Rules.
As they say in their press release:
Today’s letter, signed by an international coalition of 31 organizations and individuals, explains how the proposed amendments threaten fundamental rights and the space for a free internet, while not addressing the problems that the Ministry aims to resolve. A key concern is the requirement for intermediaries to “enable tracing out of such originator” of content that an intermediary hosts, which could lead to demands that providers weaken the security features of their products and services. This threat to privacy would in turn endanger free expression.
There are only a few good times to use a pie chart, but to help you celebrate, here’s how to keep track of your intake:
The fine folks at AppSecCali have posted videos, including my talks: “A Seat At The Table” and “Game On! Adding Privacy to Threat Modeling” (with Mark Vinkovits).
Bruce Schneier and I wrote an article on Facebook’s privacy changes: “A New Privacy Constitution for Facebook.”
I’m quite happy to say that my next Linkedin Learning course has launched! This one is all about spoofing.
It’s titled “Threat Modeling: Spoofing in Depth.” It’s free until at least a week after RSA.
Also, I’m exploring the idea that security professionals lack a shared body of knowledge about attacks, and that an entertaining and engaging presentation of such a BoK could be a useful contribution. One way to test this is to ask how often you hear attacks discussed at a level of abstraction that puts them into a category other than “OMG the sky is falling, patch now.” Another is to watch for fluidity in moving from one type of spoofing attack to another.
Part of my goal of the course is to help people see that attacks cluster and have similarities, and that STRIDE can act as a framework for chunking knowledge.
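To make the chunking idea concrete, here is a minimal sketch (mine, not from the course) of STRIDE as a lookup structure: each category groups concrete attacks, so knowledge of one attack transfers to its neighbors. The example attacks listed are illustrative choices, not an authoritative taxonomy.

```python
from typing import Optional

# STRIDE categories as "chunks": each groups related attacks.
# The specific attacks listed here are illustrative examples only.
STRIDE = {
    "Spoofing": ["phishing", "credential stuffing", "ARP spoofing"],
    "Tampering": ["SQL injection", "binary patching"],
    "Repudiation": ["log deletion", "missing audit trails"],
    "Information disclosure": ["memory scraping", "traffic sniffing"],
    "Denial of service": ["SYN flood", "resource exhaustion"],
    "Elevation of privilege": ["buffer overflow", "confused deputy"],
}

def categorize(attack: str) -> Optional[str]:
    """Return the STRIDE category an attack clusters under, if known."""
    for category, examples in STRIDE.items():
        if attack in examples:
            return category
    return None

print(categorize("credential stuffing"))  # Spoofing
```

The point isn’t the code, it’s the structure: once attacks cluster, a new attack can be filed next to ones you already understand.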
I’ll be speaking three times at the RSA conference, and once at a private event for Continuum:
- “2028 Future State: Long Live the Firewall?” with Jennifer Minella, Harry Sverdlove and Marcus Ranum. March 5 | 1:00 PM – 1:50 PM | Moscone West 3001
- Threat modeling brunch with IriusRisk. March 6 | 10:00 AM – 11:00 AM | See site for registration
- How to Measure Ecosystem Impacts with Jay Jacobs. March 7 | 1:30 PM – 2:20 PM | Moscone West 2011
- Threat Modeling in 2019. March 8 | 8:30 AM – 9:20 AM | Moscone South 205
And while it’s pretty amusing, I do, you know, teach threat modeling classes. I spend a lot of time crafting explicit learning goals and considering and refining instructional methods, so when a smart fellow like Chris says this, my question is: why?
Is this “threat modeling as our only hope”? That’s when we take a hard security problem, sagely say “better threat modeling,” and then wander off. It’s even better with hindsight.
Or is there a particular thing that a student should be learning in a threat modeling class? There was a set of flaws where master passwords were accessible in memory, and thus an attacker with a debugger could get your master password and decrypt all your passwords.
I’m not going to link the hit piece, because they don’t deserve your clicks, impressions, or ad displays. It asserted that these flaws mean a password manager is no better than a text file full of your passwords.
Chris’ point is that we should not tell people that using a password manager is bad, and I agree. It’s an essential part of defending against your passwords being leaked by a third-party site. An attacker who can read memory can also read its backing stores, like disk; in fact, reading disk is easier than reading RAM.
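The asymmetry is worth making concrete. A toy illustration (this is NOT real cryptography, just an XOR keystream to make the point): an attacker who can only read disk recovers your passwords directly from a text file, but gets opaque bytes from a vault; only an attacker who can also scrape the master password from memory gets further.

```python
import hashlib

def toy_keystream(master: str, n: int) -> bytes:
    """Derive n pseudorandom bytes from the master password (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(master.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_vault(master: str, data: bytes) -> bytes:
    """XOR 'encryption' — symmetric, so the same call also decrypts. Toy only."""
    ks = toy_keystream(master, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secrets = b"hunter2\nswordfish"
on_disk_textfile = secrets                       # disk reader gets passwords directly
on_disk_vault = toy_vault("master-pw", secrets)  # disk reader gets opaque bytes

assert on_disk_textfile == secrets
assert on_disk_vault != secrets
# Only with the master password (e.g., scraped from memory) does the vault open:
assert toy_vault("master-pw", on_disk_vault) == secrets
```

So the vault strictly dominates the text file: the weaker, more common attacker (disk access) gets nothing useful, and the stronger attacker (memory access) would have beaten the text file too.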
So to loop this around to threat modeling, we can consider a bunch of skills or knowledge that could be delivered via training:
- Enumerate attacker capabilities. “An attacker who can run code as Alice can do everything Alice’s account can do.” (I am, somewhat famously, not a fan of “think like an attacker”, and while I remain skeptical of enumerating attacker motivations, this is about attacker capabilities.)
- Understand how attacks like spoofing take place. Details like credential stuffing and how modern brute-force attacks work are a set of facts that a student could learn.
- Perform multiple analyses, and compare the result. If “what can go wrong” is “someone accesses your passwords by X or Y,” what are the steps to do that? What part of the defenses are in common? Which are unique? This is a set of tasks that someone could learn.
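That last skill — comparing analyses to find common and unique defenses — can be sketched with simple set operations. The attack paths and defense names below are hypothetical, chosen only to illustrate the comparison:

```python
# Defenses relevant to two hypothetical attack paths against a password manager.
# Names are illustrative, not a recommendation.
path_debugger = {
    "OS login password",
    "restrict debugger attachment",
    "zero memory after use",
}
path_stolen_disk = {
    "OS login password",
    "full-disk encryption",
    "strong master password",
}

shared = path_debugger & path_stolen_disk          # defenses covering both paths
only_debugger = path_debugger - path_stolen_disk   # unique to the memory attack
only_stolen_disk = path_stolen_disk - path_debugger

print(sorted(shared))
```

Seeing which defenses appear in every analysis (and which are path-specific) is exactly the comparison the bullet describes, and it’s a mechanical task a student can practice.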
I structure classes around the four-question frame of “what are we working on, what can go wrong, what are we going to do, did we do a good job.” I work to build up skills in each of those, show how they interact, and show how they interact with other engineering work. I think asking “what else could that attacker do with that access?” is an interesting sub-question of question 2. How attacks work, with a selection of real-world attacks, is something I’ve presented for non-security audiences (it feels like review for security folks). The third, comparing between models, I don’t feel is a basic skill.
I’m curious: are there other ways in which a threat modeling class could or should help its students see that ‘password managers are no better than text files’ is bad threat modeling?
Image (model) from Flinders University, Key elements and relationships in curriculum
“Making the Case for a Cybersecurity Moon Shot” is my latest, over at Dark Reading.
“There’s been a lot of talk lately of a cybersecurity moon shot. Unfortunately, the model seems to be the war on cancer, not the Apollo program. Both are worthwhile, but they are meaningfully different.”