Category: education

What Should Training Cover?

Chris Eng said “Someone should set up a GoFundMe to send whoever wrote the hit piece on password managers to a threat modeling class.”

And while it’s pretty amusing, I do teach threat modeling classes. I spend a lot of time crafting explicit learning goals and considering and refining instructional methods, so when a smart fellow like Chris says this, my question is: why?

Is this “threat modeling as our only hope?” That’s when we take a hard security problem and sagely say “better threat modeling.” Then we wander off. It’s even better with hindsight.

Or is there a particular thing that a student should be learning in a threat modeling class? There was a set of flaws where master passwords were accessible in memory, and thus an attacker with a debugger could get your master password and decrypt all your passwords.

I’m not going to link the hit piece because they deserve to not have your clicks, impressions, or ad displays. It asserted that these flaws mean that a password manager is no better than a text file full of your passwords.

Chris’ point is that we should not tell people that using a password manager is bad, and I agree. It’s an essential part of defending against your passwords being leaked by a third party site. An attacker who can read memory can read memory, which includes backing stores like disk; in fact, reading disk is easier than reading RAM.

So to loop this around to threat modeling, we can consider a bunch of skills or knowledge that could be delivered via training:

  1. Enumerate attacker capabilities. “An attacker who can run code as Alice can do everything Alice’s account can do.” (I am, somewhat famously, not a fan of “think like an attacker”, and while I remain skeptical of enumerating attacker motivations, this is about attacker capabilities.)
  2. Understand how attacks like spoofing can take place. Details like credential stuffing and how modern brute force attacks work are a set of facts that a student could learn.
  3. Perform multiple analyses, and compare the result. If “what can go wrong” is “someone accesses your passwords by X or Y,” what are the steps to do that? What part of the defenses are in common? Which are unique? This is a set of tasks that someone could learn.
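To make the second point concrete, the gap between brute force and credential stuffing can be shown with a back-of-the-envelope calculation. This is a sketch; the guessing rate and password parameters are illustrative assumptions, not measurements:

```python
def brute_force_years(charset_size: int, length: int,
                      guesses_per_second: float) -> float:
    """Worst-case time to exhaust a random password's keyspace, in years."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / (60 * 60 * 24 * 365)

# A random 12-character password over ~95 printable ASCII characters,
# against an offline rig making 10 billion guesses/second (assumed rate):
offline = brute_force_years(95, 12, 10e9)
print(f"offline brute force: ~{offline:,.0f} years")

# The same attacker doing credential stuffing needs exactly one "guess"
# per account if the password was reused on a site that was breached.
print("credential stuffing of a reused password: 1 guess")
```

The point the numbers make is the one a class should make: the attacker’s cheapest capability (replaying a breached, reused password) is the one a password manager defends against, by making every password unique and random.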

I structure classes around the four-question frame of “what are we working on, what can go wrong, what are we going to do, did we do a good job.” I work to build up skills in each of those, show how they interact, and how they interact with other engineering work. I think asking ‘what else could that attacker do with that access’ is an interesting sub-question of question 2. How attacks work, with a selection of real world attacks, is something I’ve done for non-security audiences (it feels like review for security folks). The third skill, comparing between models, is not one I’d consider basic.

I’m curious: are there other ways in which a threat modeling class could or should help its students see that ‘password managers are no better than text files’ is bad threat modeling?

Image (model) from Flinders University: “Key elements and relationships in curriculum.”

Pivots and Payloads

SANS has announced a new boardgame, “Pivots and Payloads,” that “takes you through pen test methodology, tactics, and tools with many possible setbacks that defenders can utilize to hinder forward progress for a pen tester or attacker. The game helps you learn while you play. It’s also a great way to showcase to others what pen testers do and how they do it.”

If you register for their webinar, which is on Wednesday the 19th, they’ll send you poster versions that convert to boardgames.

If you’re interested in serious games for security, I maintain a list at

The Worst User Experience In Computer Security?

I’d like to nominate Xfinity’s “walled garden” for the worst user experience in computer security.

For those not familiar, Xfinity has a “feature” called “Constant Guard” that monitors your internet traffic for (I believe) DNS lookups and IP connections to known botnet command and control servers. When they think you have a bot, you see warnings, which are inserted into your web browsing via a man-in-the-middle (MITM) injection.

Recently, I was visiting family, and there was, shock of all shocks, an infected machine. So I pulled out my handy-dandy FixMeStick*, and let it do its thing. It found and removed a pile of cruft. And then I went to browse the web, and still saw the warnings that the computer was infected. This is the very definition of a wicked environment, one in which feedback makes it hard to understand what’s happening. (A concept that Jay Jacobs has explicitly tied to infosec.)

So I manually removed Java, and spent time reading the long list of programs that start at boot (via Autoruns, which Xfinity links to if you can find the link), re-installed Firefox, and did a stack of other cleaning work. (Friends at browser makers: it would be nice if there was a way to forcibly remove plugins, rather than just disable them).

As someone who’s spent a great deal of time understanding malware propagation methods, I was unable to decide if my work was effective. I was unable to determine the state of the machine, because I was getting contradictory signals.

My family (including someone who’d been a professional Windows systems administrator) had spent a lot of time trying to clean that machine and get it out of the walled garden. The tools Xfinity provided did not work. They did not clean the malware from the system. Worse, the feedback Xfinity themselves provided was unclear and ambiguous (in particular, the malware in question was never named, nor was the date of the last observation available). There was no way to ask for a new scan of the machine. That may make some narrow technical sense, given the nature of how they’re doing detection, but that does not matter. The issue here is that a person of normal skill cannot follow their advice and clean the machine. Even a person with some skill may be unable to see if their work is effective. (I spent a good hour reading through what runs at startup via Autoruns).

I understand the goals of these walled garden programs. But the folks running them need to invest in talking to the people in the gardens, and understand why they’re not getting out. There are good reasons for those failures, and we need to study the failures and address those reasons.

Until then, I’m interested in hearing if there’s a worse user experience in computer security than being told your recently cleaned machine is still dirty.

* Disclaimer: FixMeStick was founded by friends who I met at Zero-Knowledge Systems, and I think that they refunded my order. So I may be biased.

Email Security Myths

My buddy Curt Hopkins is writing about the Petraeus case, and asked:

I wonder, in addition to ‘it’s safe if it’s in the draft folder,’ how many
additional technically- and legally-useless bits of sympathetic magic that
people regularly use in the belief that it will save them from intrusion or
discovery, either based on the law or on technology? 

In other words, are there a bunch of ‘old wives’ tales’ you’ve seen that people
believe will magically ensure their privacy?

I think it’s a fascinating question–what are the myths of email security, and for the New School bonus round, how would we test their efficacy?

I should be clear that he’s writing for The Daily Dot, and would love our help [for his follow-up article].

[Updated with a fixed link.]

Effective training: Wombat's USBGuru

Many times when computers are compromised, the compromise is stealthy. Take a moment to compare that to being attacked by a lion. There, the failure to notice the lion is right there, in your face. Assuming you survive, you’re going to relive that experience, and think about what you can learn from it. But in security, you don’t have that experience to re-live. That means that your ability to form good models of the world is inhibited. Another way of saying that is that our natural learning processes are inhibited.

Wombat Security makes a set of products that are designed to help with those natural learning processes. I like these folks for a variety of reasons, including their use of games, and their data-driven approach to the world. I’d like to be clear that I have no commercial connection to Wombat, I just like what they’re doing.

Their latest product, USBGuru, is a service that allows you to quickly create learning loops for the USB-in-the-parking-lot problem. It includes a way to create a USB stick with a small program on it. That program checks the username, and reports it to Wombat. This allows you to deliver training when the stick is inserted, or when the end user is tricked into running code. It also allows you to track when people fall for the attack, and (over time) measure if the training is having an effect.
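The reporting step described above is simple enough to sketch. This is a hypothetical illustration of the mechanism, not Wombat’s actual code, and the endpoint URL and field names are assumptions:

```python
import getpass
import json
from urllib import request

def build_report() -> dict:
    """Collect what the description above says gets reported:
    just the local username of whoever ran the stick's program."""
    return {"username": getpass.getuser()}

def send_report(endpoint: str) -> None:
    """POST the report as JSON to the training service. In a real
    deployment, this is what would trigger the 'teachable moment'
    training for that user and log the event for later measurement."""
    data = json.dumps(build_report()).encode("utf-8")
    req = request.Request(endpoint, data=data,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # fire-and-forget is enough for this sketch

# Example (hypothetical endpoint):
# send_report("https://training.example.com/usb-report")
```

Because the report identifies the user, the service can tie each insertion to a person, deliver training at that moment, and chart fall-for rates over repeated campaigns, which is exactly the measurement loop the product is built around.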

So there’s a “teachable moment”, training, and measurement. I think that’s a really cool combination, and want to encourage folks to both check out what Wombat’s USBGuru does, and compare it to other training programs they may have in place.

How Harvey Mudd Brings Women into CS

Back in October, I posted on “Maria Klawe on increasing Women in Technology.” Now the New York Times has a story, “Giving Women The Access Code:”

“Most of the female students were unwilling to go on in computer science because of the stereotypes they had grown up with,” said Zachary Dodds, a computer scientist at Mudd. “We realized we were helping perpetuate that by teaching such a standard course.”

To reduce the intimidation factor, the course was divided into two sections — “gold,” for those with no prior experience, and “black” for everyone else. Java, a notoriously opaque programming language, was replaced by a more accessible language called Python. And the focus of the course changed to computational approaches to solving problems across science.

“We realized that we needed to show students computer science is not all about programming,” said Ran Libeskind-Hadas, chairman of the department. “It has intellectual depth and connections to other disciplines.”

Well, sometimes computer science has depth and connections to reality. Other times we get wrapped around some little technical nit, and lose sight of the larger picture. Or sometimes, we just talk about crypto and key lengths.

If we want more diversity in computer security, we have to look around, see what’s working and take lessons from it. Otherwise, we’re going to stay on the hamster wheels. There’s excellent evidence that more diversity helps you solve certain classes of problems better. (See, for example, Scott Page’s “The Difference.”)