Friday Star Wars: Trek and CISSP

Larry Greenblatt is releasing a series of videos titled “Passing the CISSP Exam with the help of Spock & Kirk.” I, of course, love this, because using stories to help people learn and remember is awesome, and it reminds me of my own “The Security Principles of Saltzer and Schroeder, illustrated with Star Wars.” It also brings to mind my thoughts on Star Wars vs. Star Trek for these sorts of things.

The Worst User Experience In Computer Security?

I’d like to nominate Xfinity’s “walled garden” for the worst user experience in computer security.

For those not familiar, Xfinity has a “feature” called “Constant Guard” in which they monitor your internet traffic for (I believe) DNS lookups and IP connections to known botnet command-and-control servers. When they think you have a bot, you see warnings, which are injected into your web browsing via a MITM attack.
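As a rough illustration (this is my guess at the general approach, not Xfinity’s actual implementation), that kind of detection amounts to matching customer DNS lookups against a feed of known command-and-control domains. The domain names below are made up:

```python
# Hypothetical sketch of walled-garden detection: flag a customer when
# a DNS query matches (or is a subdomain of) a known C&C domain feed.
KNOWN_C2_DOMAINS = {"evil-botnet.example", "c2.bad.example"}  # illustrative feed


def is_suspect_lookup(domain: str) -> bool:
    """Return True if the queried domain or any parent domain is on the feed."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the full name, then each successively shorter parent domain.
    return any(".".join(labels[i:]) in KNOWN_C2_DOMAINS for i in range(len(labels)))
```

Note that a match only tells the ISP a query *happened*; it says nothing about which process made it, or whether the machine has since been cleaned, which is exactly the feedback gap described below.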

Recently, I was visiting family, and there was, shock of all shocks, an infected machine. So I pulled out my handy-dandy FixMeStick*, and let it do its thing. It found and removed a pile of cruft. And then I went to browse the web, and still saw the warnings that the computer was infected. This is the very definition of a wicked environment: one in which poor feedback makes it hard to understand what’s happening. (A concept that Jay Jacobs has explicitly tied to infosec.)

So I manually removed Java, spent time reading the long list of programs that start at boot (via Autoruns, which Xfinity links to if you can find the link), re-installed Firefox, and did a stack of other cleaning work. (Friends at browser makers: it would be nice if there were a way to forcibly remove plugins, rather than just disable them.)

As someone who’s spent a great deal of time understanding malware propagation methods, I was unable to decide if my work was effective. I was unable to determine the state of the machine, because I was getting contradictory signals.

My family (including someone who’d been a professional Windows systems administrator) had spent a lot of time trying to clean that machine and get it out of the walled garden. The tools Xfinity provided did not work. They did not clean the malware from the system. Worse, the feedback Xfinity themselves provided was unclear and ambiguous (in particular, the malware in question was never named, nor was the date of the last observation available). There was no way to ask for a new scan of the machine. That may make some narrow technical sense, given the nature of how they’re doing detection, but that does not matter. The issue here is that a person of normal skill cannot follow their advice and clean the machine. Even a person with some skill may be unable to see if their work is effective. (I spent a good hour reading through what runs at startup via Autoruns).

I understand the goals of these walled garden programs. But the folks running them need to invest in talking to the people stuck in the gardens, and understand why they’re not getting out. There are good reasons for those failures, and we need to study the failures and address those reasons.

Until then, I’m interested in hearing if there’s a worse user experience in computer security than being told your recently cleaned machine is still dirty.

* Disclaimer: FixMeStick was founded by friends who I met at Zero-Knowledge Systems, and I think that they refunded my order. So I may be biased.

Email Security Myths

My buddy Curt Hopkins is writing about the Petraeus case, and asked:

I wonder, in addition to ‘it’s safe if it’s in the draft folder,’ how many additional technically- and legally-useless bits of sympathetic magic that people regularly use in the belief that it will save them from intrusion or discovery, either based on the law or on technology?

In other words, are there a bunch of ‘old wives’ tales’ you’ve seen that people believe will magically ensure their privacy?

I think it’s a fascinating question: what are the myths of email security, and for the New School bonus round, how would we test their efficacy?

I should be clear that he’s writing for The Daily Dot, and would love our help [for his follow-up article].

[Updated with a fixed link.]

Effective training: Wombat's USBGuru

Many times when computers are compromised, the compromise is stealthy. Take a moment to compare that to being attacked by a lion. There, the failure to notice the lion is right there, in your face. Assuming you survive, you’re going to relive that experience, and think about what you can learn from it. But in security, you don’t have that experience to re-live, which inhibits your ability to form good models of the world. Put another way, our natural learning processes are inhibited.

Wombat Security makes a set of products that are designed to help with those natural learning processes. I like these folks for a variety of reasons, including their use of games, and their data-driven approach to the world. I’d like to be clear that I have no commercial connection to Wombat, I just like what they’re doing.

Their latest product, USBGuru, is a service that allows you to quickly create learning loops for the “USB in the parking lot” problem. It includes a way to create a USB stick with a small program on it. That program checks the username, and reports it to Wombat. This allows you to deliver training when the stick is inserted, or when the end user is tricked into running code. It also allows you to track when people fall for the attack, and (over time) measure whether the training is having an effect.
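To make the mechanism concrete, here is a minimal sketch of what such a stick’s payload might do: gather the logged-in username and report it to a tracking endpoint. The URL and payload format here are made up for illustration; Wombat’s actual service is not shown:

```python
import getpass
import json
import urllib.request

# Hypothetical reporting endpoint; the real service's URL and payload
# format belong to the training vendor and are assumptions here.
TRACKING_URL = "https://training.example/usbguru/report"


def build_report(username=None):
    """Collect the minimal identifying info a training stick might report."""
    return {
        "username": username or getpass.getuser(),
        "event": "usb_payload_executed",
    }


def send_report(url=TRACKING_URL):
    """POST the report; in a real campaign this ties the event to a user,
    enabling just-in-time training and longitudinal measurement."""
    data = json.dumps(build_report()).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Because the report names the user, the training can be delivered at the moment of failure, and repeat campaigns can measure whether fall-for rates decline over time.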

So there’s a “teachable moment”, training, and measurement. I think that’s a really cool combination, and want to encourage folks to both check out what Wombat’s USBGuru does, and compare it to other training programs they may have in place.

How Harvey Mudd Brings Women into CS

Back in October, I posted on “Maria Klawe on increasing Women in Technology.” Now the New York Times has a story, “Giving Women The Access Code:”

“Most of the female students were unwilling to go on in computer science because of the stereotypes they had grown up with,” said Zachary Dodds, a computer scientist at Mudd. “We realized we were helping perpetuate that by teaching such a standard course.”

To reduce the intimidation factor, the course was divided into two sections — “gold,” for those with no prior experience, and “black” for everyone else. Java, a notoriously opaque programming language, was replaced by a more accessible language called Python. And the focus of the course changed to computational approaches to solving problems across science.

“We realized that we needed to show students computer science is not all about programming,” said Ran Libeskind-Hadas, chairman of the department. “It has intellectual depth and connections to other disciplines.”

Well, sometimes computer science has depth and connections to reality. Other times we get wrapped around some little technical nit, and lose sight of the larger picture. Or sometimes, we just talk about crypto and key lengths.

If we want more diversity in computer security, we have to look around, see what’s working and take lessons from it. Otherwise, we’re going to stay on the hamster wheels. There’s excellent evidence that more diversity helps you solve certain classes of problems better. (See, for example, Scott Page’s “The Difference.”)