Hoff on AWS

Hoff’s blog post “Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed” is great on a whole bunch of levels. If you haven’t read it, go do that.

The first thing I appreciated is that he directly confronts the possibility of his own confirmation bias.

The next thing I liked is that he’s not looking at just the technology, but the technology situated in a set of cultural assumptions.

Then he gets to the crux: “Either we learn to walk without them or simply not move forward.”

However, at the end, I get a little concerned when he gets to a quote from Werner Vogels: “There’s no excuse not to use fine grained security to make your apps secure from the start.” Now, that’s a quote embedded in a tweet, so I’m going to feel a little safer in raising issues, because I can honestly say that I hope there’s some additional context.

The reason not to rely only on fine-grained security is that fine-grained security is hard to get right. It’s hard to conceptualize, it’s hard to implement well, and it’s hard to test. I’d love to hear more (from Hoff, Werner, or someone who was in the talk) about what else gets embedded and built to offer defense in depth.
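To make “fine grained” concrete, here’s a toy sketch of the kind of per-principal, per-action authorization table such a design implies. This is my illustration, not anything from the talk, and every name in it is hypothetical:

```python
# Toy illustration of fine-grained authorization (all names hypothetical).
# Every (role, action, resource) triple needs a correct policy entry, which
# is part of why this is hard to conceptualize, implement, and test.

POLICY = {
    ("accountant", "read",  "invoice"): True,
    ("accountant", "write", "invoice"): True,
    ("intern",     "read",  "invoice"): True,
    ("intern",     "write", "invoice"): False,
}

def is_authorized(role: str, action: str, resource: str) -> bool:
    """Default-deny: anything not explicitly granted is refused."""
    return POLICY.get((role, action, resource), False)

assert is_authorized("accountant", "write", "invoice")
assert not is_authorized("intern", "write", "invoice")
assert not is_authorized("intern", "delete", "invoice")  # unlisted, so denied
```

Even this toy has failure modes (a missing entry silently denies; a mistyped role grants nothing, or the wrong thing), and the table grows as the product of roles, actions, and resources. That’s the difficulty I mean.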

Me, I’d like to see evidence: how do apps fare under different design philosophies? Say group A is apps built with fine-grained security from the start, group B is apps built with a firewall assumption, and group C is apps developed by a team that’s been using a modern security development lifecycle for more than a year. How does A fare compared to the others? (Pssst! Looking at you, Jeremiah!) There are of course other comparisons we could run, but what’s important is that we look to data, rather than the opinions of well-regarded folks.

[Update: Jeremiah responded, “please better define ‘modern security development lifecycle’ and ‘fine grained security’ and I’ll go find out,” which I suppose puts the ball in my court. Modern SDL is relatively easy: is the organization using either the Microsoft SDL Optimization Model or the BSIMM to evaluate their development activities? Fine-grained security is harder for me to provide a “survey question” for. Perhaps “does your app have a security kernel that performs authorization tests?” or “does your app support a policy language to control authorization activity?”

I would love your thoughts on how to make surveyable propositions about things the survey participants should know about.]

The Gavle Goat is Getting Ready to Burn!

The Telegraph reports that the Gavle Goat for 2012 is up, and surrounded by guards, cameras, flame retardants, and arsonists.

Emergent Chaos has reporters on the ground (well, on the internet), ready to report on this holiday story of a town, a goat, and an international conspiracy of drunken arsonists. Stay tuned!

This year’s goat is shown below in its pre-fire state. Note the pre-positioned fire extinguishers surrounding it, along with what one might describe as an altogether insufficient fence.
[Image: Gavle Goat 2012]

[Update: It turns out that the goat is blogging this year, in a mix of English and Swedish.]

South Carolina

It’s easy to feel sympathy for the many folks impacted by the hacking of South Carolina’s Department of Revenue. With 3.6 million taxpayer social security numbers stolen, those people are the biggest victims, and I’ll come back to them. It’s also easy to feel sympathy for the folks in IT and IT management, all the way up to the Governor. The folks in IT made a call to use Trustwave for PCI monitoring, because Trustwave offered PCI compliance. They also made the call to not use a second monitoring system. That decision may look easy to criticize, but I think it’s understandable. Having two monitoring systems means more than doubling staff workloads in responding. (You have to investigate everything, and then you have to correlate and understand discrepancies.)

At the same time, I think it’s possible to take important lessons from what we do know. Each of these is designed to be a testable claim.

Compliance doesn’t prevent hacking.

In his September letter to Haley, [State Inspector General] Maley concluded that while the systems of cabinet agencies he had finished examining could be tweaked and there was a need for a statewide uniform security policy, the agencies were basically sound and the Revenue Department’s system was the “best” among them. (“Foreign hacker steals 3.6 million Social Security numbers from state Department of Revenue”, Tim Smith, Greenville Online)

I believe the reason that compliance doesn’t prevent hacking is that compliance systems are developed without knowledge of what really goes wrong. That is, they lack feedback loops. They lack testability. They lack any mechanism for ensuring that effort has payoff. (My favorite example is password expiration times. Precisely how much more secure are you with a 60-day expiration policy versus a 120-day policy? Is such a policy worth doubling staff effort?)
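To put a number on the expiration example, here’s a back-of-the-envelope model. The assumptions are mine and deliberately crude: compromises land uniformly at random in time, and a stolen password stays useful until the next forced rotation.

```python
# Toy model of password-expiration benefit (illustrative assumptions, not data).
# Assume a compromise lands uniformly at random in a rotation window, and the
# stolen password stays useful until the next forced change.

def expected_exposure_days(rotation_days: float) -> float:
    # On average, the compromise lands halfway through the window.
    return rotation_days / 2.0

for policy_days in (60, 120):
    print(f"{policy_days}-day policy: ~{expected_exposure_days(policy_days):.0f} days exposed")
# 60-day policy: ~30 days exposed
# 120-day policy: ~60 days exposed
```

Under those assumptions, halving the window buys you thirty fewer days of modeled exposure, at the cost of twice the reset workload (and, plausibly, weaker passwords chosen under pressure). Whether that trade is worth it is exactly the kind of question compliance regimes don’t ask.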

You don’t know how your compliance program differs from the SC DoR’s.

I’m willing to bet that 90% of my readers do not know what exactly the SC DoR did to protect their systems. You might know that it was PCI (and Trustwave as a vendor). But do you know all the details? If you don’t know the details, how can you assess if your program is equivalent, better, or worse? If you can’t do that, can you sleep soundly?

But actually, that’s a red herring. Since compliance programs often contain a bunch of wasted effort, knowing how yours lines up to theirs is less relevant than you’d think. Maybe you’re slacking on something they put a lot of time into. Good for you! Or not, maybe that was the thing that would have stopped the attacker, if only they’d done it a little more. Comparing one to one is a lot less interesting than comparing to a larger data set.

We don’t know what happened in South Carolina

Michael Hicks, the director of the Maryland Cybersecurity Center at the University of Maryland, said states needed a clearer understanding of the attack in South Carolina.

“The only way states can raise the level of vigilance,” Mr. Hicks said, “is if they really get to the bottom of what really happened in this attack.” (“Hacking of Tax Records Has Put States on Guard”, Robbie Brown, New York Times.)

Mr. Hicks gets a New School hammer, for nailing that one.

Lastly, I’d like to talk about the first victims: the 3.6 million taxpayers. That’s 77% of the 4.6 million people in the state, which is plausibly the entire taxpaying population. We don’t know how much data was actually leaked. (What we know is a floor. Was it entire tax returns? Was it all the information that banks report? How much of it was shared data from the IRS?) We know that these victims are at long-term risk and have only short-term protection. We know that their SSNs are out there, and I haven’t heard that the Social Security Administration is offering them new ones. There’s a real, and under-discussed, difference between SSN breaches and credit card breaches. Let’s not even talk about biometric breaches here.

At the end of the day, there are a lot of victims of this breach. And while it’s easy to point fingers at the IT folks responsible, I’m starting to wonder if perhaps we’re all responsible. To the extent that few of us can answer Mr. Hicks’s question, to the extent that we don’t learn from one another’s mistakes, don’t we all make defending our systems harder? We should learn what went wrong, and we should recognize that not talking about root causes helps things go wrong again in the future.

Without detracting from the crime that happened in South Carolina, there’s a bigger crime if we don’t learn from it.

“Update”: We now know a fair amount
The above was written and accidentally not posted a few weeks ago. I’d like to offer up my thanks to the decision makers in South Carolina for approving Mandiant’s release of a public and technically detailed version of their report, which is short and fascinating. I’d also like to thank the folks at Mandiant for writing in clear, understandable language about what happened. Nicely done, folks!

Control-Alt-Hack: Now available from Amazon!

Amazon now has copies of Control Alt Hack, the card game that I helped Tammy Denning and Yoshi Kohno create. Complimentary copies for academics and those who won copies at Blackhat are en route.

[Image: Control-Alt-Hack]

From the website:

Control-Alt-Hack™ is a tabletop card game about white hat hacking, based on game mechanics by gaming powerhouse Steve Jackson Games (Munchkin and GURPS).

Age: 14+ years
Players: 3-6
Game Time: Approximately 1 hour

You and your fellow players work for Hackers, Inc.: a small, elite computer security company of ethical (a.k.a., white hat) hackers who perform security audits and provide consultation services. Their motto? “You Pay Us to Hack You.”

Your job is centered around Missions – tasks that require you to apply your hacker skills (and a bit of luck) in order to succeed. Use your Social Engineering and Network Ninja skills to break the Pacific Northwest’s power grid, or apply a bit of Hardware Hacking and Software Wizardry to convert your robotic vacuum cleaner into an interactive pet toy…no two jobs are the same. So pick up the dice, and get hacking!

Email Security Myths

My buddy Curt Hopkins is writing about the Petraeus case, and asked:

I wonder, in addition to ‘it’s safe if it’s in the draft folder,’ how many additional technically- and legally-useless bits of sympathetic magic that people regularly use in the belief that it will save them from intrusion or discovery, either based on the law or on technology?

In other words, are there a bunch of ‘old wives’ tales’ you’ve seen that people believe will magically ensure their privacy?

I think it’s a fascinating question–what are the myths of email security, and for the New School bonus round, how would we test their efficacy?

I should be clear that he’s writing for The Daily Dot, and would love our help [for his follow-up article].

[Updated with a fixed link.]

The Questions Not Asked on Passwords

So there’s a pair of stories on choosing good passwords in the New York Times. The first is (as I write this) the most emailed story on the site, “How to Devise Passwords That Drive Hackers Away.” It quotes both Paul Kocher and Jeremiah Grossman, both of whom I respect. There’s also a follow-on story, “Readers Respond: Password Hygiene and Headaches.” The latter quotes AgileBits somewhat extensively, and perhaps even ironically, given that I had to publicly disagree with them about how securely they store passwords.

These are solid stories. That people email them around is evidence that people want to do better at this. That goes against the common belief of security folks that people choose to be insecure and will choose dancing pigs over security.

But I think, for all that, there’s an important question that’s not being asked. How much help are these?

If I follow all nine elements of advice from Paul and Jeremiah, how much more secure will I be? If I’m only going to follow one, which should it be? If I take different advice, how does that compare? And are users rationally rejecting all of this as too hard?

First, we need to get a bit more specific about the problem. Is it account compromise? Is it password failings leading to account compromise? Does it include backup authentication mechanisms? I’ll assume the problem is unauthorized people being able to spoof the real account holder, and I’ll treat all of it as ‘shared secret’ authentication. That includes secret-question backup systems, in large part because they’re vulnerable to exactly the same threats as passwords (although the probability and effectiveness of the attacks probably differ).

There are a number of threats to shared-secret authentication schemes. I think we can categorize them as:

  • Finding (the Post-it attack, or divorcing spouses)
  • Online Attacks
  • Offline Attacks (including password leaks)
  • Phishing

Password leaks are a common problem these days, and they’re a problem because they enable offline attacks, ranging from lookups to rainbow tables to more complex cracking. But how common are they? How do they compare relative to the other classes of attacks?
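To make the offline case concrete, here’s a minimal sketch of a dictionary attack against a leak of unsalted MD5 hashes, a format several real leaks have used. The “leaked” hashes and wordlist below are invented for illustration:

```python
# Minimal sketch of an offline dictionary attack on unsalted MD5 hashes.
# The "leak" and the wordlist are invented for illustration.
import hashlib

leaked_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    "e10adc3949ba59abbe56e057f20f883e",  # md5("123456")
}

wordlist = ["letmein", "password", "123456", "dragon"]

for guess in wordlist:
    digest = hashlib.md5(guess.encode()).hexdigest()
    if digest in leaked_hashes:
        print(f"cracked: {guess!r} -> {digest}")
```

The attacker never touches the victim’s site while running this, which is what separates offline attacks from online guessing, and why salting and deliberately slow hashes matter so much once a leak happens.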

So to break down the important question a bit: At what frequency do these threats lead to compromised accounts? How effective is each piece of advice at mitigating that threat? What’s the effort involved in each? Without knowing those things, how should we assess the efficacy of the advice we’re giving?

My stock answer to all questions (more breach data!) doesn’t really work as well here. Unlike breach disclosures, where we’re talking about IT departments, some of these questions are informed by fairly private information.

I’d be interested in hearing your thoughts, especially on how we can get data to evaluate these questions.

The "Human Action" argument is not even wrong

Several commenters on my post yesterday have put forth some form of the argument that hackers are humans, humans are unpredictable, and therefore, information security cannot have a Nate Silver.

This is a distraction, as a moment’s reflection will show.

Muggings, rapes and murders all depend on the actions of unpredictable humans, and we can, in aggregate, study them. We can see if they are rising or falling. We can debate if one or another methodology is a superior way of measuring them. (For example, should we rely on police reports or survey people and see who’s been victimized?)

Now, internet crimes are different from non-internet crimes in a couple of important ways. It is far harder to properly attribute the crimes to particular actors, because the crimes are mediated by computers and networks. Another difference is that people generally don’t report internet crime to the police, and the police often suggest sweeping the crime under the rug. (This may relate to the challenges and expense of investigating internet crimes.) It’s possible that there are other differences.

But no one bringing up the internet-exceptionalism argument has explained why we can’t change the lack of reporting, why we couldn’t study repeated events in the aggregate, how internet crime differs from non-internet crime in ways that make it unmeasurable, or, in short, why information security can’t have a Nate Silver.

My question was really intended as a provocation, to get people to think about measurability in our field. But a reasonable objection is that I am hand-waving with respect to what we would have our Nate Silver do, so I want to be a bit more specific about that.

The best exemplar of this was Martin McKeay asking, “Give me an example of what you think we should be able to predict.” I think we should be able to predict the number of vulnerabilities discovered, the number of malware infections per million machines, or the odds that a web server in the top N sites will be serving up attack code. I think we should be able to discuss the odds that a given SSN has been leaked, and how that impacts its (ab)use as an authenticator. I also think (see my article, “The Evolution of Information Security”) that we should be able to say that organizations that invest in defense X have fewer incidents than those that invest in Y.

[Edited to add: The reason I didn’t want to give examples of what the Nate Silver of infosec would measure was to avoid debates in the weeds about one or another of those things. I think what is usefully measured will surprise many people, including me. By asking why in general, I want to encourage people to think about the over-arching problems, and I hope that we’ll hear more solutions to the general problem than we did yesterday.]

Where is Information Security's Nate Silver?

So by now everyone knows that Nate Silver predicted 50 out of 50 states in the 2012 election. Michael Cosentino has a great picture:

[Image: Nate Silver results]

Actually, he was one of many quants who predicted what was going to happen via meta-analysis of the data that was available. So here’s my question. Who’s making testable predictions of information security events, and doing better than a coin toss?
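For concreteness, “better than a coin toss” is itself testable: score probabilistic predictions with a proper scoring rule such as the Brier score, where lower is better and always answering 50/50 on binary events scores 0.25. A minimal sketch, with invented forecasts:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; always forecasting 0.5 on binary events scores 0.25.

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: four binary infosec predictions (1 = the event happened).
forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes = [1, 0, 1, 0]

print(brier(forecasts, outcomes))  # 0.0375, which beats...
print(brier([0.5] * 4, outcomes))  # 0.25, ...the coin toss
```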

Effective training: Wombat's USBGuru

Many times when computers are compromised, the compromise is stealthy. Take a moment to compare that to being attacked by a lion. There, the failure to notice the lion is right there, in your face. Assuming you survive, you’re going to relive that experience, and think about what you can learn from it. But in security, you don’t have that experience to re-live. That means that your ability to form good models of the world is inhibited. Another way of saying that is that our natural learning processes are inhibited.

Wombat Security makes a set of products that are designed to help with those natural learning processes. I like these folks for a variety of reasons, including their use of games, and their data-driven approach to the world. I’d like to be clear that I have no commercial connection to Wombat, I just like what they’re doing.

Their latest product, USBGuru, is a service that lets you quickly create learning loops for the “USB stick in the parking lot” problem. It includes a way to create a USB stick with a small program on it. That program checks the username and reports it to Wombat. This allows you to deliver training when the stick is inserted, or when the end user is tricked into running code. It also allows you to track who falls for the attack, and (over time) measure whether the training is having an effect.
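I don’t know how Wombat built their payload, and nothing below is their code; it’s a hypothetical sketch of the idea, with an invented reporting endpoint:

```python
# Hypothetical sketch of a USB-drop training beacon. Not Wombat's code;
# the reporting endpoint is invented for illustration.
import getpass
import platform
import urllib.parse
import urllib.request

REPORT_URL = "https://training.example.com/usb-report"  # hypothetical

def report_insertion() -> None:
    # Record who ran the program from the dropped stick, so training can be
    # delivered at the teachable moment and fall-for rates trended over time.
    data = urllib.parse.urlencode({
        "user": getpass.getuser(),
        "host": platform.node(),
    }).encode()
    urllib.request.urlopen(REPORT_URL, data=data, timeout=5)

if __name__ == "__main__":
    report_insertion()
    print("This was a security awareness exercise. See your training portal.")
```

The interesting part isn’t the program; it’s the loop around it: insertion triggers training at the teachable moment, and the reports land somewhere you can trend.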

So there’s a “teachable moment”, training, and measurement. I think that’s a really cool combination, and want to encourage folks to both check out what Wombat’s USBGuru does, and compare it to other training programs they may have in place.