Security 101: Show Your List!

Lately* I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”

Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
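For concreteness, here’s a minimal sketch of what “use a good library” can look like for password storage. It uses Python’s bcrypt package purely as an illustration; the choice of library and parameters is my assumption, not a claim about what LinkedIn ran or should have run.

```python
import bcrypt  # third-party library: pip install bcrypt


def hash_password(password: str) -> bytes:
    # gensalt() generates a per-password salt and a work factor, and both
    # are embedded in the returned hash, so salting isn't a separate
    # decision you can forget to make.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))


def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash using the salt embedded in stored_hash.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)
```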

What I want to argue about is the backwards-looking nature of these statements. I want to argue because I did some searching, and not one of the folks I searched for has committed to a list of what security 101 is, or of the “simple controls” every business should have.

This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.

So I’m going to make three requests for 2015:

  • If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
  • If you’re a reporter and someone tells you “X is security 101,” please ask them for their list.
  • Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.

Oh, and since it’s sauce for the gander, here’s my list for individuals:

  • Stay up to date–get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
  • Use a firewall that blocks most inbound traffic.
  • Ensure you have a working backup of your data.

(There are complex arguments about AV software, and a lack of agreement about how to effectively test it. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list. I won’t be telling people not to use AV software.)
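Since “a working backup” is the item on that list people most often assume rather than verify, here’s a minimal sketch of the kind of freshness check I have in mind. The backup location and the seven-day threshold are hypothetical, and checking age is no substitute for actually testing a restore.

```python
import os
import time

BACKUP_DIR = "/backups"   # hypothetical location of your backups
MAX_AGE_DAYS = 7          # hypothetical freshness threshold


def newest_backup_age_days(path: str) -> float:
    # Find the most recently modified file anywhere under the backup
    # directory. Assumes at least one backup file exists.
    newest = max(
        os.path.getmtime(os.path.join(root, name))
        for root, _, files in os.walk(path)
        for name in files
    )
    return (time.time() - newest) / 86400


if __name__ == "__main__":
    age = newest_backup_age_days(BACKUP_DIR)
    status = "WARNING" if age > MAX_AGE_DAYS else "OK"
    print(f"{status}: newest backup is {age:.1f} days old")
```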

*By “lately,” I meant in 2012, when I wrote this, right after the LinkedIn breach. But I recently discovered that I hadn’t posted it.

[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]

How to Ask Good Questions at RSA

So this week is RSA, and I wanted to offer up some advice on how to engage. I’ve already posted my “BlackHat Best Practices/Survival kit.”

First, if you want to ask great questions, pay attention. There are things more annoying than a question that was already answered while the questioner was tweeting, but you still don’t want to be that person.

Second, if you want to ask a good question, ask a question that you think others will want to hear answered. If your question is narrow, go up to the speaker afterwards.

Now, there are some generic best practice questions that I love to ask, and want to encourage you to ask.

  • You claimed “X”, but didn’t explain why. Could you briefly cover your methodology and data for that claim?
  • You said “X” is a best practice. Can you cover what practices you would cut to ensure there’s resources available to do “X”?
  • You said “if you get breached, you’ll go out of business.” Last year, 2600 companies announced data breaches. How many of them are out of business?
  • You said that “X” dramatically increased your organization’s security. Since we live in an era of ‘assume breach’, can I assume that your organization is now committed to publishing details of any breaches that happen despite X?
I’m sure there are other good questions. Please share your favorites, and I’ll try for a new post tomorrow.

Is there "Room for Debate?" in Breach Disclosure?

The New York Times has a “Room for Debate” on “Should Companies Tell Us When They Get Hacked?”

It currently has 4 entries, 3 of which are dramatically in favor of more disclosure. I’m personally fond of Lee Tien’s “We Need Better Notification Laws.”

My personal preference is of course (ahem) fascinating to you, since you’re reading this blog, but more seriously, it’s not what I expect anyone else to find interesting.

What’s interesting to me is that the only person who they could find to say no is Alexander Tabb, whose bio states that he “is a partner at TABB Group, a capital markets research and consulting firm.” I don’t want to insult Mr Tabb, and so found a fuller bio here, which includes “Mr. Tabb is an expert in the field of international affairs, with specialization in the developing world, crisis management, international security, supply chain security and travel safety and security. He joined Tabb Group in October 2004. [From 2001 to 2004] Mr. Tabb served as an Associate Managing Director of Security Services Group of Kroll Inc., the international risk consulting company.”

I find it fascinating that someone of his background is the naysayer. Perhaps the Times was unable to find anyone practicing in information security to claim that companies should not tell us when they’ve been hacked?

The High Price of the Silence of Cyberwar

A little ways back, I was arguing [discussing cyberwar] with thegrugq, who said “[Cyberwar] by it’s very nature is defined by acts of espionage, where all sides are motivated to keep incidents secret.”

I don’t agree that all sides are obviously motivated to keep incidents secret, and I think that it’s worth asking, is there a strategic advantage to a policy of disclosure?

Before we get there, there’s a somewhat obvious objection that we should discuss, which is that the defender instantly revealing all the attacks they’ve detected is a great advantage for the attacker. So when I talk about disclosing attacks, I want to include some subtlety, where the defender is taking two steps to counter that advantage. The first step is to randomly select some portion of attacks (say, 20-50%) to keep secret, and second, randomly delay disclosure until sometime after cleanup.
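To make those two steps concrete, here’s a toy sketch of that disclosure policy. The 30% secrecy fraction and the six-month delay window are placeholder values standing in for whatever a real policy would choose.

```python
import random
from datetime import date, timedelta


def disclosure_plan(cleanup_date: date,
                    keep_secret_fraction: float = 0.3,
                    max_delay_days: int = 180):
    """Decide whether and when to disclose a detected attack.

    Step 1: keep a random fraction of incidents secret, so attackers
    can't assume that silence means they went undetected.
    Step 2: for the rest, disclose at a random point after cleanup,
    so the disclosure date doesn't reveal the detection date.
    """
    if random.random() < keep_secret_fraction:
        return None  # this incident stays secret
    delay = timedelta(days=random.randint(1, max_delay_days))
    return cleanup_date + delay
```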

With those two tactical considerations in place, a defender can gain several advantages by revealing attacks.

The first advantage is deterrence. If a defender regularly reveals attacks which have taken place, and reveals the attack tools, the domains, the IP addresses and the software installed, then the attacker’s cost of attacking that defender (compared to other defenders) is higher, because those tools will be compromised. That has a deterring effect.

The second advantage is credibility. In today’s debate about cyberwar, all information disclosed seems to come with an agenda. Everyone evaluating the information is forced to look not only at the information, but the motivation for revealing that information. Worse, they can question if the information not revealed is shaped differently from what is revealed. A defender who reveals information regularly and in accordance with a policy will gain credibility, and with it, the ability to better influence the debate.

There’s a third advantage, which is that of improving the information available to all defenders, but that only accrues to the revealer to the extent that others don’t free ride. Since I’m looking to the advantages that accrue to the defender, we can’t count it. However, to the extent that a government cares about the public good, this should weigh in their decision making process.

The United States, like many liberal democracies, has a long history of disclosing a good deal of information about our weaponry and strategies. The debates over nuclear weapons were public policy debates in which we knew how many weapons each side had, how big they were, etc. What’s more, the key thinking about deterrence and mutually assured destruction which informed US policy was all public. That approach helped us survive a 50 year cold war, with weapons of unimaginable destructive potential.

Simultaneously, secrecy around what’s happening pushes the public policy discussions towards looking like ‘he said, she said,’ rather than discussions with facts involved.

Advocates of keeping secret the attacks in which they’ve been victimized, keeping doctrines secret, or keeping strategic thinking secret need to move beyond the assumption that everything about cyberwar is secret, and start justifying the secrecy of anything beyond current operations.

[As thegrugq doesn’t have a blog, I’ve posted his response at http://newschoolsecurity.com/2013/01/on-disclosure-of-intrusion-events-in-a-cyberwar/]

Infosec Lessons from Mario Batali's Kitchen

There was a story recently on NPR about kitchen waste, “No Simple Recipe For Weighing Food Waste At Mario Batali’s Lupa.” Now, normally, you’d think that a story on kitchen waste has nothing to do with information security, and you’d be right. But as I half listened to the story, I realized that it in fact was a story about a fellow, Andrew Shakman, and his quest to change business processes to address environmental priorities.

I also realized that I’ve heard him in meetings. Ok, it wasn’t Andrew, and the subject wasn’t food waste, but I think that makes the story all the more powerful for information security, because it’s easier to look at an apparently disconnected story, understand it, and then bring the lessons on home:

“Once we begin reducing food waste, we are spending less money on food because we’re not buying food to waste it; we’re spending less money on labor; we’re spending less money on energy to keep that food cold and heat it up; we’re spending less on waste disposal,” says Shakman.

That’s right! Managing food waste doesn’t have to be a tax, it can be a profit center, and that’s awesome. Back to the story:

Lupa’s Chef di Cuisine Cruz Goler spent a couple of months working with the system. But he ran into some problems. After the first week, some of his staff just stopped weighing the food. But Goler says he didn’t want to “break their chops about some sort of vegetable scrap that doesn’t really mean anything.” Shakman believes those scraps do mean something when they add up over time. He says it’s just a matter of making the tracking a priority, even when a restaurant is really busy. “When we get busy, we don’t stop washing our hands; when we get busy, we don’t cut corners in quality on the plate,” says Shakman.

That’s right, too! We can declare priorities, and if only our thing is declared a priority, it’ll win! What’s more, what’s a priority is a matter of executive sponsorship. The fact that the health department will be upset if you don’t wash your hands — that’s just compliance. Imperfectly plated food? Look, people are at a restaurant to eat, not admire the food, and that plate’s gonna be all smudged up in just a minute. In other words, those priorities are driven by either the customer or an external party. No argument that any internal or consulting party brings in will match those. They’re priority 1, and that’s a small set of requirements.

But for me, the most heartbreaking quote came after the chef decided not to use the system in that restaurant:

Despite the failure of LeanPath in the Lupa kitchen, Shakman is still convinced his system can save restaurants money. But he’s learned that the battle against food waste, like so many battles people fight, has to start with winning hearts and minds.

It’s true, if we just win hearts and minds, people will re-prioritize their tasks. To an extent. But perhaps the issue is that to win hearts and minds, we sometimes need to listen to the objections, and find ways to address them. For example, if onion skins aren’t even used in stock, maybe those can just be dumped on a normal day. Maybe there’s a way to modify the system to only weigh scrap on 1 day out of 7, so that the cost of the system is lessened. I talked about similar issues in security in my “Engineers Are People, Too” talk, and the Elevation of Privilege game is an example of how to make a set of threat modeling tasks more attractive.
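As a sketch of that “weigh scrap one day out of seven” idea, sampling a measurement rather than taking it continuously trades accuracy for a lower cost of operating the control. The weights below are invented, and the estimator is deliberately naive.

```python
import random
import statistics

# Hypothetical daily scrap weights in kg for one week (heavier weekend).
week = [12.0, 9.5, 14.2, 11.1, 30.4, 28.9, 13.3]


def estimate_weekly_total(daily_weights, sample_size=1):
    # Weigh only `sample_size` randomly chosen days, then scale up.
    sampled = random.sample(daily_weights, sample_size)
    return statistics.mean(sampled) * len(daily_weights)


print("actual total:        ", sum(week))
print("estimate from 1 day: ", estimate_weekly_total(week, 1))
print("estimate from 3 days:", estimate_weekly_total(week, 3))
```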

Lastly, I want to be clear that I’m using Mr. Shakman and his company as a strawman to critique behaviors I see in information security. Mr. Shakman is probably a great guy and dedicated entrepreneur who’s been taken way out of context in that story. From the company’s website and blog, it looks like they have some happy customers. I mean them no harm, think what they’re trying to do is an awesome goal, and I wish them the best of luck.

The Boy Who Cried Cyber Pearl Harbor

There is, yet again, someone in the news talking about a cyber Pearl Harbor.

I wanted to offer a few points of perspective.

First, on December 6th, 1941, the United States was at peace. There were worries about the future, but no belief that a major attack was imminent, and certainly not a sneak attack. Today, it’s very clear that successful attacks are a regular and effectively accepted part of the landscape. (I’ll come back to the accepted part.)

Second, insanity. One excellent definition of insanity is doing the same thing over and over again and expecting different results. Enough said? Maybe not. People have been using this same metaphor since at least 1991 (thanks to @mattdevost for the link). So those Zeros have got to be the slowest cyber-planes in the history of the cybers. It’s insanity to keep using the metaphor, but it’s also insanity to expect different results from the same approaches that have brought us to where we are.

Prime amongst the ideas that we need to jettison is that we can get better in an ivory tower, or a secret fusion center where top men are thinking hard about the problem.

We need to learn from each other’s experiences and mistakes. If we want to avoid having the Tacoma Narrows bridge fall again, we need to understand what went wrong. When our understanding is based on secret analysis by top men, we’re forced to hope that they got the analysis right. When they show their work, we can assess it. When they talk about a specific bridge, we can bring additional knowledge and perspective to bear.

For twenty years, we’ve been hearing about these problems in these old school ways, and we’re still hearing about the same problems and the same risks.

We’ve been seeing systems compromised, and accepting it.

We need to stop talking about Pearl Harbor, and start talking about Aurora and its other victims. We need to stop talking about Pearl Harbor, and start talking about RSA, not in terms of cyber-ninjas, but about social engineering, the difficulties of upgrading or the value of defense in depth. We need to stop talking about Pearl Harbor and start talking about Buckshot Yankee, and perhaps why all those SIPRnet systems go unpatched. We need to stop talking about the Gawker breach passwords, and start talking about how they got out.

We need to stop talking in metaphors and start talking specifics.

PS: If you want me to go first, ok. Here ya go.

Your career is over after a breach? Another Myth, Busted!

I’m a big fan of learning from our experiences around breaches. Claims like “your stock will fall”, or “your customers will flee” are shown to be false by statistical analysis, and I expect we’d see the same if we looked at people losing their jobs over breaches. (We could do this, for example, via LinkedIn and DatalossDB.)

There’s another myth that’s out there about what happens after a breach, and that is that the breach destroys the career of the CISO and the entire security department. And so I’m pleased today to be able to talk about that myth. Frequently, when I bring up breaches and lessons we can learn, people bring up ChoicePoint as the ultimate counterexample. Now, ChoicePoint is interesting for all sorts of reasons, but from a stock price perspective, they’re a statistical outlier. And so I’m extra pleased to be able to discuss today’s lesson with ChoicePoint as our data point.

Last week, former ChoicePoint CISO Rich Baich was [named Wells Fargo’s] first chief information security officer. Congratulations, Rich!

Now, you might accuse me of substituting anecdote for data and analysis, and you’d be sort of right. One data point doesn’t plot a line. But not all science requires plotting a line. Oftentimes, a good experiment shows us things by producing a result that’s impossible under the standard model. Dropping things from the Tower of Pisa shows that objects fall at the same speed, regardless of weight.

So Wells Fargo’s announcement is interesting because it provides a data point that invalidates the hypothesis “If you have a breach, your career is over.” Now, some people, less clever than you, dear reader, might try to retreat to a weaker claim “If you have a breach, your career may be over.” Of course, that “may” destroys any predictive value that the claim may have, and in fact, the claim “If [X], your career may be over,” is equally true, and equally useless, and that’s why you’re not going there.

In other words, if a breach always destroys a career, shouldn’t Rich be flipping burgers?

There are three more variant hypotheses we can talk about:

  • “If you have a breach, your career will require a long period of rehabilitation.” But Rich was leading the “Global Cyber Threat and Vulnerability Management practice” for Deloitte and Touche, which is not exactly a backwater.
  • “If you have a breach, you will be fired for it.” That one is a bit trickier. I’m certainly not going to assert that no one has ever been fired for a breach happening. But it’s also clearly untrue. The weaker version is “if you have a breach, you may be fired for it”, and again, that’s not useful or interesting.
  • “If you have a breach, it will improve your career.” That’s also obviously false, and the weaker version isn’t falsifiable. But perhaps the lessons learned, focus, and publicity around a breach can make it helpful to your career. It’s not obviously dumber than the opposite claims.

So overall, what is useful and interesting is that yet another myth around breaches turns out to be false. So let’s start saying a bit more about what went wrong, and learning more about what’s going wrong.

Finally, again, congratulations and good luck to Rich in his new role!

Aitel on Social Engineering

Yesterday, Dave Aitel wrote a fascinating article “Why you shouldn’t train employees for security awareness,” arguing that money spent on training employees about awareness is wasted.

While I don’t agree with everything he wrote, I submit that your opinion on this (and mine) is irrelevant. The key question is “Is money spent on security awareness a good way to spend your security budget?”

The key is we have data now. As someone comments:

[Y]ou somewhat missed the point of the phishing awareness program. I do agree that annual training is good at communicating policy – but horrible at raising a persistent level of awareness. Over the last 5 years, our studies have shown that annual IA training does very little to improve the awareness of users against social engineering based attacks (you got that one right). However we have shown a significant improvement in awareness as a result of running the phishing exercises (Carronade). These exercises sent fake phishing emails to our cadet population every few weeks (depends on the study). We demonstrated a very consistent reduction to an under 5% “failure” after two iterations. This impact is short lived though. After a couple months of not running the study, the results creep back up to 40% failure.

So, is a reduction in phishing failure to under 5% a good investment? Are there other investments that bring your failure rates lower?
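One way to start answering that is to turn the numbers in that comment into a rough model. The sketch below assumes the failure rate drops to about 5% right after an exercise and climbs linearly back to 40% over a couple of months; the linear decay, the weekly attempt volume, and the cadences are my invented assumptions, there to show the shape of the calculation rather than its answer.

```python
def expected_failures_per_year(exercises_per_year: int,
                               baseline: float = 0.40,
                               post_exercise: float = 0.05,
                               weeks_to_relapse: int = 9,
                               attempts_per_week: float = 100.0) -> float:
    # Failure rate drops to post_exercise right after each exercise and
    # climbs linearly back to baseline over weeks_to_relapse weeks.
    interval = 52 // exercises_per_year if exercises_per_year else None
    total = 0.0
    for week in range(52):
        weeks_since = week % interval if interval else weeks_to_relapse
        fraction_relapsed = min(1.0, weeks_since / weeks_to_relapse)
        rate = post_exercise + (baseline - post_exercise) * fraction_relapsed
        total += rate * attempts_per_week
    return total


for cadence in (0, 4, 12, 26):
    print(f"{cadence:2d} exercises/year -> "
          f"~{expected_failures_per_year(cadence):.0f} successful phishes")
```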

As I pointed out in “The Evolution of Information Security” (context), we can get lots more data with almost zero investment.

If someone (say, GAO) obtains data on US Government department training programs, and cross-correlates that with incidents being reported to US-CERT, then we can assess the efficacy of those training programs.
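A minimal sketch of what that cross-correlation might look like is below. The departments, training hours, and incident rates are entirely made up; the point is only the shape of the analysis, and a correlation here wouldn’t establish causation.

```python
import statistics

# Hypothetical per-department data: annual awareness-training hours per
# employee, and incidents reported to US-CERT per 1,000 employees.
departments = [
    ("Dept A", 1.0, 9.0),
    ("Dept B", 4.0, 7.5),
    ("Dept C", 0.5, 10.2),
    ("Dept D", 6.0, 6.1),
    ("Dept E", 2.5, 8.8),
]

training_hours = [hours for _, hours, _ in departments]
incident_rates = [rate for _, _, rate in departments]

# Pearson correlation; statistics.correlation requires Python 3.10+.
r = statistics.correlation(training_hours, incident_rates)
print(f"correlation between training hours and incident rate: {r:.2f}")
```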

Opinions, including mine, Dave’s, and yours, just ain’t relevant in the face of data. We can answer well-framed questions of “which of these programs best improves security” and “is that improvement superior to other investments?”

The truth, in this instance, won’t set us free. Dave Mortman pointed out that a number of regulations may require awareness training as a best practice. But if we get data, we can, if needed, address the regulations.

If you’re outraged by Dave’s claims, prove him wrong. If you’re outraged by the need to spend money on social engineering, prove it’s a waste.

Put the energy to better use than flaming over a hypothesis.

Feynman on Cargo Cult Science

On Twitter, Phil Venables said “More new school thinking from the Feynman archives. Listen to this while thinking of InfoSec.”

During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas–which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked–or very little of it did.

But even today I meet lots of people who sooner or later get me into a conversation about UFOS, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

It’s excellent advice. Take a listen, and think how it applies to infosec.

Lean Startups & the New School

On Friday, I watched Eric Ries talk about his new Lean Startup book, and wanted to talk about how it might relate to security.

Ries conceives of startups as businesses operating under conditions of high uncertainty, which includes things you might not think of as startups. In fact, he thinks that startups are everywhere, even inside of large businesses. You can agree or not, but suspend skepticism for a moment. He also says that startups are really about management and good decision making under conditions of high uncertainty.

He tells the story of IMVU, a startup he founded to make 3D avatars as a plugin for instant messenger systems. He walked through a bunch of the reasoning behind the decisions they’d made, and then said every single thing he’d said was wrong. He said that the key was to learn the lessons faster to focus in on the right thing–that in that case, they could have saved 6 months by just putting up a download page and seeing if anyone wanted to download the client. They wouldn’t have even needed a 404 page, because no one ever clicked the download button.

The key lesson he takes from that is to look for ways to learn faster, and to focus on pivoting towards good business choices. Ries defines a pivot as one turn through a cycle of “build, measure, learn:”

[Image: the build, measure, learn cycle]

Ries jokes about how we talk about “learning a lot” when we fail. But we usually fail to structure our activities so that we’ll learn useful things. And so under conditions of high uncertainty, we should do things that we think will succeed, but if they don’t, we can learn from them. And we should do them as quickly as possible, so if we learn we’re not successful, we can try something else. We can pivot.

I want to focus on how that might apply to information security. In security, we have lots of ideas, and we’ve built lots of things. We start to hit a wall when we get to measurement. How much of what we built changed things? (I’m jumping to the assumption that someone wanted what you built enough to deploy it. That’s a risky assumption, and one Ries pushes against with good reason.) When we get to measuring, we want data on how much your widget changed things. And that’s hard. The threat environment changes over time. Maybe all the APTs were on vacation last week. Maybe all your protestors were off Occupying Wall Street. Maybe you deployed the technology in a week when someone dropped 34 0days on your SCADA system. There are a lot of external factors that can be hard to see, and so the data can be thin.

That thin data is something that can be addressed. When doctors study new drugs, there’s likely going to be variation in how people eat, how they exercise, how well they sleep, and all sorts of things. So they study lots of people, and can learn by comparing one group to another group. The bigger the study, the less likely that some strange property of the participants is changing the outcome.
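Here’s a minimal sketch of that kind of group comparison translated to security, with invented yearly incident counts for organizations that did and didn’t deploy some control. A real study would add a significance test and control for confounders; the point is just the comparison structure that secrecy denies us.

```python
import statistics

# Hypothetical yearly incident counts for two groups of organizations.
with_control = [3, 5, 2, 4, 6, 3, 4, 2]       # deployed the widget
without_control = [6, 8, 5, 9, 7, 6, 10, 5]   # did not


def summarize(label: str, counts: list) -> None:
    print(f"{label}: n={len(counts)}, "
          f"mean={statistics.mean(counts):.1f}, "
          f"stdev={statistics.stdev(counts):.1f}")


summarize("with control   ", with_control)
summarize("without control", without_control)
# With more organizations in each group, one-off noise (vacationing APTs,
# a bad 0-day week) averages out, and any real effect becomes visible.
```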

But in information security, we keep our activities and our outcomes secret. We could tell you, but first we’d have to spout clichés. We can’t possibly tell you what brand of firewall we have; it might help attackers who don’t know how to use netcat. And we certainly can’t tell you how attackers got in; we have to wait for them to tell you on Pastebin.

And so we don’t learn. We don’t pivot. What can we do about that?

We can look at the many, many people who have announced breaches, and see that they didn’t really suffer. We can look at work like what SensePost has offered up at BlackHat, showing that our technology deployments can be discovered through participation in tech support forums.

We can look to measure our current activities, and see if we can test them or learn from them.

Or we can keep doing what we’re doing, and hope our best practices make themselves better.