Category: argument

The Boy Who Cried Cyber Pearl Harbor

There is, yet again, someone in the news talking about a cyber Pearl Harbor.

I wanted to offer a few points of perspective.

First, on December 6th, 1941, the United States was at peace. There were worries about the future, but no belief that a major attack was imminent, and certainly not a sneak attack. Today, it’s very clear that successful attacks are a regular and effectively accepted part of the landscape. (I’ll come back to the accepted part.)

Second, insanity. One excellent definition of insanity is doing the same thing over and over again and expecting different results. Enough said? Maybe not. People have been using this same metaphor since at least 1991 (thanks to @mattdevost for the link). So those Zeros have got to be the slowest cyber-planes in the history of the cybers. It’s insanity to keep using the metaphor, and it’s also insanity to expect that the same approaches that have brought us to where we are will take us anywhere different.

Prime amongst the ideas that we need to jettison is that we can get better in an ivory tower, or a secret fusion center where top men are thinking hard about the problem.

We need to learn from each other’s experiences and mistakes. If we want to avoid having the Tacoma Narrows bridge fall again we need to understand what went wrong. When our understanding is based on secret analysis by top men, we’re forced to hope that they got the analysis right. When they show their work, we can assess it. When they talk about a specific bridge, we can bring additional knowledge and perspective to bear.

For twenty years, we’ve been hearing about these problems in these old school ways, and we’re still hearing about the same problems and the same risks.

We’ve been seeing systems compromised, and accepting it.

We need to stop talking about Pearl Harbor, and start talking about Aurora and its other victims. We need to stop talking about Pearl Harbor, and start talking about RSA, not in terms of cyber-ninjas, but about social engineering, the difficulties of upgrading or the value of defense in depth. We need to stop talking about Pearl Harbor and start talking about Buckshot Yankee, and perhaps why all those SIPRnet systems go unpatched. We need to stop talking about the Gawker breach passwords, and start talking about how they got out.

We need to stop talking in metaphors and start talking specifics.

PS: If you want me to go first, ok. Here ya go.

Your career is over after a breach? Another Myth, Busted!

I’m a big fan of learning from our experiences around breaches. Claims like “your stock will fall”, or “your customers will flee” are shown to be false by statistical analysis, and I expect we’d see the same if we looked at people losing their jobs over breaches. (We could do this, for example, via LinkedIn and DatalossDB.)
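
To make that concrete, here’s a minimal sketch of how such a study might be framed, assuming someone hand-built a table joining DatalossDB breach records to publicly visible career data; every field and value below is hypothetical:

```python
# Hypothetical sketch: does a breach predict the security lead losing their job?
# Assumes a hand-built dataset joining DatalossDB breach records with publicly
# visible career data (e.g., LinkedIn); every field and value here is invented.
from dataclasses import dataclass


@dataclass
class BreachRecord:
    org: str
    ciso_left_within_year: bool  # did the security lead depart within a year?


def departure_rate(records):
    """Fraction of breached orgs whose security lead left within a year."""
    if not records:
        return float("nan")
    return sum(r.ciso_left_within_year for r in records) / len(records)


sample = [
    BreachRecord("ExampleCo", False),
    BreachRecord("OtherCorp", True),
    BreachRecord("ThirdOrg", False),
]
# To mean anything, this rate has to be compared with baseline CISO turnover
# at comparable organizations that did not report a breach.
print(f"Post-breach departure rate: {departure_rate(sample):.0%}")
```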

There’s another myth that’s out there about what happens after a breach, and that is that the breach destroys the career of the CISO and the entire security department. And so I’m pleased today to be able to talk about that myth. Frequently, when I bring up breaches and lessons we can learn, people bring up ChoicePoint as the ultimate counterexample. Now, ChoicePoint is interesting for all sorts of reasons, but from a stock price perspective, they’re a statistical outlier. And so I’m extra pleased to be able to discuss today’s lesson with ChoicePoint as our data point.

Last week, former ChoicePoint CISO Rich Baich was named Wells Fargo’s first chief information security officer. Congratulations, Rich!

Now, you might accuse me of substituting anecdote for data and analysis, and you’d be sort of right. One data point doesn’t plot a line. But not all science requires plotting a line. Oftentimes, a good experiment shows us things by being impossible under the standard model. Dropping things from the tower of Pisa shows that objects fall at the same speed, regardless of weight.

So Wells Fargo’s announcement is interesting because it provides a data point that invalidates the hypothesis “If you have a breach, your career is over.” Now, some people, less clever than you, dear reader, might try to retreat to a weaker claim “If you have a breach, your career may be over.” Of course, that “may” destroys any predictive value that the claim may have, and in fact, the claim “If [X], your career may be over,” is equally true, and equally useless, and that’s why you’re not going there.

In other words, if a breach always destroys a career, shouldn’t Rich be flipping burgers?

There are three more variant hypotheses we can talk about:

  • “If you have a breach, your career will require a long period of rehabilitation.” But Rich was leading the “Global Cyber Threat and Vulnerability Management practice” for Deloitte and Touche, which is not exactly a backwater.
  • “If you have a breach, you will be fired for it.” That one is a bit trickier. I’m certainly not going to assert that no one has ever been fired over a breach. But as a universal claim, it’s also clearly untrue. The weaker version is “if you have a breach, you may be fired for it”, and again, that’s not useful or interesting.
  • “If you have a breach, it will improve your career.” That’s also obviously false, and the weaker version isn’t falsifiable. But perhaps the lessons learned, focus, and publicity around a breach can make it helpful to your career. It’s not obviously dumber than the opposite claims.

So overall, what is useful and interesting is that yet another myth around breaches turns out to be false. So let’s start saying a bit more about what went wrong, and learning more about what’s going wrong.

Finally, again, congratulations and good luck to Rich in his new role!

Aitel on Social Engineering

Yesterday, Dave Aitel wrote a fascinating article “Why you shouldn’t train employees for security awareness,” arguing that money spent on security awareness training for employees is wasted.

While I don’t agree with everything he wrote, I submit that your opinion on this (and mine) is irrelevant. The key question is “Is security awareness training a good way to spend your security budget?”

The key is we have data now. As someone comments:

[Y]ou somewhat missed the point of the phishing awareness program. I do agree that annual training is good at communicating policy – but horrible at raising a persistent level of awareness. Over the last 5 years, our studies have shown that annual IA training does very little to improve the awareness of users against social engineering based attacks (you got that one right). However we have shown a significant improvement in awareness as a result of running the phishing exercises (Carronade). These exercises sent fake phishing emails to our cadet population every few weeks (depends on the study). We demonstrated a very consistent reduction to an under 5% “failure” after two iterations. This impact is short lived though. After a couple months of not running the study, the results creep back up to 40% failure.

So, is a reduction in phishing failure to under 5% a good investment? Are there other investments that bring your failure rates lower?
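
As a rough way to frame that question, here’s a back-of-the-envelope sketch using the failure rates quoted above; the headcount, phishing volume, and cost figures are invented for illustration:

```python
# Back-of-the-envelope framing of "is the phishing exercise a good investment?"
# The ~40% and ~5% failure rates come from the comment above; everything else
# (headcount, phish volume, program cost) is an invented assumption.
employees = 5_000
real_phish_per_user_per_year = 12      # assume roughly one real phish a month
baseline_failure = 0.40                # annual-training-only failure rate
exercise_failure = 0.05                # failure rate while exercises are running

baseline_clicks = employees * real_phish_per_user_per_year * baseline_failure
exercise_clicks = employees * real_phish_per_user_per_year * exercise_failure
clicks_averted = baseline_clicks - exercise_clicks

program_cost = 50_000                  # hypothetical annual cost of the exercises
print(f"Clicks averted per year: {clicks_averted:,.0f}")
print(f"Cost per averted click:  ${program_cost / clicks_averted:.2f}")
# The interesting comparison is whether another control (mail filtering, 2FA,
# hardware tokens) averts more incidents for the same spend.
```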

As I pointed out in “The Evolution of Information Security” (context), we can get lots more data with almost zero investment.

If someone (say, GAO) obtains data on US Government department training programs and cross-correlates it with incidents reported to US-CERT, then we can assess the efficacy of those training programs.
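
A minimal sketch of what that cross-correlation could look like, assuming (hypothetically) per-department tables of training effort and US-CERT incident counts; all the numbers below are made up:

```python
# Hypothetical sketch: correlate per-department awareness-training effort with
# incidents reported to US-CERT. Both tables below are invented for illustration.
from statistics import correlation  # Pearson's r; available in Python 3.10+

training_hours_per_employee = {
    "Dept A": 1.0, "Dept B": 4.0, "Dept C": 0.5, "Dept D": 2.5,
}
incidents_per_1000_employees = {
    "Dept A": 9.0, "Dept B": 3.0, "Dept C": 12.0, "Dept D": 6.0,
}

depts = sorted(training_hours_per_employee)
x = [training_hours_per_employee[d] for d in depts]
y = [incidents_per_1000_employees[d] for d in depts]
print(f"Pearson r: {correlation(x, y):.2f}")
# A strong negative correlation would be weak evidence that training helps;
# no correlation suggests the money might be better spent elsewhere. Either
# way, confounders (department size, mission, attacker interest) still matter.
```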

Opinions, including mine, Dave’s, and yours, just ain’t relevant in the face of data. We can answer well-framed questions of “which of these programs best improves security” and “is that improvement superior to other investments?”

The truth, in this instance, won’t set us free. Dave Mortman pointed out that a number of regulations may require awareness training as a best practice. But if we get data, we can, if needed, address the regulations.

If you’re outraged by Dave’s claims, prove him wrong. If you’re outraged by the need to spend money on social engineering, prove it’s a waste.

Put the energy to better use than flaming over a hypothesis.

Feynman on Cargo Cult Science

On Twitter, Phil Venables said “More new school thinking from the Feynman archives. Listen to this while thinking of InfoSec.”

During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas–which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked–or very little of it did.

But even today I meet lots of people who sooner or later get me into a conversation about UFOS, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

It’s excellent advice. Take a listen, and think how it applies to infosec.

Lean Startups & the New School

On Friday, I watched Eric Ries talk about his new Lean Startup book, and wanted to talk about how it might relate to security.

Ries conceives of startups as businesses operating under conditions of high uncertainty, which includes things you might not think of as startups. In fact, he thinks that startups are everywhere, even inside of large businesses. You can agree or not, but suspend skepticism for a moment. He also says that startups are really about management and good decision making under conditions of high uncertainty.

He tells the story of IMVU, a startup he founded to make 3D avatars as a plugin for instant messenger systems. He walked through a bunch of the reasons they’d made the decisions they had, and then said every single thing he’d said was wrong. He said that the key was to learn the lessons faster and focus in on the right thing–that in that case, they could have saved six months by just putting up a download page and seeing if anyone wanted to download the client. They wouldn’t even have needed a 404 page, because no one ever clicked the download button.

The key lesson he takes from that is to look for ways to learn faster, and to focus on pivoting towards good business choices. Ries defines a pivot as one turn through a cycle of “build, measure, learn:”

[Figure: the learn, build, measure cycle]

Ries jokes about how we talk about “learning a lot” when we fail. But we usually fail to structure our activities so that we’ll learn useful things. And so under conditions of high uncertainty, we should do things that we think will succeed, but if they don’t, we can learn from them. And we should do them as quickly as possible, so if we learn we’re not successful, we can try something else. We can pivot.

I want to focus on how that might apply to information security. In security, we have lots of ideas, and we’ve built lots of things. We start to hit a wall when we get to measurement. How much of what we built changed things? (I’m jumping to the assumption that someone wanted what you built enough to deploy it. That’s a risky assumption, and one Ries pushes against with good reason.) When we get to measuring, we want data on how much your widget changed things. And that’s hard. The threat environment changes over time. Maybe all the APTs were on vacation last week. Maybe all your protestors were off Occupying Wall Street. Maybe you deployed the technology in a week when someone dropped 34 0days on your SCADA system. There are a lot of external factors that can be hard to see, and so the data can be thin.

That thin data is something that can be addressed. When doctors study new drugs, there’s likely going to be variation in how people eat, how they exercise, how well they sleep, and all sorts of things. So they study lots of people, and can learn by comparing one group to another group. The bigger the study, the less likely that some strange property of the participants is changing the outcome.
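
If many organizations pooled even coarse outcome data, a simple two-group comparison would start to separate a control’s effect from background noise. Here is a minimal sketch of a two-proportion test; the counts are invented:

```python
# Minimal sketch of a two-proportion z-test: did organizations that deployed a
# given control suffer fewer compromises than those that didn't? The counts
# below are invented for illustration.
from math import sqrt, erfc


def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Return (z, two-sided p-value) for the difference between two proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, erfc(abs(z) / sqrt(2))  # normal-approximation p-value


# 120 orgs deployed the widget and 18 were compromised;
# 140 orgs did not and 38 were compromised.
z, p = two_proportion_z(18, 120, 38, 140)
print(f"z = {z:.2f}, p = {p:.3f}")
# The larger the pooled sample, the less likely that an odd week of APT
# vacations or a pile of fresh 0days explains the difference.
```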

But in information security, we keep our activities and our outcomes secret. We could tell you, but first we’d have to spout cliches. We can’t possibly tell you what brand of firewall we have, it might help attackers who don’t know how to use netcat. And we certainly can’t tell you how attackers got in, we have to wait for them to tell you on Pastebin.

And so we don’t learn. We don’t pivot. What can we do about that?

We can look at the many, many people who have announced breaches, and see that they didn’t really suffer. We can look at work like what Sensepost has offered up at BlackHat, showing that our technology deployments can be discovered through participation on tech support forums.

We can look to measure our current activities, and see if we can test them or learn from them.

Or we can keep doing what we’re doing, and hope our best practices make themselves better.

Are Lulz our best practice?

Over at Risky.biz, Patrick Gray has an entertaining and thought-provoking article, “Why we secretly love LulzSec:”

LulzSec is running around pummelling some of the world’s most powerful organisations into the ground… for laughs! For lulz! For shits and giggles! Surely that tells you what you need to know about computer security: there isn’t any.

And I have to admit, I’m taking a certain amount of pleasure in watching LulzSec. Whoever’s doing it is actually entertaining, when they’re not breaking the law. And even sometimes when they are. But at those times, they’re hurting folks, so it’s a little harder to chortle along.

Now Patrick’s argument is in the close, and I don’t want to ruin it, but I will:

So why do we like LulzSec?

“I told you so.”

That’s why.

The essence of this argument is that we in security have been telling management for a long time that things are broken, and we’ve been ignored. We poor, selfless martyrs. If only we’d been given the budget, we would have implemented a COBIT ISO27001 best practices program of making users leap through flaming hoops before they got their job done, and none of this would ever have happened. We here in the business of defending our organizations would love to have been effective, except we weren’t, and now we’re mother-freaking cheering a bunch of kids who can’t even spell LOL? Really? I told you so? Is that the best that we as a community will do?

Apparently.

We’re being out-communicated by folks who can’t spell.

Why are we being out-communicated? Because we expect management to learn to understand us, rather than framing problems in terms that matter to them. We come in talking about 0days, whale pharts, cross-site request jacking and a whole alphabet soup of things whose impact to the business are so crystal clear obvious that they go without saying.

And why are we being out-communicated? Because every time there’s a breach, we cover it up. We claim it wasn’t so bad. Or maybe that the poor, hapless American citizen will get tired of hearing about the breaches. And so we’re left with the Lulz crowd breaking and entering for shits and giggles to demonstrate that there are challenges in making things secure.

I don’t mean to sound like a broken record, but maybe we should start talking openly about breaches instead. Maybe then, we’d get somewhere without needing to see Sony, PBS, and InfraGard attacked. Heck, maybe if we talked about breaches, one or more of those organizations would have learned from the pain of others.

Nah.

Let’s just wait for “the world’s leaders in high-quality entertainment at your expense” to let us say I told you so.

It sure is easier than admitting our communications were sub-par.

[Thanks for the many good comments! I’ve written a follow-up post on the topic of communication, “Communicating with Executives for more than Lulz.”]

A critique of Ponemon Institute methodology for "churn"

Both Dissent and George Hulme took issue with my post Thursday, and pointed to the Ponemon U.S. Cost of a Data Breach Study, which says:

Average abnormal churn rates across all incidents in the study were slightly higher than last year (from 3.6 percent in 2008 to 3.7 percent in 2009), which was measured by the loss of customers who were directly affected by the data breach event (i.e., typically those receiving notification). The industries with the highest churn rate were pharmaceuticals, communications and healthcare (all at 6 percent), followed by financial services and services (both at 5 percent.)

Some comments:

  • 126 of the hundreds of organizations that suffered a breach were selected (no word on how) to receive a survey. 45 responded, which might be a decent response rate, but we need to know how the 126 were selected from the set of breached entities.
  • We don’t understand the baseline for customer churn. What is normal turnover? Is it the median for the last 3 years for that company? The mean for the sector last year? If we knew how normal turnover was defined, and its variance, then we could ask questions about what abnormal means. Is it the difference between management estimates and prior years? Is it the difference between a standard deviation above the mean for the sector for the past 3 years and the observed rate? (The sketch after this list shows how much the choice of baseline matters.)
  • Most importantly, it’s not an actual measure of customer churn. The report states that it measured not actual customer loss, but the results of a survey that asked for:

    The estimated number of customers who will most likely terminate their relationship as a result of the breach incident. The incremental loss is abnormal turnover attributable to the breach incident. This number is an annual percentage, which is based on estimates provided by management during the benchmark interview process. [Emphasis added.]
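
To make the baseline problem concrete, here’s a small sketch of how much the “abnormal” churn figure moves depending on which baseline you choose; every number in it is invented:

```python
# How much "abnormal churn" you report depends heavily on which baseline you
# subtract. All numbers here are invented to illustrate the sensitivity.
from statistics import mean, median

observed_churn = 0.095                        # churn in the year of the breach
company_last_3_years = [0.055, 0.066, 0.080]  # this company's recent churn
sector_last_year = [0.050, 0.080, 0.065, 0.090, 0.070]

baselines = {
    "company 3-yr median": median(company_last_3_years),
    "company 3-yr mean": mean(company_last_3_years),
    "sector mean last year": mean(sector_last_year),
    "management estimate": 0.085,             # what an interviewee might recall
}

for name, base in baselines.items():
    print(f"{name:>22}: abnormal churn = {observed_churn - base:+.1%}")
# The same observed year yields anywhere from about +1% to +3% "abnormal"
# churn; without a stated baseline and its variance, the headline number is
# hard to interpret.
```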

The report has other issues, and I encourage readers to examine its claims and evidence closely. I encourage this in general; it’s not a comment unique to the Ponemon report. Some examples from a number of additional surveys that George Hulme raised in argument in this blog post:

Briefly, the CMO Council found concern about security, not any knowledge of breaches. Forrester showed that some folks are scared to shop online, which means brand doesn’t matter, or they’d shop online from trusted brands. Javelin reports 40% of consumers saying that their relationship “changed,” and 30% reporting a choice to not purchase from the organization again. That is at odds with even the most ‘consumer-concerned’ estimates from Ponemon, and is aligned with the idea that surveys are hard to do well.

Requests for a proof of non-existence

So before I respond to some of the questions that my “A day of reckoning” post raises, let me say a few things. First, proving that a breach has no impact on brand is impossible, in the same way that proving the non-existence of god or black swans is impossible. It will always be possible for a new breach to impact a brand.

Second, and far more importantly, I’m not the one making the surprising claim, or bringing it to the marketing department. If you are making a surprising claim, the responsibility to back it up lies on you. Ideally, someone’s going to produce a convincing and predictive theory of brand costs that works across a defined subset of the thousands of breaches in the DataLossDB or DBIR. Until they do, there are still lots and lots of breaches that have minimal effect on stock price and very little on overall brand.

Finally, the marketing department owns branding, in the same way that IT owns operational roll-outs. You need to convince them in the same way you need to convince IT to roll out a new IDS or development to implement an SDL. Information security people don’t own questions about brand any more than legal does. If you want to influence the folks who write for “the CMO site,” you’re going to have to bring data. In other words, your argument is going to have to resonate with the business leaders who think that a guy picking his nose and posting the video to YouTube is far more likely to hurt their brand.

I’ll have more on the Ponemon report & other reports cited by George Hulme here shortly.

A Day of Reckoning is Coming

Over at The CMO Site, Terry Sweeney explains that “Hacker Attacks Won’t Hurt Your Company Brand.” Take a couple of minutes to watch this.


Let me call your attention to this as a turning point for a trend. Those of us in the New School have been saying this for several years, but the idea that a breach is unlikely to kill your organization is spreading, because it’s backed by data.

That’s a good thing for folks who are in the New School, but not so good for others. If you’ve been spreading FUD (even with the best of intentions), you’re going to face some harsh questions.

By regularly making claims which turn out to be false, people undermine their credibility. If you’re one of those people, expect questions from those outside security who’ve heard you make the claim. The questions will start with the claim of brand damage, but they might not end there. They’ll continue into other areas where neither the questioner nor you has any data. If you make good calls in the absence of data, then that’s ok. Leaders always make calls with insufficient data. What’s important is that they’re good calls. And talking about brand damage no longer looks like a good call, an ok call, or even a defensible call. It’s something that should have stopped years ago. If you’re still doing it, you’re creating problems for yourself.

Even worse, you’re creating problems for security professionals in general. There’s a very real problem with our community spreading fear, and even those of us who have been pushing back against it have to deal with the perception that our community thrives on FUD.

If you’ve been making this claim, your best move is to start repudiating it. Get ahead of the curve before it hits you. Or polish up your resume. Maybe better to do both.

Terry Sweeney is right. Hacker attacks won’t hurt your company brand. Claims that they do, however, hurt security’s brand.

[Update: I’ve responded to two classes of comments in “Requests for a proof of non-existence” and “A critique of Ponemon Institute methodology for “churn”.” Russell has added an “in-depth critique of Ponemon’s method for estimating ‘cost of data breach’.”]
