
A Day of Reckoning is Coming

Over at The CMO Site, Terry Sweeney explains that “Hacker Attacks Won’t Hurt Your Company Brand.” Take a couple of minutes to watch this.


Let me call your attention to this as a turning point for a trend. Those of us in the New School have been saying this for several years, but the idea that a breach is unlikely to kill your organization is spreading, because it’s backed by data.

That’s a good thing for folks who are in the New School, but not so good for others. If you’ve been spreading FUD (even with the best of intentions), you’re going to face some harsh questions.

By regularly making claims which turn out to be false, people undermine their credibility. If you’re one of those people, expect questions from those outside security who’ve heard you make the claim. The questions will start with the claim of brand damage, but they might not end there. They’ll continue into other areas where neither the questioner nor you have any data. If you make good calls in the absence of data, then that’s ok. Leaders always make calls with insufficient data. What’s important is that they’re good calls. And talking about brand damage no longer looks like a good call, an ok call, or even a defensible call. It’s something that should have stopped years ago. If you’re still doing it, you’re creating problems for yourself.

Even worse, you’re creating problems for security professionals in general. There’s a very real problem with our community spreading fear, and even those of us who have been pushing back against it have to deal with the perception that our community thrives on FUD.

If you’ve been making this claim, your best move is to start repudiating it. Get ahead of the curve before it hits you. Or polish up your resume. Maybe better to do both.

Terry Sweeney is right. Hacker attacks won’t hurt your company brand. And claims that they do hurt security’s brand.

[Update: I’ve responded to two classes of comments in “Requests for a proof of non-existence” and “A critique of Ponemon Institute methodology for “churn”.” Russell has added an “in-depth critique of Ponemon’s method for estimating ‘cost of data breach’.”]

Referencing Insiders is a Best Practice

You might argue that insiders are dangerous. They’re dangerous because they’re authorized to do things, and so monitoring throws up a great many false positives, and raises privacy concerns. (As if anyone cared about those.) And everyone in information security loves to point to insiders as the ultimate threat.

I’m tempted to claim this as a nail in the coffin for the insider as the most important threat vector, but of late, I’ve decided that the insider is a near-unkillable boogeyman, and so ‘nails in the coffin’ is the wrong metaphor. Really, this just indicates that references to insiders are a best practice, and we can’t kill them. We can, however, treat those references as an indicator that the person speaking is probably not an empiricist, and discount appropriately.

Be celebratory, be very celebratory

A reminder for those of you who haven’t read or watched “V for Vendetta” one time too many – it’s Guy Fawkes Day today:

The plan was to blow up the House of Lords during the State Opening of Parliament on 5 November 1605…

…Fawkes, who had 10 years of military experience fighting in the Spanish Netherlands in suppression of the Dutch Revolt, was given charge of the explosives.

The plot was revealed to the authorities in an anonymous letter sent to William Parker, 4th Baron Monteagle, on 26 October 1605. During a search of the House of Lords at about midnight on 4 November 1605, Fawkes was discovered guarding 36 barrels of gunpowder – enough to reduce the House of Lords to rubble – and arrested.

Guy Fawkes Day is a celebratory event in the UK with fireworks and bonfires. It’s also when some of my ex-pat friends stock up on fireworks to ensure they can be suitably obnoxious on the 4th of July, but that’s another story…

So why is it that in England, a failed terror plot has become an excuse to have a party, whereas in the U.S., a failed or thwarted terror plot is an excuse to strip away civil liberties?

Michael Healey: Pay Attention (Piling On)

Richard Bejtlich has a post responding to an InformationWeek article written by Michael Healey, ostensibly about end user security. Richard upbraids Michael for writing the following:

Too many IT teams think of security as their trump card to stop any discussion of emerging tech deemed too risky…

Are we really less secure than we were 10 years ago? Probably not…

…security folks are so jumpy. But they’re missing the message that CIOs need to hear: Security is working. It’s been more than a decade (yes, 10 years) since any particular security flaw has had a truly widespread impact. The Melissa and the ILoveYou attacks were the last.

Now Richard dresses down Mike regarding his naiveté about the threat landscape. Using Melissa and ILoveYou as examples of aggregate risk to Internet participation is, of course, silly. But the lesson doesn’t stop there. Michael, even if your organization hasn’t had a recent, significant breach, there’s very little evidence to suggest that this is because “it’s working”. It could very well be “good luck” based on a lack of frequency (in threat actions). Think of it this way: while I’m sure there are parts of Oklahoma that haven’t been hit by a tornado in recorded history, that doesn’t mean that I’d move into a mobile home there.

Let me also pile on by mentioning that the Verizon DBIR data set shows a significant uptick in the use of custom malware by threat agents (you know, the kind designed to evade signature-based defenses) in data breaches.

Speaking of which, let me share with you a few thoughts on impact and loss. In the past four years, we can account for nearly a billion records (credit cards and other PII) known to be compromised. And that’s *just* the Verizon/Secret Service data. You could probably increase that number by 12 figures by including data at risk and lost from the DLDB. Being a journalist, I’m sure you’ll recall that we here in the US have had significant IP and military secret losses, as well.

Finally, Mike, there’s the problem of trying to keep up with the threat landscape. Take Gunnar’s excellent table on web security as an example:

Do we really need to say any more?

Alex on Science and Risk Management

Alex Hutton has an excellent post on his work blog:

Jim Tiller of British Telecom has published a blog post called “Risk Appetite, Counting Security Calories Won’t Help”. I’d like to discuss Jim’s blog post because I think it shows a difference in perspectives between our organizations. I’d also like to counter a few of the assertions he makes because I find these to be misunderstandings that are common in our industry.

“Anyone who knows me or has subjected themselves to my writings knows I have some uneasiness with today’s role of risk. It’s not the process, but more of how there is so much focus on risk as if it were a science – but it’s not. Not even close.”

Let me begin my rebuttal by first arguing that risk management, at its basis, is at least “scientific work”. What I mean by that is elegantly summed up by Eliezer Yudkowsky on the Less Wrong blog. To use Eliezer’s words, I’ll offer that scientific work is “the reporting of the likelihood ratios for any popular hypotheses.”
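To make that concrete, here’s a toy illustration (in Python, with entirely invented numbers) of what “reporting a likelihood ratio” looks like: you report how much more likely the observed evidence is under one hypothesis than under a competing one, rather than reporting a verdict.

```python
# Toy illustration of a likelihood ratio; every number here is invented.
# H1: a security control materially reduces breach frequency.
# H2: the control makes no difference.

p_data_given_h1 = 0.30  # assumed: probability of the observed breach data if H1 holds
p_data_given_h2 = 0.05  # assumed: probability of the same data if H2 holds

likelihood_ratio = p_data_given_h1 / p_data_given_h2
print(f"Likelihood ratio (H1 vs. H2): {likelihood_ratio:.1f}")
# Prints 6.0: the data is six times more likely under H1 than under H2.
# Reporting that ratio, not a conclusion, is the "scientific work".
```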

You should go read “Risk Appetite: Counting Risk Calories is All You Can Do”.

Counterpoint: There is demand for security innovation

Over in the Securosis blog, Rich Mogull wrote a post, “There is No Market for Security Innovation.”

Rich is right that there’s currently no market, but that doesn’t mean there’s no demand. I think there are a couple of inhibitors to the market, but the key one is that transaction costs are kept high by a lack of data about outcomes. Every one of the startups selling you a product will claim that it blocks “APT” and “Data loss” but none of them have compelling data about efficacy. None of us have great, broad data about what problems lead to breaches, and none of us have data about what solutions products effectively prevent those problems. None of us have data about how often the products are deployed and managed effectively.

So when the salespeople come in with their “$204 per record” and compliance demands and all the rest, there’s no good way to distinguish between them, and as a result, the market is a slog for both real innovation and snake-oil.

If someone could innovate to address these problems, say by collecting and analyzing data about what really happens inside a company, they might have a business.

More broadly, for a market to function, there needs to be supply (which exists in plenty), demand (which also exists), and a way to link the two. And there’s the chasm.

I’ll also point out that we discussed innovation a bit on pages 126-127 of The New School, where we opine that much security needs to be integrated into your infrastructure and thus will be purchased from larger vendors.

Why I'm Skeptical of "Due Diligence" Based Security

Some time back, a friend of mine said “Alex, I like the concept of Risk Management, but it’s a little like the United Nations – Good in concept, horrible in execution”.

Recently, a couple of folks have been talking about how security should just be a “diligence” function; that is, we should just prove that we’re making best efforts, and that should be enough. Now conceptually, I love the idea that we can prove our “compliance” or diligence and get a get-out-of-jail-free card when an incident happens. I always think it’s lame when good CISOs get canned because they got “unlucky”.

Unfortunately, if risk management is infeasible, I’ve been thinking that the concept of Due Diligence Security is complete fantasy. To carry the analogy, if Risk Management is the United Nations, then Due Diligence Security is the Justice League of Superfriends. With He-Man. And the animated Beatles from Yellow Submarine. That live in the forest with the Keebler elves and the Ewoks, and where the glowing ghosts of Anakin, Obi-Wan and Yoda perform the “Chub-Chub” song with the glowing ghosts of John Lennon and George Harrison. That sort of fantasy.

DUE DILIGENCE BASED SECURITY IS AN ARGUMENT FROM IGNORANCE

Here’s the rub – let’s say an incident happens. Due Diligence only matters when there’s a court case, really. And in most western courts of law these days, there’s still this concept of innocent until proven guilty. In logic, this concept is known as the argument from ignorance, and it’s a logical fallacy.

Now, arguments from ignorance are known as logical fallacies thanks to the epistemological notion of falsification. Paraphrasing Hume paraphrasing John Stuart Mill: we cannot prove “all swans are white” simply because every swan we’ve observed has been white – BUT the observation of a single black swan is enough to prove that “not all swans are white”. This matters in a court of law, as your ability to prove Due Diligence as a defendant will be a function of your ability to prove all swans white – all systems compliant. But the prosecution only has to show a single black swan to prove that you are NOT diligent.

Sir Karl Popper says, “Good luck with that, Mr. CISO”.
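To make the asymmetry concrete, here’s a minimal sketch in Python; the system inventory and the compliance check are hypothetical stand-ins for whatever evidence the defense can gather, not a real audit API.

```python
# Minimal sketch of the white swan / black swan asymmetry.
# 'systems' and 'is_compliant' are invented stand-ins, not a real audit tool.

systems = ["web-01", "db-01", "hr-laptop-17"]  # hypothetical system inventory

def is_compliant(system: str) -> bool:
    # Assume this wraps whatever compliance evidence the defense can produce.
    return system != "hr-laptop-17"

# The defense must prove every swan white: all systems compliant, no exceptions.
defense_proves_diligence = all(is_compliant(s) for s in systems)

# The prosecution needs only one black swan: a single non-compliant system.
prosecution_proves_negligence = any(not is_compliant(s) for s in systems)

print(defense_proves_diligence, prosecution_proves_negligence)  # False True
```

One exception out of thousands of systems flips `all()` to False while flipping `any()` to True; the two sides of the case face entirely different burdens of proof.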

IT’S A TRAP!!!

The result is this – the CISO, in my humble opinion, will be in a worse position, because we have a really poor ability to control the expansion of sensitive information throughout the complex systems (network, system, people, organization) for which they are responsible. Let me put it this way: if information (and specifically, sensitive information) operates like a gas, automatically expanding to wherever it’s not controlled, then how can we possibly hope that the CISO can control the “escape” or leakage of information 100% of the time with no exceptions? And a solitary exception in a forensic investigation becomes our black swan.

And therefore… when it comes to proving Due Diligence in a court of law, Security *screws* the CISO. Big Time.

Everybody Should Be Doing Something about InfoSec Research

Previously, Russell wrote “Everybody complains about lack of information security research, but nobody does anything about it.”

In that post, he argues for a model where

Ideally, this program should be “idea capitalists”, knowing some people and ideas won’t payoff but others will be huge winners. One thing for sure — we shouldn’t focus this program only on people who have been “officially” annointed by some hierarchy, some certification program, or by credentials alone.

I agree that a focus on those anointed won’t help, but that doesn’t mean it’s easy to set up such an institution.

The trouble with the approach is that we have such institutions (*ARPA, venture capital) and they’ve all failed for institutional reasons. However high their aspirations, such organizations over time get flak from their funders over their failures and their bizarre, newsworthy ideas, and the organizations become conservative. They trend towards “proven entrepreneurs” and incrementalism. The “Pioneer Fellows” idea does not overcome this structural issue. (There is an argument that the MacArthur genius grants overcome it. I’m not aware of any research into the relative importance of work done before and after such grants, but I have my suspicions, prejudices and best practices.)

Of course, I might be wrong. If you have a spare million bucks, please set this up, and we can see how it goes. An experiment, if you will.

Experiments are a big part of why Andrew and I focused on free availability of data. With data, those with ideas can test them. There will be a scrum of entrepreneurial types analyzing the data. Fascinating stuff will emerge from that chaos. With evidence, they will go to the extant ‘big return’ organizations and get funding. Or they’ll work for big companies and shift product directions.

That is, the issue in infosec is not a lack of interesting ideas, it’s the trouble in testing them without data. We need data to test ideas and figure out how they impact outcomes.

Human Error and Incremental Risk

As something of a follow-up to my last post on Aviation Safety, I heard this story about Toyota’s now very public quality concerns on NPR while driving my not-Prius to work last week.

Driving a Toyota may seem like a pretty risky idea these days. For weeks now, we’ve been hearing scary stories about sudden acceleration, failing brakes and car recalls. But as NPR’s Jon Hamilton reports, assessing the risk of driving a Toyota may have more to do with emotion than statistics.

Emotion trumping statistics in a news article?  Say it isn’t so!

Mr. LEONARD EVANS (Physicist, author, Traffic Safety): The whole history of U.S. traffic safety has been one focusing on the vehicle, one of the least important factors that affects traffic safety.

HAMILTON: Studies show that the vehicle itself is almost never the sole cause of the accident. Drivers, on the other hand, are wholly to blame most of the time. A look at data on Toyotas from the National Highway Traffic Safety Administration confirms this pattern.

Evans says his review of the data show that in the decade ending in 2008, about 22,000 people were killed in vehicles made by Toyota or Lexus.

Mr. EVANS: All these people were killed because of factors that had absolutely nothing to do with any vehicle defect.

HAMILTON: Evans says during that same period, it’s possible, though not yet certain, that accelerator problems in Toyotas played a role in another 19 deaths, or about two each year. Evans says people should take comfort in the fact that even if an accelerator does stick, drivers should usually be able to prevent a crash.

(bold mine)

From 1998 to 2008, about 2,200 people per year (out of about 35,000 total vehicle deaths per year) died in Toyotas because of some sort of non-engineering failure. During that same period, just under two people were killed per year due to the possible engineering failure. So all this ado is about, at most, a 0.09% increase in the Toyota-specific death rate and a 0.005% increase in the overall traffic death rate.
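Here’s the back-of-the-envelope arithmetic behind those percentages, using the figures from the NPR story:

```python
# Back-of-the-envelope check of the rates quoted above.
toyota_deaths_per_year = 22_000 / 10  # ~2,200/year in Toyotas, decade ending 2008
defect_deaths_per_year = 19 / 10      # ~1.9/year possibly tied to stuck accelerators
total_deaths_per_year = 35_000        # rough annual US vehicle deaths

print(f"Toyota-specific increase: {defect_deaths_per_year / toyota_deaths_per_year:.2%}")
print(f"Overall traffic increase: {defect_deaths_per_year / total_deaths_per_year:.3%}")
# Prints ~0.09% and ~0.005%, matching the figures in the text.
```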

So why is the response so excessive to the actual scope of the problem?  Because the risk is being imposed on the driver by the manufacturer.

Mr. ROPEIK [risk communication consultant]: Imposed risk always feels much worse than the same risk if you chose to do it yourself. Like if you get into one of these Toyotas and they work fine, but you drive 90 miles an hour after taking three drinks. That won’t feel as scary, even though it’s much riskier, because you’re choosing to do it yourself.

And, lest we forget, even in the case where the accelerator did stick there was still a certain degree of human error:

Mr. EVANS: The weakest brakes are stronger than the strongest engine. And the normal instinctive reaction when you’re in trouble ought to be to apply the brakes.

My frustration is that when I compare the reality of the data with most of the reporting on the subject, I think of Hicks’ Hudson’s NSFW “Game Over” rant. (Corrected per the comments. Thanks, 3 of 5!)

After all, given that you’re more likely to die in your home (41%) than in your car (35%), you’re still statistically safer taking to the road than sitting home cowering in fear of your Prius.
