Mubarak and TSA agree: No advantage to them leaving

In “TSA shuts door on private airport screening program,” CNN reports that “TSA chief John Pistole said Friday he has decided not to expand the program beyond the current 16 airports, saying he does not see any advantage to it.”

The advantage, of course, is that it generates pressure on his agency to do better. I hope that he’ll be forced to answer to John Mica, who encouraged airports to do this, and who chairs the House Committee on Transportation and Infrastructure.

I believe Hosni Mubarak made similar comments about not needing regime change.

Another critique of Ponemon's method for estimating 'cost of data breach'

I have fundamental objections to Ponemon’s methods used to estimate ‘indirect costs’ due to lost customers (‘abnormal churn’) and the cost of replacing them (‘customer acquisition costs’). These include sloppy use of terminology, mixing accounting and economic costs, and omitting the most serious cost categories.


A critique of Ponemon Institute methodology for "churn"

Both Dissent and George Hulme took issue with my post Thursday, and pointed to the Ponemon U.S. Cost of a Data Breach Study, which says:

Average abnormal churn rates across all incidents in the study were slightly higher than last year (from 3.6 percent in 2008 to 3.7 percent in 2009), which was measured by the loss of customers who were directly affected by the data breach event (i.e., typically those receiving notification). The industries with the highest churn rate were pharmaceuticals, communications and healthcare (all at 6 percent), followed by financial services and services (both at 5 percent.)

Some comments:

  • 126 of the hundreds of organizations that suffered a breach were selected (no word on how) to receive a survey. 45 responded, which at roughly 36 percent might be a decent response rate, but we need to know how the 126 were selected from the set of breached entities.
  • We don’t understand the baseline for customer churn. What is normal turnover? Is it the median for the last 3 years for that company? The mean for the sector last year? If we knew how normal turnover was defined, and its variance, then we could ask questions about what abnormal means. Is it the difference between management estimates and prior years? Is it the difference between a standard deviation above the mean for the sector for the past 3 years and the observed?
  • Most importantly, it’s not an actual measure of customer churn. The report states that it measured not actual customer loss, but the results of a survey that asked for:

    The estimated number of customers who will most likely terminate their relationship as a result of the breach incident. The incremental loss is abnormal turnover attributable to the breach incident. This number is an annual percentage, which is based on estimates provided by management during the benchmark interview process. [Emphasis added.]
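To make the baseline question concrete, here is a toy sketch (all numbers are invented for illustration) of just one of the candidate definitions above: abnormal churn as observed post-breach churn minus a baseline of one standard deviation above the sector’s three-year mean.

```python
# Toy sketch of one possible "abnormal churn" definition. Every number
# here is made up for illustration; Ponemon's actual method relies on
# management estimates, not measurements like this.
from statistics import mean, stdev

sector_churn_history = [0.031, 0.034, 0.036]  # hypothetical sector churn, last 3 years
observed_churn = 0.057                        # hypothetical churn after a breach

# Baseline: sector three-year mean plus one standard deviation.
baseline = mean(sector_churn_history) + stdev(sector_churn_history)

# Abnormal churn: whatever exceeds the baseline (floored at zero).
abnormal_churn = max(0.0, observed_churn - baseline)

print(f"baseline churn: {baseline:.4f}")
print(f"abnormal churn: {abnormal_churn:.4f}")
```

The point of the sketch is that "abnormal" only has meaning once you commit to a baseline definition and know its variance; change the baseline and the same observed churn yields a different "abnormal" number.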

The report has other issues, and I encourage readers to examine its claims and evidence closely. I encourage this in general; it’s not a comment unique to the Ponemon report. Some examples from a number of additional surveys that George Hulme raised in argument in this blog post:

Briefly, the CMO Council found concern about security, not any knowledge of breaches. Forrester showed that some people are scared to shop online, which would mean brand doesn’t matter, or they’d shop online from trusted brands. Javelin reports 40% of consumers saying that their relationship “changed,” and 30% reporting a choice not to purchase from the organization again. That is at odds with even the most ‘consumer-concerned’ estimates from Ponemon, and is aligned with the idea that surveys are hard to do well.

Requests for a proof of non-existence

So before I respond to some of the questions that my “A day of reckoning” post raises, let me say a few things. First, proving that a breach has no impact on brand is impossible, in the same way that proving the non-existence of god or black swans is impossible. It will always be possible for a new breach to impact a brand.

Second, and far more importantly, I’m not the one making the surprising claim, or bringing it to the marketing department. If you are making a surprising claim, the responsibility to back it up lies with you. Ideally, someone’s going to produce a convincing and predictive theory of brand costs that works across a defined subset of the thousands of breaches in the DataLossDB or DBIR. Until they do, there are still lots and lots of breaches that have minimal effect on stock price and very little on overall brand.

Finally, the marketing department owns branding, in the same way that IT owns operational roll-outs. You need to convince them in the same way you need to convince IT to roll out a new IDS, or the development team to implement an SDL. Information security people don’t own questions about brand any more than legal does. If you want to influence the folks who write for “the CMO site,” you’re going to have to bring data. In other words, your argument is going to have to resonate with business leaders who think that a guy picking his nose and posting the video to YouTube is far more likely to hurt their brand.

I’ll have more on the Ponemon report & other reports cited by George Hulme here shortly.

A Day of Reckoning is Coming

Over at The CMO Site, Terry Sweeney explains that “Hacker Attacks Won’t Hurt Your Company Brand.” Take a couple of minutes to watch this.

Let me call your attention to this as a turning point for a trend. Those of us in the New School have been saying this for several years, but the idea that a breach is unlikely to kill your organization is spreading, because it’s backed by data.

That’s a good thing for folks who are in the New School, but not so good for others. If you’ve been spreading FUD (even with the best of intentions), you’re going to face some harsh questions.

By regularly making claims which turn out to be false, people undermine their credibility. If you’re one of those people, expect questions from those outside security who’ve heard you make the claim. The questions will start with the claim of brand damage, but they might not end there. They’ll continue into other areas where neither the questioner nor you have any data. If you make good calls in the absence of data, then that’s ok. Leaders always make calls with insufficient data. What’s important is that they’re good calls. And talking about brand damage no longer looks like a good call, an ok call, or even a defensible call. It’s something that should have stopped years ago. If you’re still doing it, you’re creating problems for yourself.

Even worse, you’re creating problems for security professionals in general. There’s a very real problem with our community spreading fear, and even those of us who have been pushing back against it have to deal with the perception that our community thrives on FUD.

If you’ve been making this claim, your best move is to start repudiating it. Get ahead of the curve before it hits you. Or polish up your resume. Maybe better to do both.

Terry Sweeney is right. Hacker attacks won’t hurt your company brand. And claims that they do hurt security’s brand.

[Update: I’ve responded to two classes of comments in “Requests for a proof of non-existence” and “A critique of Ponemon Institute methodology for “churn”.” Russell has added an “in-depth critique of Ponemon’s method for estimating ‘cost of data breach’.”]

A few thoughts on chaos in Tunisia

The people of Tunisia have long been living under an oppressive dictator who’s an ally of the US in our ‘war on terror.’ Yesterday, after substantial loss of life, street protests drove the dictator to abdicate. There are lots of silly technologists claiming it was Twitter. A slightly more nuanced comment is in “Sans URL.” Others, particularly Jillian York, said “Not Twitter, Not WikiLeaks: A Human Revolution.” Ethan Zuckerman had insightful commentary, including “What if Tunisia had a revolution, but nobody watched?” and “A reflection on Tunisia.”

That conversation is interesting and in full swing. What I want to ask about is the aftermath and the challenges that Tunisia faces. After 23 years of oppression, it’s going to be hard to build the political structures needed to create a legitimate and accepted government.

The American revolution came after years of discussion of British abuses of power. American perceptions of abuses of power like the Stamp Act combined with slow communication to the King and fast local communication to create a local political class that could assemble in a Continental Congress. Even so, after the American revolution, we had one entirely failed government under the Articles of Confederation, which was replaced with our current Constitution. But that was followed by the Whiskey Rebellion.

I bring this up because it’s easy to focus on the mechanics of government while forgetting about the soil in which it grows. Perhaps the digital world, with its ability to connect Tunisians to people living in places where we’ve worked these things out, will help. (For those foreigners who speak Arabic, or those Tunisians who speak other languages.) I’m not terribly optimistic in light of the shootings in Arizona and how quickly the online discourse devolved into “why this tragedy proves I’m right.” I’m also not optimistic given our poor understanding of our history.

I am, however, hopeful that the people of Tunisia will manage to take a collective break from the violence for long enough to work out a Tunisian approach to democracy. What would that look like? Would technology play a role?

Gunnar's Flat Tax: An Alternative to Prescriptive Compliance?

Hey everybody!

I was just reading Gunnar Peterson’s fun little back-of-the-napkin security spending exercise, in which he references his post on a security budget “flat tax” (Three Steps To A Rational Security Budget). This got me thinking a bit –

What if, instead of the world of compliance where we now demand and audit against a de facto ISMS, we simply demanded an audit of security spend?

Bear with me here…. If/when we demand compliance to a group of controls, we are insisting that these controls *and their operation* have efficacy. The emphasis there is to identify compliance “shelfware” or “zombieware”^1. But we really don’t know, other than through anecdotes and deduction, that the controls are effective against, or alternately more than needed for, a given organization’s threat landscape. In addition, the effective operation of security controls requires skills and resources beyond their rote existence. We might buy all these shiny new security controls, but if our department consists of Moe Howard, Larry Fine, and Shemp Howard, well…

Furthermore, there are plenty of controls that we can deduce or even prove are incident-reducing that are *not* required when compliance demands an ISMS. These controls never get implemented because business management now sees security as a diligence function, not a protection function.

So as I was reading Gunnar’s flat tax proposal, I started to really, really like the idea. Perhaps a stronger alternative would be to simply require that the security budget be a “flat tax” on a company’s IT spend. Instead of auditing against a list of controls and their existence, your compliance audit would simply be an exercise in reviewing the budget and the sanity of security spend. By sanity, I mean the security spend isn’t really on trips to Bermuda, or somehow commandeered by IT for non-security projects.
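For illustration, the arithmetic of such an audit is trivial, which is part of the appeal. A minimal sketch, where the 10 percent rate and the IT spend figure are invented placeholders rather than a proposal for the actual rate:

```python
# Sketch of a "flat tax" security budget: a fixed percentage of IT spend.
# The 10% rate and the spend figure below are hypothetical placeholders.
def security_budget(it_spend: float, tax_rate: float = 0.10) -> float:
    """Return the security budget implied by a flat tax on IT spend."""
    return it_spend * tax_rate

it_spend = 20_000_000  # hypothetical annual IT budget, in dollars
budget = security_budget(it_spend)
print(f"security budget at 10% flat tax: ${budget:,.0f}")

# The audit then asks only two questions: was roughly this amount
# actually spent on security, and was the spending sane?
```

The auditable quantity is a single number, rather than the existence and operation of a long list of prescribed controls.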

Now we can argue about how much that tax would be and other details of how this might all work, but at least when I think about this at a high level it’s starting to occur to me that this approach may have several benefits.

  1. It would certainly be simpler to draw an inference as to whether more security spend increases or decreases the number and impact of incidents. That inference would still be fraught with uncertainty, just “simpler,” and I would question whether it would be less informative than insisting that a prescriptive ISMS has never been breached.
  2. If the “spend” audit consisted of “were the dollars actually spent” and “how sane the spending was” – it would still be up to the CISO to be able to have a defensive strategy (instead of having just a compliance strategy).  The “spend” could still be risk-based.
  3. Similarly, this would help enable budget for effective security department investments (like, say, a metrics program, training, conference attendance, threat intelligence, etc.) that would otherwise be spending above and beyond what is currently “required”.
  4. This spend would allow security departments to be more agile. If our ISMS compliance standards don’t change as frequently as the threats they’re supposed to defend against, it’s pretty obvious we’re screwed spending money to defend against last year’s threats. But a flat tax of spend would allow security departments to reallocate funds in the event of new, dynamic threats to the environment.
  5. This might help restart the innovation in security that draconian security standards and compliance requirements have killed. Josh Corman (among others, I’m sure) is famous for pointing out that compliance spend stifles innovation because budgets are allocated toward “must-haves”. If you were a startup with an innovative new security tool that isn’t on the radar of the standards bodies (or won’t be until the new requirements three years from now), only the very well funded organizations would buy your product. If I’m a CISO with a weaker budget and want the innovative product that my compliance masters don’t require, I’ll never buy it – all my budget is spent trying to prove I can defeat threats from two years ago.

^1 Compliance shelfware is security spend on something purchased but never implemented. Compliance zombieware is a control or security spend actually implemented, but never really utilized.

“Of course we have log management.  We have to in order to be compliant.  But it’s just zombieware, nobody ever actually reads those logs or does analysis on them…”

Dashboards are Dumb

The visual metaphor of a dashboard is a dumb idea for management-oriented information security metrics. It doesn’t fit the use cases and therefore doesn’t support effective user action based on the information. Dashboards work when the user has proportional controllers or switches that correspond to each of the ‘meters’ and the user can observe the effect of using those controllers and switches in real time by observing the ‘meters’. Dashboards don’t work when there is a loose or ambiguous connection between the information conveyed in the ‘meters’ and the actions that users might take. Other visual metaphors should work better.
