Is Norton Cybercrime Index just 'Security Metrics Theater'?

Symantec’s new Norton Cybercrime Index looks like it is mostly a marketing tool. They present it as though there is solid science, data, and method behind it, but an initial analysis suggests that this is probably not the case. The only way to have confidence in it is for Symantec to open up about their algorithms and data.

I really hope that Symantec has invested serious money and resources to produce a good composite metric that meaningfully helps decision-makers make better security decisions.  But an initial investigation leads me to believe that it is mostly a marketing ploy, at least in this initial version. Let me be the first to call it ‘Security Metrics Theater’ (with a nod to Bruce S.).

Here’s the website: (all in FLASH)

Here’s a typical article:

Norton Cybercrime Index, unveiled today, rates the current state of cybercrime in a single, simple number and indicates whether the danger level is going up or down. Interested visitors can drill down for almost any level of detail. […]

The index is open-ended, like the Dow Jones Industrial Average. Symantec’s proprietary algorithm draws on many sources to produce the index, among them the Symantec Global Intelligence Network, Norton Safe Web and the millions of customers using Norton 360 Version 4.0, Norton AntiVirus 2011, and Norton Internet Security 2011. To ensure the validity of the algorithm Symantec had it analyzed by experts at the University of Texas’s Institute for Cyber Security; the experts approved.

What’s the goal?  From the FAQ (embedded in FLASH):

Symantec created the Norton Cybercrime Index to show people that cybercrime is real, it can happen to anybody, and there is something you can do to protect yourself.

How is it calculated?

…using a statistical model and algorithm, which assigns values to the number of online threats observed each day.  Threats include malware, fraud, identity theft, spam, phishing, and social engineering trickery.  Once threats are quantified and processed through an algorithm, the Norton Cybercrime Index number is generated.  The algorithm has been endorsed by the University of Texas San Antonio as a valid measurement reflecting the risk of cybercrime.
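Symantec has not disclosed the actual algorithm, but a generic composite index of this kind (weighted, baseline-normalized threat counts) can be sketched in a few lines. Every category, weight, count, and baseline below is invented for illustration; none of it comes from Symantec.

```python
# Hypothetical sketch of a composite "cybercrime index".
# This is NOT Symantec's proprietary algorithm, whose weights
# and inputs are undisclosed; all numbers here are made up.

# Assumed daily threat counts per category (invented).
DAILY_COUNTS = {"malware": 12_500, "fraud": 3_200, "identity_theft": 900,
                "spam": 480_000, "phishing": 7_400}

# Assumed relative weights: the hidden "fudge factors" such an index needs.
WEIGHTS = {"malware": 0.30, "fraud": 0.20, "identity_theft": 0.20,
           "spam": 0.10, "phishing": 0.20}

# Baseline counts used to normalize each category (also invented).
BASELINE = {"malware": 10_000, "fraud": 3_000, "identity_theft": 1_000,
            "spam": 500_000, "phishing": 6_000}

def cybercrime_index(counts, weights=WEIGHTS, baseline=BASELINE, scale=100.0):
    """Weighted sum of each category's count relative to its baseline.

    An index of `scale` (100) means every category sits exactly at its
    baseline; the index is open-ended upward, like the Dow.
    """
    return scale * sum(weights[k] * counts[k] / baseline[k] for k in weights)

print(round(cybercrime_index(DAILY_COUNTS), 1))
```

Note how much is buried in the choice of weights and baselines: double the malware weight and the same day’s data produces a different index. That is exactly the kind of hidden judgment a “proprietary algorithm” label conceals.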

My initial judgement

It looks like it is purely a product of Symantec’s marketing department.  There’s a massive PR effort underway via blogs, twitter, public places (e.g. London, Times Square), and probably at the RSA Conference, now underway in San Francisco. The web advertising firm Fine Design Group created the FLASH UI, and tweeted about it first.

It will be interesting to probe their methods and data, assuming that Symantec will be transparent about the “proprietary algorithm” used to compute the index.  If they really want to establish credibility, it would be irrational to treat this as proprietary, confidential, and closed, for all the obvious reasons.  ID Analytics is listed as a data provider, but there’s no evidence that their ‘advanced analytics’ are used by Symantec, only their summary data regarding personal identity theft in the US.

I’d be very surprised if any of Symantec’s metrics experts are behind it.  I don’t know of anyone in the security metrics community who has been contacted or involved as an outside expert.  They certainly haven’t presented it for peer review at last Monday’s Mini-metricon (why not?) or to the email list (why not?) or any academic conference or journal (why not?).  Searching the University of Texas at San Antonio Institute for Cyber Security’s web site, I couldn’t find any mention of their work on this project, nor any presentation or report.  A search of Google Scholar for “cyber crime index” produced a few results, but none related to this and none from anyone at UT-SA.

Q: Who did have an early look at this?  A: Angus Kidman, a blogger from Gizmodo.  And what did he learn from his demo?  From his blog post:

“On the day of the demo, these were the top search terms being targeted for poisoning:

  • Invisible
  • Camel toe
  • Wifetube”

Right.  How very useful.  I’ll now modify my search patterns so I avoid those words today. 🙂

I don’t have a good feeling about this

It smells like FUD in a spiffy FLASH interface. Sure, there probably is real data behind it, but it’s aggregated into an index that is supposed to mean something.  A daily index!  The FUD label fits because the presentation gives the illusion of scientific validity, precision, reliable aggregation, and meaningful signals, when none of these appear to be present.  Using fancy words like “statistical model” and “algorithm” lends it an air of scientific validity without really saying anything.  Worse, those words hide the assumptions, judgments, fudge factors, and who-knows-what that make the index work.

My intuition is that some Symantec marketing manager wanted to create a “daily itch” to get average people to read whatever news blips were available that day about ‘cybercrime’, which would increase the chances that they would move from ‘awareness’ to ‘action’ (= buy more Symantec stuff).  By publishing this as a daily index, any up or down move each day will trigger some people to click the buttons to find out ‘why?’.  But this will take them to news items, not to any credible justification of why they might be at greater risk on that day compared to the day before.

As a thought experiment, imagine a similar ‘Risk Index’ that is powered by astrology readings, numerological interpretations of Nostradamus’ texts, or some other daily signal source.  With the appropriate shroud of credibility, some number of people are going to start following it, and whenever it changes, they will seek information as to ‘what does this mean for me?’  It would serve exactly the same function as the current design.  This doesn’t prove anything, but it establishes some plausibility in my mind.

What’s the harm?

Some might argue that this is harmless, or even mildly beneficial if it prompts people to be more aware of security problems and to fix them.  But I think it’s harmful because it promotes a false signal and a false method for doing information security metrics — for consumers or for anyone else.

Maybe I’m wrong, and this may be an important advance, or at least a step forward.  At the very least, it shows that one major security product/service vendor spent money to define a method, collect data, and make the results public.  Prior to this, no major vendor was even spending money on it.

What to do now

Is there any way this Index could be redirected into a more valuable and extensible project?  I hope so.  But for that to happen, those of us who care about the New School approach to security need to apply a full-court press on Symantec to open up their method and data.

Your action — contact Symantec, preferably in person at the RSA Conference, and demand that they open up and engage with the security metrics community in a serious way.  The burden of proof is on them, and if they can’t back it up, they should be shamed.

Armoring the Bombers that Came Back

Paul Kedrosky writes:

Most of us have heard the story of armoring British bombers, as it’s too good not to share, not to mention being straight from the David Brent school of management motivation. Here is the Wikipedia version:

Bomber Command’s Operational Research Section (BC-ORS), analysed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defences was noted and the recommendation was given that armour be added in the most heavily damaged areas. Their suggestion to remove some of the crew so that an aircraft loss would result in fewer personnel loss was rejected by RAF command. [Patrick] Blackett’s team instead made the surprising and counter-intuitive recommendation that the armour be placed in the areas which were completely untouched by damage in the bombers which returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The untouched areas of returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft.

…The trouble is, is it true? Did this bomber plating survey really happen, and did the RAF, under the force of Patrick Blackett’s team’s analysis, do the contrarian thing of armoring the untouched parts of the bombers that came back?

I think it’s a fascinating question (Paul traces how the story has spread in his post). In information security, we have a lot of ideas whose origins are lost in the mists of time. That’s all the more remarkable given that information security has been around for barely 50 years. We don’t have to lose our history.
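Whatever the historical truth, the statistical lesson in the quoted passage is survivorship bias, and a toy simulation makes it vivid. All of the zones, hit counts, and lethality probabilities below are invented for illustration: if hits to a vital zone usually down the aircraft, the returning fleet shows almost no damage there, even though that zone is hit just as often as any other.

```python
# Toy simulation of survivorship bias in the bomber-damage survey.
# All zones, hit counts, and lethality numbers are invented.
import random

random.seed(42)  # make the simulation reproducible

ZONES = ["wings", "fuselage", "tail", "engines", "cockpit"]

# Assumed probability that a single hit in this zone downs the aircraft.
# Engines and cockpit are the "vital" areas in this made-up model.
LETHALITY = {"wings": 0.05, "fuselage": 0.05, "tail": 0.05,
             "engines": 0.60, "cockpit": 0.60}

def fly_mission():
    """One sortie takes 1-5 hits in uniformly random zones.
    Returns the hit list if the aircraft survives, else None."""
    hits = [random.choice(ZONES) for _ in range(random.randint(1, 5))]
    for zone in hits:
        if random.random() < LETHALITY[zone]:
            return None  # shot down; this aircraft is never inspected
    return hits

# The "survey": damage observed on returning aircraft only.
observed = {z: 0 for z in ZONES}
for _ in range(20_000):
    hits = fly_mission()
    if hits is not None:
        for zone in hits:
            observed[zone] += 1

for zone in ZONES:
    print(zone, observed[zone])
```

Even though every zone is hit equally often, the survivors show far less engine and cockpit damage than wing or fuselage damage, so a naive survey of returning aircraft would armor exactly the wrong places. That is Blackett’s point in miniature.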