Sex, Lies & Cybercrime Surveys: Getting to Action
My colleagues Dinei Florencio and Cormac Herley have a new paper out, “Sex, Lies and Cyber-crime Surveys.” They write:
Our assessment of the quality of cyber-crime surveys is harsh: they are so compromised and biased that no faith whatever can be placed in their findings. We are not alone in this judgement. Most research teams who have looked at the survey data on cyber-crime have reached similarly negative conclusions.
In The New School of Information Security, Andrew and I wrote “today’s security surveys have too many flaws to be useful as sources of evidence.” Dinei and Cormac were kind enough to cite that, saving me the trouble of looking it up.
I wanted to try here to carve out, perhaps, a small exception. I think of surveys as coming in two main types: surveys of things people know, and surveys of what they think. Both have the potential to be useful (although read the paper for a long list of ways in which they can be problematic).
So there are surveys of things people know. For example, what’s your budget, or how many people do you employ? There are people in an organization who know those things, and, starved as we are for knowledge, perhaps those answers would be useful to have. So maybe a survey makes sense.
But how many people Microsoft employs in security probably doesn’t matter to you. And the average of how many people Boeing, State Farm, Microsoft, Archer Daniels Midland, and Johnson & Johnson employ in security is even less useful. (They’re neighbors on the Fortune 500 list.) So even in the space where we might want to defend surveys, they’re not that useful.
So our desire for surveys is really evidence of how starved we are for data about outcomes and data about efficacy. We’re like the drunk looking for keys under the lamppost, not because we think the keys are there, but because there’s at least a little light.
So next time someone shows you a survey, don’t even bother to ask what action they expect you to take, or what decision they expect you to alter, or why you should accept what it says as a good argument for that choice.
Rather, ask to see the section titled “How we overcame the issues that Dinei and Cormac talked about.” It’ll save everyone a bunch of time.
Adam, thanks, flattery will get you everywhere.
I really like your segmentation into things people know and things people think.
There seems to be a mis-impression out there that if you ask enough people a question, the average answer somehow turns out to be very close to the truth. That’s sometimes the case, but we wanted to shine a light on the fact that it can fail spectacularly. There was a great paper in PNAS recently showing that the Wisdom of the Crowd isn’t nearly as robust as many think (and the median is much more robust than the average): http://www.pnas.org/content/108/22/9020.full.pdf+html
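To make that median-versus-mean point concrete, here is a minimal sketch (not from the paper; the distribution and dollar figures are invented for illustration) of how a few enormous answers drag a survey’s average far away from what a typical respondent reported, while the median barely moves:

```python
import random
import statistics

# Hypothetical illustration (numbers are made up): 1,000 respondents report
# annual cyber-crime losses. Most report small amounts, but a handful report
# (or exaggerate) enormous ones -- heavy-tailed, concentrated data.
random.seed(0)
losses = [random.expovariate(1 / 200.0) for _ in range(997)]  # typical answers near $200
losses += [250_000, 500_000, 1_000_000]                        # three huge claims

print(f"mean loss:   ${statistics.mean(losses):,.0f}")    # dragged up by three answers
print(f"median loss: ${statistics.median(losses):,.0f}")  # barely moves

# Drop the single largest response and the mean changes drastically -- the
# sensitivity-to-outliers problem when survey averages get scaled up to a
# whole population.
trimmed = statistics.mean(sorted(losses)[:-1])
print(f"mean without the top answer: ${trimmed:,.0f}")
```

Running a sketch like this, the mean lands in the thousands of dollars while the median stays near $140, and removing one respondent cuts the mean roughly in half; that fragility is the kind of thing tabulations of survey answers quietly inherit.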
I like that you point out that despair is not the right answer. Surveys done right are a great tool, even if a majority of what we currently have are simply tabulations of noise. The starved-for-data problem is huge; in the absence of good data, bad stuff jumps in to fill the void and gives people the impression that they know things (e.g., the magnitude of some of these problems) when really they don’t.
One other thought. Since you subtitle the post “getting to action,” one of my steps for action would be that when we don’t know, we say we don’t know instead of wafting in impressive-sounding numbers.
What people think is basically based on what they know. So what’s the difference? It might differ slightly but it will be mostly the same. How could a person think of something that he doesn’t even know about?
AML – “Who do you plan to vote for in the next election?” is an example of what people think, as is “Do you think the economy is doing well?”, which drives measures of consumer confidence, which have been a good predictor of spending.