Evolution of Information Analysis
Very briefly: something that came to me while reading Marcus Ranum over at Tenable’s Blog.
Usually, when I attack pseudo-science in computer security, someone replies, “Yes, but some data is better than none at all!” Absolutely not true! Deceptive, inaccurate, and misleading data is worse than none at all, because it can encourage you to spend your time and energy barking up the wrong tree.
Let me propose the following evolutionary path towards information analysis:
Stage 1.) “Yes, but some data is better than none at all!”
Stage 2.) “Not true! It can be misinterpreted.”
Stage 3.) “Prior information usually has some informative value in context. Have we done the right job in presenting uncertainty and context?”
The difference between stage 2 and stage 3 reminds me of a quote from I. J. Good (who sadly passed away just this month):
“The subjectivist states his judgements, whereas the objectivist sweeps them under the carpet by calling assumptions knowledge, and he basks in the glorious objectivity of science.”
The problem isn’t that we’ve got yucky/squishy/non-“actuarial quality” (whatever that means) data. The problem lies in how prior information is interpreted, how thoroughly bias is identified in the research itself, and how the uncertainty in the data is identified and communicated.
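Stage 3 is less abstract than it sounds. As a minimal sketch (the function, numbers, and interval method are my own illustration, not anything from Ranum’s post): two datasets can yield the identical point estimate while carrying very different evidential weight, and reporting an interval alongside the rate is one simple way of “presenting uncertainty and context” instead of sweeping the assumptions under the carpet.

```python
import math

def rate_with_uncertainty(hits, trials, z=1.96):
    """Point estimate plus a normal-approximation ~95% interval.

    A toy illustration of presenting uncertainty: rather than a bare
    rate, report the rate together with an interval whose width
    reflects how much (or how little) data backs it up.
    """
    p = hits / trials
    se = math.sqrt(p * (1 - p) / trials)  # standard error of a proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Same observed rate of 30%, wildly different amounts of evidence
# (the counts here are invented for illustration):
small = rate_with_uncertainty(3, 10)       # 3 of 10 observations
large = rate_with_uncertainty(300, 1000)   # 300 of 1000 observations
```

With 10 observations the interval spans most of the plausible range; with 1000 it narrows by a factor of ten. The point estimates are identical, so any report that states only “30%” has hidden exactly the information stage 3 asks for.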