There’s a good post at Gartner pointing out the lack of data reported by vendors or customers regarding the false positive rates for anti-spam solutions.
Although Gartner customers almost never complain about false positive rates, I wonder if false positives are underestimated. End users rarely complain about false positives, but they are very vocal about spam reaching their inbox. BoxSentry (www.boxsentry.com) recently ran tests in a number of organizations and found that the false positive rate in some organizations using popular anti-spam tools was as high as 13% of legitimate email. The largest proportion of false positives in their study was legitimate person-to-person traffic. It could be that these organizations had over-tuned their systems to block more spam at the expense of quarantining more legitimate email, but the reality was that the email administrators had no idea they had such a high false positive rate because they never checked. Have you?
Going further, it would be very valuable to estimate the cost of false positives.
As I’ve discussed in a previous post, this is just another instance of a general problem in the security industry: you can’t do rational analysis of effectiveness, cost-effectiveness, risk, and the rest without some estimate of false positive rates and their costs.
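To make that concrete, here is a minimal back-of-the-envelope sketch of what such an estimate might look like. Every number in it is an illustrative assumption (volumes, per-message costs, the 1% miss rate), except the 13% false positive rate, which is the figure from the BoxSentry tests above:

```python
# Rough expected-cost model for a spam filter's mistakes.
# All inputs are assumptions for illustration, not measured data.

def filtering_cost(legit_volume, spam_volume,
                   fp_rate, fn_rate,
                   cost_per_fp, cost_per_fn):
    """Expected cost per period of a filter's errors.

    fp_rate: fraction of legitimate mail wrongly quarantined.
    fn_rate: fraction of spam that reaches the inbox.
    cost_per_fp: cost of one lost/delayed legitimate message.
    cost_per_fn: cost of one spam message in an inbox.
    """
    false_positives = legit_volume * fp_rate
    false_negatives = spam_volume * fn_rate
    return false_positives * cost_per_fp + false_negatives * cost_per_fn

# Hypothetical daily figures for a mid-sized organization:
daily = filtering_cost(
    legit_volume=10_000,   # legitimate messages per day (assumed)
    spam_volume=50_000,    # spam messages per day (assumed)
    fp_rate=0.13,          # the 13% rate BoxSentry observed
    fn_rate=0.01,          # 1% of spam slips through (assumed)
    cost_per_fp=5.00,      # $5 per lost legitimate email (assumed)
    cost_per_fn=0.05,      # $0.05 of attention per spam (assumed)
)
print(f"Estimated daily cost of filter errors: ${daily:,.2f}")
# prints "Estimated daily cost of filter errors: $6,525.00"
```

Even with these made-up costs, the point stands out: under the assumed numbers, quarantined legitimate mail ($6,500/day) dwarfs the cost of the spam that leaks through ($25/day), yet it is the leaked spam that generates the complaints.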