Do Audit Failures Mean That Audit Fails In General?
Iang’s posts are, as a rule, really thought-provoking, and his latest series is no exception.
In his most recent post, “How many rotten apples will spoil the barrel,” he asks:
So we are somewhere in-between the extremes. Some good, some bad. The question then further develops into whether the ones that are good are sufficiently valuable to overcome the ones that are bad. That is, one totally fraudulent result can be absorbed in a million good results. Or, if something is audited, even badly or with a percentage chance of bad results, some things should be improved, right?
This is a fascinating question. How do we measure how well Audit works? Are we, in fact, better off Auditing even with the issues we’ve recently faced? Or as Ian puts it:
How many is a few? One failed audit is not enough. But 10 might be, or 100, or 1% or 10%, it all depends. So we need to know some sort of threshold, past which, the barrel is worthless. Once we determine that some percentage of audits above the threshold are bad, all of them are dead, because confidence in the system fails and all audits become ignored by those that might have business in relying on them.
We clearly need someone with a Levitt-esque mindset who can come up with a creative way of solving this measurement problem we have on our hands…
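To make the question a little more concrete, here is a minimal back-of-envelope sketch in Python of the kind of model Iang is gesturing at: the net value of a barrel of audits as a function of the fraction that are bad, with confidence collapsing once that fraction crosses a threshold. Every number and the single collapse threshold below are hypothetical placeholders, not anything taken from his post.

```python
def net_audit_value(n_audits, bad_fraction, value_good, cost_bad, collapse_threshold):
    """Back-of-envelope expected value of a barrel of audits.

    All parameters are hypothetical placeholders:
      value_good         - value created by one sound audit
      cost_bad           - damage done by one fraudulent or failed audit
      collapse_threshold - bad fraction past which confidence in *all*
                           audits evaporates and the good ones count for nothing
    """
    n_bad = n_audits * bad_fraction
    n_good = n_audits - n_bad
    if bad_fraction >= collapse_threshold:
        # Past the threshold, relying parties ignore audits entirely:
        # the good audits no longer deliver their value, only the damage remains.
        return -n_bad * cost_bad
    return n_good * value_good - n_bad * cost_bad

# Example: 1,000 audits, each good one worth 10 units, each bad one costing 200,
# with confidence collapsing once 5% of audits are bad.
for frac in (0.001, 0.01, 0.04, 0.05, 0.10):
    print(frac, net_audit_value(1000, frac, value_good=10, cost_bad=200,
                                collapse_threshold=0.05))
```

The interesting feature is the discontinuity: below the threshold the barrel still carries positive value despite a few rotten apples, and just past it the value goes sharply negative, which is exactly the “all of them are dead” effect Iang describes.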
Thanks, it is certainly an overdue question whether we can put a positive value on the audit.
In the case of CA systems auditing, the benefit beyond admittance to the browsers is mostly handwaving, plus some sense of the benefit of certificates (a benefit that is not easy to make high).
We could probably put a cost on such an audit; that would be standard business-school NPV costing. It’s not cheap, which certainly makes the point.
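For what it’s worth, a minimal sketch of that NPV costing in Python. The audit fees and discount rate are hypothetical illustrations, not figures from the post or from any real CA audit engagement.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows,
    where cash_flows[0] occurs today and later entries fall one year apart."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Hypothetical: a CA audit costing 250k up front and 150k in each of the
# next two years, discounted at 8%.
audit_costs = [-250_000, -150_000, -150_000]
print(round(npv(audit_costs, 0.08)))  # roughly -517k: not cheap
```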