Premature optimization is the root of all evil

The observation is no less true of legislation than it is of code.
Case in point is the debate over whether to trigger breach notifications when a “reasonable” risk of harm or a “significant” risk of harm exists. Everybody is quick to cite California’s breach law, so I’m going to cite New York’s:

Any person or business which conducts business in New York state, and which owns or
licenses computerized data which includes private information shall disclose any breach of
the security of the system following discovery or notification of the breach in the security
of the system to any resident of New York state whose private information was, or is
reasonably believed to have been, acquired by a person without valid authorization.

New York State General Business Law § 899-aa
“Reasonably believed”. Not “reasonable risk”.
I think this standard is better. The reason is that courts are much better at telling what a reasonable person would believe than they are at assessing probabilities. Philosophically, I could argue that it is unreasonable by definition to believe anything for which you lack empirical evidence, but who am I to argue with 600 years of Anglo-Saxon jurisprudence? Which gets me to a point Adam has been writing about a bit lately — the quality of the data we have about data breaches.
I’m going to recycle part of a comment I made a couple of weeks ago: If somebody loses my PII, and by virtue of that fact my risk of being an ID theft victim increases 1%, I would say that is significant. If my risk increases .0001%, I would say it is insignificant.
However, what do we — and by that I mean anyone who can read the open literature — know about these probabilities? Do we have knowledge of how those probabilities vary across subpopulations? The answer, of course, is no.
Well, obviously, even if the probability of my ID getting stolen given a PII breach is high, I won't care if the probability of the PII breach is low enough in the first place. Kinda like if lightning hits me, I'm gonna be dead. But lightning hardly ever hits people, so why worry? This (I think) is behind the incredibly bad "participate in an anti-fraud program and get out of notifying" loophole in one of the proposed federal bills.
The thing is, we do not (even) know how likely it is that my PII will be breached! We "know" that 150-million-ish records are out there, but that is basically just a lower bound. Nor do we know how likely it is that a given breach will actually be used to further an ID theft.
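To make the shape of the problem concrete: the risk a "risk of harm" trigger is supposed to measure factors into two probabilities, neither of which anyone can pull out of the open literature. Here is a minimal sketch in Python; every number in it is a made-up placeholder for illustration, not an estimate of anything.

```python
# Sketch of the calculation a "risk of harm" trigger implicitly requires.
# Every number below is a hypothetical placeholder; the point of the post
# is that nobody reading the open literature knows the real values.

p_breach = 0.05              # hypothetical: chance my PII is breached in a given year
p_theft_given_breach = 0.01  # hypothetical: chance a breach of my PII leads to ID theft

# Marginal increase in my risk of ID theft attributable to breaches:
added_risk = p_breach * p_theft_given_breach

# The rough yardsticks from the comment recycled above:
SIGNIFICANT = 0.01      # a 1% increase in risk
INSIGNIFICANT = 1e-6    # a 0.0001% increase in risk

print(f"Added risk of ID theft: {added_risk:.6%}")
if added_risk >= SIGNIFICANT:
    print("Significant by the 1% yardstick")
elif added_risk <= INSIGNIFICANT:
    print("Insignificant by the 0.0001% yardstick")
else:
    print("Somewhere in between")
```

The arithmetic is trivial; the inputs are what nobody outside the fraud-scoring shops has, which is the whole point.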
Not knowing these basics, we should not make disclosure conditional on knowledge of states of the world about which our ignorance is so profound.
Let Congress pass a law mandating the collection of data on all breaches. Let them allocate money for ID Analytics (or a competitor, or some sort of quasi-governmental agency) to do the analysis necessary to derive the probabilities. Let this analysis, and the data behind it, be published and vetted by people who actually know their stuff. When we have knowledge, we can act.
