The Wrong Breach Law

Last week, the Senate Judiciary Committee passed the “Personal Data Privacy and Security Act of 2007” (see more in Security Fix, “Federal Data Breach Bills Clear Senate Panel”):

Much of the debate over the relative strength of the various data-breach notification proposals currently circulating on Capitol Hill centers around the precise trigger for notification. In the Leahy-Specter bill, an organization would be required to disclose a data breach or loss if it posed a “significant” risk of harm to the affected consumers.

Meanwhile, the “Notification of Risk to Personal Data Act of 2007,” a bill introduced by Sen. Dianne Feinstein (D-Calif.), would require disclosure only in the event that the breach resulted in a “reasonable risk” of harm, a term of art that groups like Consumers Union say would leave companies more wiggle room in determining when to talk about a consumer data spill. The Identity Theft Prevention Act of 2007, a data breach bill approved by the Senate Commerce Committee last week, also takes this approach. Feinstein’s bill was also approved by the committee today.

Leave it to the lawyers to argue over ‘significant’ versus ‘reasonable,’ while missing the big picture. These folks are worse than the emacs/xemacs split. The liability for getting your significant/reasonable risk assessment wrong, right after you’ve already made a mistake, seems quite high.

Worse, it will make the data that we can mine from Attrition and the Privacy Rights Clearinghouse that much less valid, by adding sampling bias: breaches whose holders decide the risk falls below the trigger will simply never enter the public record. I covered this in “Disclosure, Discretion and Statistics,” and feel it’s worth repeating as Congress debates these points.
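
A risk-conditioned trigger doesn’t just hide individual breaches; it skews every statistic computed from the disclosed set. Here is a minimal simulation sketch of the mechanism, where every number is an invented assumption (no claim that these match real breach data):

```python
import random

# Minimal simulation of the sampling-bias worry. All numbers here are
# assumptions for illustration, not estimates of real breach risk.
random.seed(0)

# Suppose each breach carries some true per-victim risk, mostly small
# (exponential with assumed mean 0.01).
breaches = [random.expovariate(1 / 0.01) for _ in range(10_000)]

# Suppose holders disclose only breaches they judge "significant"
# (assumed threshold of 0.02).
threshold = 0.02
disclosed = [r for r in breaches if r > threshold]

print(f"mean risk, all breaches:      {sum(breaches) / len(breaches):.4f}")
print(f"mean risk, disclosed only:    {sum(disclosed) / len(disclosed):.4f}")
print(f"share of breaches disclosed:  {len(disclosed) / len(breaches):.1%}")
```

Under these assumptions most breaches never surface, and the disclosed set looks several times riskier than the true population. Anything mined from disclosures alone (averages, trends, counts) then describes the filtered sample, not breaches as a whole.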

Dissent points out that US PIRG is saying much the same thing in “Senate breach notification and data protection bills get mixed reactions.”

One comment on "The Wrong Breach Law"

  • Chris says:

    If somebody loses my PII, and by virtue of that fact my risk of being an ID theft victim increases by 1%, I would say that is significant. If my risk increases by .0001%, I would say it is insignificant. I think it is “reasonable” to keep me in the dark about the latter, but not the former.
    However, what do we — and by that I mean anyone who can read the open literature — know about these probabilities? Do we have knowledge of how those probabilities vary across subpopulations? The answer, of course, is no.
    Well obviously, even if the probability of me getting my ID stolen given a PII breach is high, I won’t care if the probability of the PII breach is low enough in the first place. Kinda like if lightning hits me, I’m gonna be dead. But lightning hardly ever hits people, so why worry.
    Thing is, we do not (even) know how likely it is that my PII will be breached! We “know” that 150 millionish records are out there, but that is basically just a lower bound. Nor do we know how likely it is that an ID theft will follow from such a breach. (See the back-of-the-envelope sketch after this comment.)
    Since we do not know these basics, we should not make disclosure conditional on knowledge of states of the world about which our ignorance is profound.
    Let Congress pass a law mandating the collection of data on all breaches. Let them allocate $$ for ID Analytics (or a competitor, or some sort of quasi-governmental agency) to do the analysis necessary to derive the probabilities. Let this analysis, and the data behind it, be published and vetted by people who actually know their stuff. When we have knowledge, we can act.
    More succinctly: premature optimization is the root of all evil.
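
To make the comment’s multiplication concrete: the yearly chance of ID theft via a breach is P(breach) × P(theft | breach), and we know neither factor. A back-of-the-envelope sketch, with every number a hypothetical placeholder:

```python
# Back-of-the-envelope version of the comment's point. Every number
# below is a hypothetical placeholder; nobody knows the real values.

p_breach = 0.05  # assumed: chance my PII is breached in a given year

# Two assumed per-breach risk levels, echoing the 1% vs .0001% contrast.
p_theft_given_breach = {
    "1% per breach": 0.01,
    ".0001% per breach": 0.000001,
}

for label, p_theft in p_theft_given_breach.items():
    # P(ID theft) = P(breach) * P(theft | breach)
    print(f"{label}: yearly P(ID theft) = {p_breach * p_theft:.10f}")
```

Shift either assumed factor by an order of magnitude and the same breach crosses, or misses, any fixed “significance” line. That is the commenter’s argument for collecting the data before legislating the threshold.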

Comments are closed.