The Breach Response Market Is Broken (and what could be done)

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count“]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter. Their blog post.]

Paying for Privacy: Enterprise Breach Edition

We all know that companies don’t want to be named after a breach. Here’s a random question: how much is that worth to a CEO? What would a given organization be willing to pay to keep its name out of the press? (A priori, with at best a prediction of how the press will react.) Please don’t just say “a lot”; please help me quantify it.

Another way to ask this question: What should a business be willing to pay to not report a security breach?

(Bonus question: how is it changing over time?)

HIPAA’s New Breach Rules

Law firm Proskauer has published a client alert that “HHS Issues HIPAA/HITECH Omnibus Final Rule Ushering in Significant Changes to Existing Regulations.” Most interesting to me was the breach notice section:

Section 13402 of the HITECH Act requires covered entities to
provide notification to affected individuals and to the Secretary of
HHS following the discovery of a breach of unsecured protected
health information. HITECH requires the Secretary to post on an
HHS Web site a list of covered entities that experience breaches of
unsecured protected health information involving more than 500
individuals. The Omnibus Rule substantially alters the definition of
breach. Under the August 24, 2009 interim final breach notification
rule, breach was defined as the “acquisition, access, use, or
disclosure of protected health information in a manner not permitted
under [the Privacy Rule] which compromises the security or privacy
of the protected health information.” The phrase “compromises the
security or privacy of [PHI]” was defined as “pos[ing] a significant risk
of financial, reputational, or other harm to the individual.”

According to HHS, “some persons may have interpreted the risk of
harm standard in the interim final rule as setting a much higher
threshold for breach notification than we intended to set. As a result
we have clarified our position that breach notification is necessary in
all situations except those in which the covered entity or business
associate, as applicable, demonstrates that there is a low probability
that the protected health information has been compromised. . . .”

The client alert goes on to lay out the four risk factors that must be considered.

I’m glad to see this. The prior approach has been a full employment act for lawyers, and a way for organizations to weasel out of their ethical and legal obligations. We are likely to see more regulatory updates of this form, despite intensive lobbying.

If organizations want a different risk threshold, it’s up to them to propose one that’s credible to regulators and the public.

Breach Analysis: Data Source Biases

Bob Rudis has a fascinating and important post “Once More Into The [PRC Aggregated] Breaches.” In it, he delves into the various data sources that the Privacy Rights Clearinghouse is tracking.

In doing so, he makes a strong case that data source matters, or as Obi-Wan said, “Luke, you’re going to find that many of the truths we cling to depend greatly on our own point of view.”

I don’t want to detract from the work Bob’s done. He shows pretty clearly that human and accidental factors are exceeding technical ones as a source of incidents that reveal PII. Without detracting from that important result, I do want to add two points.

First, I reported a similar result in work released in Microsoft SIR v11, “Zeroing in on Malware Propagation Methods.” Of course, I was analyzing malware, rather than PII incidents. We need to get away from the idea that security is a purely technical problem.

Second, it’s time to extend our reporting regimes so that there’s a single source for data. The work done by non-profits like the Open Security Foundation and the Privacy Rights Clearinghouse has been awesome. But these folks are spending a massive amount of energy to collect data that ought to be available from a single source.

As we talk about mandatory breach disclosure and reporting, new laws should create and fund a single place where those reports must go. I’m not asking for additional data here (although additional data would be great). I’m asking that the reports we have now all go to one additional place, where an authoritative record will be published.

Of course, anyone who studies statistics knows that there are often different collections of the same data, and competition between resources. You can get your aircraft accident data from the NTSB or the FAA. You can get your crime statistics from the FBI’s Uniform Crime Reports or the National Crime Victimization Survey, and each has advantages and disadvantages. But each is produced because we consider the data an important part of overcoming the problem.

Many nations consider cyber-security to be an important problem, and it’s an area where new laws are being proposed all the time. These new laws really must make the data easier for more people to access.

Breach Notification in France

Over at the Proskauer blog, Cecile Martin writes “Is data breach notification compulsory under French law?”

On May 28th, the Commission nationale de l’informatique et des libertés (“CNIL”), the French authority responsible for data privacy, published guidance on breach notification law affecting electronic communications service providers. The guidance was issued with reference to European Directive 2002/58/EC, the e-Privacy Directive, which imposes specific breach notification requirements on electronic communication service providers.

In France, all data breaches that affect electronic communication service providers need to be reported [to CNIL], regardless of the severity. Once there is a data breach, service providers must immediately send written notification to CNIL, stating the following…

This creates a fascinating data set at CNIL. I hope that they’ll operate with a spirit of transparency, and produce in-depth analyses of the causes of breaches and the efficacy of the defensive measures that companies employ.

Why Breach Disclosures are Expensive

Mr. Tripathi went to work assembling a crisis team of lawyers and customers and a chief security officer. They hired a private investigator to scour local pawnshops and Craigslist for the stolen laptop. The biggest headache, he says, was deciphering how much about the breach his nonprofit needed to disclose…Mr. Tripathi said he quickly discovered just how many ways there were to count to 500. The law requires disclosure only in cases that “pose a significant risk of financial, reputational or other harm to the individual affected.” His team spent hours poring over a backup of the stolen laptop files.
(“Digital Data on Patients Raises Risk of Breaches“, Nicole Perlroth, The New York Times, Dec 18 2011)

This is the effect of trigger provisions: it’s the biggest headache in dealing with a breach. We shouldn’t be burdening businesses with the decision about what a significant risk entails, exposing them to the liability of making a wrong call, or risking that their decisions will be biased.

Big Brother Watch report on breaches

Over at the Office of Inadequate Security, Dissent says everything you need to know about a new report from the UK’s Big Brother Watch:

Extrapolating from what we have seen in this country, what the ICO learns about is clearly only the tip of the iceberg there. I view the numbers in the BBW report as a significant underestimate of the number of breaches that actually occurred because not only are we not hearing from 9% of entities, but many authorities that did report probably did not detect or learn of all of the breaches they actually experienced. BBW notes, “For example, it does seem surprising that in 263 local authorities, not even a single mobile phone or memory stick was lost.” “Surprising” is a very diplomatic word. (“What They Didn’t Know: Big Brother Watch report on breaches highlights why we need mandatory disclosure“)

Representative Bono Mack on the Sony Hack

There’s a very interesting discussion on C-SPAN about the consumer’s right to know about breaches and how the individual is best positioned to decide how to react. “Representative Bono Mack Gives Details on Proposed Data Theft Bill.”

I’m glad to see how the debate is maturing, and how no one bothered with some of the silly arguments we’ve heard in the past.

Data breach fines will prolong the rot

The UK’s Financial Services Authority has imposed a £2.28 million fine for losing a disk containing information about 46,000 customers. (Who was fined is beside the point here.)

I agree heartily with John Dunn’s “Data breach fines will not stop the rot,” but I’d like to go further: Data breach fines will prolong the rot.

In particular, fines encourage firms to hide their problems. Let’s say you believe the widely quoted breach-cost figures of $197 or $202 per record. At $202 per record, breach response and notification for those 46,000 customers would run $9,292,000 (2.6 times the $3,522,000 fine).
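
That comparison is easy to check. Here’s a short Python sketch of the back-of-the-envelope arithmetic, using the figures quoted above (the exchange rate is simply the one implied by the £2.28M/$3,522,000 pair):

```python
# Rough comparison of breach-response cost versus the fine,
# using the per-record cost estimate quoted in the text.
records = 46_000
cost_per_record = 202                 # widely quoted USD estimate
fine_gbp = 2_280_000                  # FSA fine
gbp_to_usd = 3_522_000 / fine_gbp     # implied exchange rate (~1.54)

response_cost = records * cost_per_record   # notification cost, USD
fine_usd = fine_gbp * gbp_to_usd            # fine converted to USD
ratio = response_cost / fine_usd            # how many times the fine

print(f"response cost: ${response_cost:,}")    # $9,292,000
print(f"fine (USD):    ${fine_usd:,.0f}")      # $3,522,000
print(f"ratio:         {ratio:.1f}x")          # 2.6x
```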

At some point, one or more executives makes a call: disclose, or risk the penalties for ignoring the law. If fines were independent of disclosure, they would not influence the decision to disclose. But fines are not independent: they depend heavily on a business first deciding to disclose. The fine may well be worse if you’ve concealed the error, but fines are highly uncertain: neither the size of the fine nor whether one will be imposed at all is known in advance. So unless breach fines are regularly huge, sweeping things under the rug will make more sense than inviting them.

In fact, the rational choice for a firm is to wait until total non-notification penalties reach (1/p)*c, where p is the expected probability of a fine and c is the expected cost of notification. Given estimates that 1/2 to 9/10 of breaches go unreported, that would entail fines of roughly $400 to $2,000 per record. For the breach that started me thinking about this, that’s $18-92 million. Let’s call it 50 million bucks.
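
Working that break-even condition through with the same figures (a sketch of my own arithmetic; the per-record cost and the unreported-breach fractions are the estimates quoted above):

```python
# Break-even fine at which disclosure beats concealment.
# A firm facing certain notification cost c per record, but only a
# probability p of being fined if it conceals, is indifferent when
# the fine reaches (1/p) * c per record.
cost_per_record = 202    # expected notification cost, USD per record
records = 46_000         # size of the breach discussed above

# p = chance a concealed breach is fined; 1/2 to 9/10 of breaches
# going unreported suggests p somewhere between 0.5 and 0.1.
for p in (0.5, 0.1):
    per_record_fine = cost_per_record / p
    total = per_record_fine * records
    print(f"p={p}: fine >= ${per_record_fine:,.0f}/record "
          f"(${total / 1e6:,.1f}M for this breach)")
```

For p between 0.5 and 0.1 this gives roughly $400 to $2,000 per record, i.e. about $18M to $92M for a 46,000-record breach, which is where the “$50 million” shorthand comes from.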

For those wanting to deter breaches, and those wanting to punish the firms which lose control of data, that may be attractive. But for context, for a 2005 explosion which killed 15 people and injured 170 more, BP was fined $50 million, and a single fatality at a wheat handling facility led to a fine of $1.61 million.

Is this breach a problem of the same magnitude as one that kills 15 people? I have trouble seeing it that way. Maybe if we had a better understanding of the link between different breaches and their impact on real people, we could assess it better. Maybe 1,500 of those people whose data was lost will spend the next five years unable to live their lives because of the lingeringly corrupt databases that result. Maybe fraud and corruption are a result of this breach. Unfortunately, despite the growing number of states that call for a risk assessment before notification, such risk assessments are, at best, a set of guesses strung together by well-meaning professionals. More likely, they’re CYA and justification for not notifying. When I say “more likely,” that’s my analysis of motivations and economics. It’s better grounded than any post-breach risk assessment I’ve seen.

I am deeply sympathetic to the desire to punish those who put others at risk, both to deter and for the punitive value.

But fines won’t reliably do that. They will prolong the rot.

Breach Laws & Norms in the UK & Ireland

Ireland has proposed a new Data Breach Code of Practice, and Brian Honan provides useful analysis:

The proposed code strives to reach a balance whereby organisations that have taken appropriate measures to protect sensitive data, e.g. encryption etc., need not notify anybody about the breach, nor if the breach affects non-sensitive personal data or small amounts of sensitive personal data. Yet, companies who have not taken the appropriate measures will indeed be obliged to admit to their shortcomings and shoulder the responsibility for same.

The other benefit I see from this proposed code is how as an industry we all can learn from the mistakes or misfortunes of those who suffer a breach. I believe we would not have as many encrypted laptops and other mobile devices as we do today were it not for the widespread publicity of lost unencrypted devices in the past.

Meanwhile, in the UK, the “Information Commissioner’s Office will not compel companies to report data losses:”

“Under the Data Protection Act organisations have an obligation to ensure that personal information is held securely. We encourage organisations to advise us as soon as they are aware of a data breach which puts their customers at risk,” the ICO said.

“Changes to the law are ultimately a matter for the government. Should legislation be proposed to compel UK organisations to notify people when a data breach occurs, it must be properly considered before it is introduced in the UK.”