$35M for Covering Up a Breach

"The remains of Yahoo just got hit with a $35 million fine because it didn't tell investors about Russian hacking." The headline says most of it, but importantly: "'We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company's response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,' said Steven Peikin, Co-Director of the SEC Enforcement Division."

I often hear people, including lawyers, fixate on "it's not material." Those people should study the SEC's statement carefully.

Doing Science With Near Misses

Last week at Art into Science, I presented “That was Close! Doing Science with Near Misses” (Slides as web page, or download the pptx.)

The core idea is that we should borrow from aviation to learn from near misses, and learn to protect ourselves and our systems better. The longer form is in the draft "That Was Close! Reward Reporting of Cybersecurity 'Near Misses'."

The talk was super-well received and I'm grateful to Sounil Yu and the participants in the philosophy track, who juggled so we could collaborate and brainstorm. If you'd like to help, by far the most helpful way would be to tell us about a near miss you've experienced using our form, and give us feedback on the form. Since Thursday, I've added a space for that feedback, and made a few other suggested adjustments which were easy to implement.

If you’ve had a chance to think about definitions for either near misses or accidents, I’d love to hear about those, in comments, in your blog (trackbacks should work), or whatever works for you. If you were at Art Into Science, there’s a #near-miss channel on the conference Slack, and I’ll be cleaning up the notes.

Image from the EHS Database, who have a set of near miss safety posters.

The Breach Response Market Is Broken (and what could be done)

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1½-page) comment for the FTC's PrivacyCon, and the FTC has published it here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count“]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they'd like to see in a breach notice letter. Their blog post.]

Paying for Privacy: Enterprise Breach Edition

We all know that companies don't want to be named after a breach. Here's a random question: how much is that worth to a CEO? What would a given organization be willing to pay to keep its name out of the press? (A priori, with at best a prediction of how the press will react.) Please don't say "a lot"; please help me quantify it.

Another way to ask this question: What should a business be willing to pay to not report a security breach?

(Bonus question: how is it changing over time?)
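One way to make the question concrete is an expected-value comparison between disclosing and staying quiet. The sketch below is purely illustrative; every number in it is an assumption I made up, not an estimate for any real firm:

```python
# Expected-value sketch of the disclosure decision. Every number here
# is an illustrative assumption, not data from any real breach.

def expected_cost(base_cost: float, prob_discovery: float,
                  cost_if_discovered: float) -> float:
    """Total expected cost: the certain costs plus the chance that
    concealment is later discovered and penalized."""
    return base_cost + prob_discovery * cost_if_discovered

# Disclose now: known response costs, no later discovery risk.
disclose = expected_cost(base_cost=2_000_000,
                         prob_discovery=0.0,
                         cost_if_discovered=0.0)

# Conceal: lower up-front cost, but a chance of fines and a worse
# story later (cf. the $35M Yahoo penalty).
conceal = expected_cost(base_cost=500_000,
                        prob_discovery=0.4,
                        cost_if_discovered=40_000_000)

print(f"disclose: ${disclose:,.0f}, conceal: ${conceal:,.0f}")
```

The interesting empirical work is in pinning down the probability and penalty terms; the arithmetic itself is trivial, which is exactly why the lack of quantification is so striking.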

HIPAA's New Breach Rules

Law firm Proskauer has published a client alert that “HHS Issues HIPAA/HITECH Omnibus Final Rule Ushering in Significant Changes to Existing Regulations.” Most interesting to me was the breach notice section:

Section 13402 of the HITECH Act requires covered entities to provide notification to affected individuals and to the Secretary of HHS following the discovery of a breach of unsecured protected health information. HITECH requires the Secretary to post on an HHS Web site a list of covered entities that experience breaches of unsecured protected health information involving more than 500 individuals. The Omnibus Rule substantially alters the definition of breach. Under the August 24, 2009 interim final breach notification rule, breach was defined as the "acquisition, access, use, or disclosure of protected health information in a manner not permitted under [the Privacy Rule] which compromises the security or privacy of the protected health information." The phrase "compromises the security or privacy of [PHI]" was defined as "pos[ing] a significant risk of financial, reputational, or other harm to the individual."

According to HHS, "some persons may have interpreted the risk of harm standard in the interim final rule as setting a much higher threshold for breach notification than we intended to set. As a result we have clarified our position that breach notification is necessary in all situations except those in which the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised. . . ."

The client alert goes on to lay out the four risk factors that must be considered.
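For readers who want the shape of that assessment, the four factors come from the Omnibus Rule itself (45 CFR 164.402(2)); the sketch below is an illustration of the decision structure, not legal advice, and the function name is my own:

```python
# Sketch of the Omnibus Rule's four-factor breach risk assessment
# (45 CFR 164.402(2)). Illustrative only; not legal advice.

FOUR_FACTORS = [
    "the nature and extent of the PHI involved",
    "the unauthorized person who used the PHI or to whom it was disclosed",
    "whether the PHI was actually acquired or viewed",
    "the extent to which the risk to the PHI has been mitigated",
]

def notification_required(low_probability_demonstrated: bool) -> bool:
    """Under the Omnibus Rule, notice is the default: it is required
    unless the entity demonstrates, after weighing all four factors,
    a low probability that the PHI has been compromised."""
    return not low_probability_demonstrated

for factor in FOUR_FACTORS:
    print("consider:", factor)
print("notify?", notification_required(low_probability_demonstrated=False))
```

Note the inversion from the old standard: the burden is now on the covered entity to demonstrate low probability of compromise, rather than on anyone to show significant risk of harm.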

I’m glad to see this. The prior approach has been a full employment act for lawyers, and a way for organizations to weasel out of their ethical and legal obligations. We are likely to see more regulatory updates of this form, despite intensive lobbying.

If organizations want a different risk threshold, it’s up to them to propose one that’s credible to regulators and the public.

Breach Analysis: Data Source biases

Bob Rudis has a fascinating and important post, "Once More Into The [PRC Aggregated] Breaches." In it, he delves into the various data sources that the Privacy Rights Clearinghouse is tracking.

In doing so, he makes a strong case that data source matters, or as Obi-Wan said, “Luke, you’re going to find that many of the truths we cling to depend greatly on our own point of view:”

[Figure: breach counts by metatype, per year]

I don’t want to detract from the work Bob’s done. He shows pretty clearly that human and accidental factors are exceeding technical ones as a source of incidents that reveal PII. Without detracting from that important result, I do want to add two points.
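The kind of aggregation behind that result is simple to reproduce: collapse the detailed incident types into "technical" vs. "human/accidental" metatypes and tally. The records and category mapping below are toy data I made up to show the shape of the analysis, not PRC data:

```python
from collections import Counter

# Toy records in the rough shape of PRC-style breach entries;
# the types and counts here are invented for the sketch.
incidents = [
    {"type": "hack"}, {"type": "lost_device"}, {"type": "insider"},
    {"type": "lost_device"}, {"type": "unintended_disclosure"},
    {"type": "hack"}, {"type": "insider"},
]

# Collapse detailed breach types into coarse metatypes.
METATYPE = {
    "hack": "technical",
    "lost_device": "human/accidental",
    "insider": "human/accidental",
    "unintended_disclosure": "human/accidental",
}

counts = Counter(METATYPE[i["type"]] for i in incidents)
print(counts.most_common())
```

The substantive question, as Bob's post shows, is which data source you feed into a tally like this: different collectors see different slices of the breach population, and the metatype ratios shift with the source.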

First, I reported a similar result in work released in Microsoft SIR v11, “Zeroing in on Malware Propagation Methods.” Of course, I was analyzing malware, rather than PII incidents. We need to get away from the idea that security is a purely technical problem.

Second, it’s time to extend our reporting regimes so that there’s a single source for data. The work done by non-profits like the Open Security Foundation and the Privacy Rights Clearinghouse has been awesome. But these folks are spending a massive amount of energy to collect data that ought to be available from a single source.

As we talk about mandatory breach disclosure and reporting, new laws should create and fund a single place where those reports must go. I’m not asking for additional data here (although additional data would be great). I’m asking that the reports we have now all go to one additional place, where an authoritative record will be published.

Of course, anyone who studies statistics knows that there are often multiple collections of, and competition between, resources. You can get your aircraft accident data from the NTSB or the FAA. You can get your crime statistics from the FBI's Uniform Crime Reports or the National Crime Victimization Survey, and each has advantages and disadvantages. But each is produced because we consider the data an important part of overcoming the problem.

Many nations consider cyber-security to be an important problem, and it’s an area where new laws are being proposed all the time. These new laws really must make the data easier for more people to access.

Breach Notification in France

Over at the Proskauer blog, Cecile Martin writes "Is data breach notification compulsory under French law?"

On May 28th, the Commission nationale de l’informatique et des libertés (“CNIL”), the French authority responsible for data privacy, published guidance on breach notification law affecting electronic communications service providers. The guidance was issued with reference to European Directive 2002/58/EC, the e-Privacy Directive, which imposes specific breach notification requirements on electronic communication service providers.

In France, all data breaches that affect electronic communication service providers need to be reported [to CNIL], regardless of the severity. Once there is a data breach, service providers must immediately send written notification to CNIL, stating the following…

This creates a fascinating data set at CNIL. I hope that they'll operate with a spirit of transparency, and produce in-depth analysis of the causes of breaches and the efficacy of the defensive measures that companies employ.

Why Breach Disclosures are Expensive

Mr. Tripathi went to work assembling a crisis team of lawyers and customers and a chief security officer. They hired a private investigator to scour local pawnshops and Craigslist for the stolen laptop. The biggest headache, he says, was deciphering how much about the breach his nonprofit needed to disclose…Mr. Tripathi said he quickly discovered just how many ways there were to count to 500. The law requires disclosure only in cases that “pose a significant risk of financial, reputational or other harm to the individual affected.” His team spent hours poring over a backup of the stolen laptop files.
("Digital Data on Patients Raises Risk of Breaches," Nicole Perlroth, The New York Times, Dec 18 2011)

This is the effect of trigger provisions: it’s the biggest headache in dealing with a breach. We shouldn’t be burdening businesses with the decision about what a significant risk entails, exposing them to the liability of making a wrong call, or risking that their decisions will be biased.

Big Brother Watch report on breaches

Over at the Office of Inadequate Security, Dissent says everything you need to know about a new report from the UK’s Big Brother Watch:

Extrapolating from what we have seen in this country, what the ICO learns about is clearly only the tip of the iceberg there. I view the numbers in the BBW report as a significant underestimate of the number of breaches that actually occurred because not only are we not hearing from 9% of entities, but many authorities that did report probably did not detect or learn of all of the breaches they actually experienced. BBW notes, "For example, it does seem surprising that in 263 local authorities, not even a single mobile phone or memory stick was lost." "Surprising" is a very diplomatic word. ("What They Didn't Know: Big Brother Watch report on breaches highlights why we need mandatory disclosure")

Representative Bono Mack on the Sony Hack

There’s a very interesting discussion on C-SPAN about the consumer’s right to know about breaches and how the individual is best positioned to decide how to react. “Representative Bono Mack Gives Details on Proposed Data Theft Bill.”

I’m glad to see how the debate is maturing, and how no one bothered with some of the silly arguments we’ve heard in the past.