Category: breaches

Paying for Privacy: Enterprise Breach Edition

We all know that companies don’t want to be named in the press after a breach. Here’s a random question: how much is that worth to a CEO? What would a given organization be willing to pay to keep its name out of the press? (A priori, with at best a prediction of how the press will react.) Please don’t say “a lot”; please help me quantify it.

Another way to ask this question: What should a business be willing to pay to not report a security breach?

(Bonus question: how is it changing over time?)

HHS & Breach Disclosure

There’s good analysis at “HHS breach investigations badly backlogged, leaving us in the dark”:

To say that I am frequently frustrated by HHS’s “breach tool” would be an understatement. Their reporting form and coding often makes it impossible to know – simply by looking at their entries – what type of breach occurred. Consider this description from one of their entries:

“Theft, Unauthorized Access/Disclosure”,”Laptop, Computer, Network Server, Email”

So what happened there? What was stolen? Everything? And what types of patient information were involved?

Or how about this description:

“Unauthorized Access/Disclosure,Paper”

What happened there? Did a mailing expose SSN in the mailing labels or did an employee obtain and share patients’ information with others for a tax refund fraud scheme? Your guess is as good as mine. And HHS’s breach tool does not include any data type fields that might let us know whether patients’ SSN, Medicare numbers, diagnoses, or other information were involved.

What can I say but “I agree”?

Disclosures should talk about the incident and the data. Organizations are already paying the PR cost; let’s start learning.

The incident should be specified using either the Broad Street taxonomy (covered in the Microsoft Security Intelligence Report here) or VERIS. It would be helpful to include details like the social engineering mail used (so we can study tactics) and detection rates for the malware, from something like VirusTotal.
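To make that concrete, here’s a hypothetical, simplified incident record, loosely modeled on VERIS’s actor/action/asset/attribute structure. The field names below are illustrative, not the official VERIS schema:

```python
# A hypothetical, simplified incident record, loosely following the
# VERIS "A4" structure (actor, action, asset, attribute). Field names
# are illustrative, not the official VERIS schema.
import json

incident = {
    "actor": {"external": {"variety": "organized crime"}},
    "action": {"social": {"variety": "phishing", "vector": "email"}},
    "asset": {"variety": ["laptop", "network server"]},
    "attribute": {"confidentiality": {"data_types": ["SSN", "diagnosis"]}},
    # The extra detail argued for above: the lure itself, plus
    # anti-virus detection rates (e.g., from a VirusTotal scan).
    "evidence": {
        "phishing_sample": "Subject: Overdue invoice ...",
        "malware_detection_rate": "17/46 engines",
    },
}

print(json.dumps(incident, indent=2))
```

A disclosure with even this much structure would let analysts aggregate incidents instead of guessing at them.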

For the data, it would be useful to explain (as Dissent says) what was taken. This isn’t simply a matter of general analysis; it can be used for consumer protection. For example, if you use knowledge-based backup authentication, then knowing that every taxpayer in South Carolina has had their address exposed tells you something about the efficacy of a question about where you lived in 2000. (I don’t know if that data was exposed in the SC tax breach; I’m just picking a recent example.)
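To see why the exposed data types matter here, consider a minimal sketch with purely illustrative numbers: the strength of a knowledge-based question is roughly the guessing entropy of its answer, and a breach that reveals the answer drives that entropy to zero.

```python
import math

def kba_entropy_bits(plausible_answers: int) -> float:
    """Effective strength of a KBA question, modeled as the entropy of
    a uniform guess among the answers plausible to the attacker."""
    return math.log2(plausible_answers)

# Before a breach: the attacker guesses among (say) thousands of
# addresses in a metro area. The number is purely illustrative.
print(f"pre-breach:  {kba_entropy_bits(10_000):.1f} bits")

# After an address-history breach: one known answer, nothing to guess.
print(f"post-breach: {kba_entropy_bits(1):.1f} bits")
```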

Anyway, the post is worth reading, and the question of how we learn from breaches is worth discussing in depth.

New York Times gets Pwned, Responds all New School

So there’s a New York Times front-page story on how “Hackers in China Attacked The Times for Last 4 Months.”

I just listened to the NPR story with Nicole Perlroth, who closed out saying:

“Of course, no company wants to come forward and voluntarily say ‘hey, we were hacked by China, here’s how it happened, here’s what they took,’ because they’re probably scared of what it will do to their stock price or their reputation. In this case, what was interesting was that it was my own employer that had been hacked. We felt that it was very important to come out with this and say ‘this is how easy it is for them to break into any US company, and here’s how they’re doing it.’” [Link added.]

On Twitter, Pete Lindstrom suggested that it “seems they are highlighting successes, not woes.” Zooko suggested several things, including “perhaps since it is news, the NYT is happy to print it, because *any* news sells papers?” and “Or is this a cultural change, where people stop attempting to secure their perimeter and hiding their failure to do so?”

Me, I believe it’s culture change, but am aware of the risk of confirmation bias. When I think back to 2008, I think the peanut gallery would have been pointing and giggling, and I think we’re over that.

Thoughts?

"Cyber" Insurance and an Opportunity

There’s a fascinating article on PropertyCasualty360, “As Cyber Coverage Soars, Opportunity Clicks” (thanks to Jake Kouns and Chris Walsh for the pointer). I don’t have a huge amount to add, but I wanted to highlight some excerpts that drew my attention:

Parisi observes that pricing has also become more consistent over the past 12 months. “The delta of the pricing on an individual risk has gotten smaller. We used to see pricing differences that would range anywhere from 50-100 percent among competing carriers in prior years,” he says.

I’m not quite sure how that pricing claim lines up with this:

“The guys that have been in the business the longest—for example, Ace, Beazley, Hiscox and AIG—their books are now so large that they handle several claims a week,” says Mark Greisiger, president of NetDiligence. Their claims-handling history presumably means these veteran players can now apply a lot of data intelligence to their risk selection and pricing.

but the claim that individual insurers are handling several claims a week gives us a way to put a lower bound on the breaches that are occurring. It’s somewhat dependent on what you mean by “several,” but generally, I put “several” above “a couple,” which means at least 3 claims per week, or roughly 150 per insurer per year, which is 600 across Ace, Beazley, Hiscox and AIG.
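For what it’s worth, here is that back-of-the-envelope arithmetic spelled out, reading “several” as at least three and rounding the year down to 50 weeks so the result stays a lower bound:

```python
# Lower-bound arithmetic from "several claims a week".
claims_per_week = 3   # reading "several" as at least three
weeks_per_year = 50   # rounded down, which keeps this a lower bound
insurers = 4          # Ace, Beazley, Hiscox, AIG

per_insurer = claims_per_week * weeks_per_year  # 150 claims/year
total = per_insurer * insurers                  # 600 across the four
print(per_insurer, total)
```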

Then there’s this:

Despite a competitive market and significant capacity, underwriting appetite for high-risk classes varies widely. For instance, schools have significant PII exposure and are frequent targets of attacks, such as the October 2012 “ProjectWestWind” action by “hacktivist” group Anonymous to release personal records from more than 100 top universities.

So schools can be hard risks to place. While some U.S. carriers—such as Ace, Chartis and CNA—report being a market for this business class, Kiln currently has no appetite for educational institutions, with Randles citing factors such as schools’ lack of technology controls across multiple campuses, lack of IT budgets and extensive population of users who regularly access data.

Lastly, I’ll add that an insurance company that wants to market itself could easily leap to the front of mind for their prospective customers the way Verizon did. Think back 5 years, to when Verizon launched their DBIR. Then, I wrote:

Sharing data gets your voice out there. Verizon has just catapulted themselves into position as a player who can shape security.

That’s because of their willingness to provide data. I was going to say “give away,” but they’re really not giving the data away. They’re trading it for respect and credibility. (“Can You Hear Me Now?”)

I look forward to seeing which of the big insurance companies, the folks who are handling “several claims a week”, is first to market with good analysis.

South Carolina

It’s easy to feel sympathy for the many folks impacted by the hacking of South Carolina’s Department of Revenue. With 3.6 million taxpayer social security numbers stolen, those people are the biggest victims, and I’ll come back to them. It’s also easy to feel sympathy for the folks in IT and IT management, all the way up to the Governor. The folks in IT made a call to use Trustwave for PCI monitoring, because Trustwave offered PCI compliance. They also made the call to not use a second monitoring system. That decision may look easy to criticize, but I think it’s understandable. Having two monitoring systems means more than doubling staff workloads in responding. (You have to investigate everything, and then you have to correlate and understand discrepancies.)
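To make that workload point concrete, here’s a toy model with purely illustrative numbers: every alert from each system has to be investigated, and on top of that, every disagreement between the two systems has to be reconciled.

```python
# Toy model: why a second monitoring system more than doubles response
# work. All numbers are illustrative.
alerts_a = 100   # daily alerts from system A
alerts_b = 100   # daily alerts from system B
agree = 60       # alerts raised by both systems

# Every alert gets investigated, and every discrepancy (an alert one
# system raised but the other didn't) needs extra correlation work.
discrepancies = (alerts_a - agree) + (alerts_b - agree)
workload = alerts_a + alerts_b + discrepancies

print(workload)  # 280 units of work, vs. 100 with a single system
```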

At the same time, I think it’s possible to take important lessons from what we do know. Each of these is designed to be a testable claim.

Compliance doesn’t prevent hacking.

In his September letter to Haley, [State Inspector General] Maley concluded that while the systems of cabinet agencies he had finished examining could be tweaked and there was a need for a statewide uniform security policy, the agencies were basically sound and the Revenue Department’s system was the “best” among them. (“Foreign hacker steals 3.6 million Social Security numbers from state Department of Revenue“, Tim Smith, Greenville Online)

I believe the reason that compliance doesn’t prevent hacking is that compliance systems are developed without knowledge of what really goes wrong. That is, they lack feedback loops. They lack testability. They lack any mechanism for ensuring that effort has payoff. (My favorite example is password expiration times. Precisely how much more secure are you with a 60-day expiration policy versus a 120-day policy? Is such a policy worth doubling staff effort?)
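As a back-of-the-envelope answer to that question, here is a toy model, assuming a stolen password is compromised at a uniformly random moment within an expiration cycle:

```python
# Toy model of password-expiration payoff. Assumes a compromise occurs
# at a uniformly random point within an expiration cycle, so a stolen
# password stays valid for half the cycle on average.
def expected_exposure_days(expiry_days: float) -> float:
    return expiry_days / 2

for policy in (60, 120):
    print(f"{policy}-day policy: ~{expected_exposure_days(policy):.0f} "
          f"days of expected exposure, {365 / policy:.1f} forced "
          f"resets per user per year")
```

Under these assumptions, halving the expiration window halves the expected exposure but doubles the reset workload, and the policy never quantifies whether that trade is worth it.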

You don’t know how your compliance program differs from the SC DoR’s.

I’m willing to bet that 90% of my readers do not know what exactly the SC DoR did to protect their systems. You might know that it was PCI (and Trustwave as a vendor). But do you know all the details? If you don’t know the details, how can you assess if your program is equivalent, better, or worse? If you can’t do that, can you sleep soundly?

But actually, that’s a red herring. Since compliance programs often contain a bunch of wasted effort, knowing how yours lines up with theirs is less relevant than you’d think. Maybe you’re slacking on something they put a lot of time into. Good for you! Or not: maybe that was the very control that would have stopped an attacker, if only you’d done a little more of it. Comparing one to one is a lot less interesting than comparing against a larger data set.

We don’t know what happened in South Carolina

Michael Hicks, the director of the Maryland Cybersecurity Center at the University of Maryland, said states needed a clearer understanding of the attack in South Carolina.

“The only way states can raise the level of vigilance,” Mr. Hicks said, “is if they really get to the bottom of what really happened in this attack.” (“Hacking of Tax Records Has Put States on Guard,” Robbie Brown, New York Times.)

Mr. Hicks gets a New School hammer, for nailing that one.

Lastly, I’d like to talk about the first victims: the 3.6 million taxpayers. That’s roughly 78% of the 4.6 million people in the state, which could reasonably be the entire taxpaying population. We don’t know how much data was actually leaked. (What we know is a floor. Was it entire tax returns? Was it all the information that banks report? How much of it was shared data from the IRS?) We know that these victims are at long-term risk and have only short-term protection. We know that their SSNs are out there, and I haven’t heard that the Social Security Administration is offering them new ones. There’s a real, and under-discussed, difference between SSN breaches and credit card breaches. Let’s not even talk about biometric breaches here.

At the end of the day, there are a lot of victims of this breach. And while it’s easy to point fingers at the IT folks responsible, I’m starting to wonder if perhaps we’re all responsible. To the extent that few of us can answer Mr. Hicks’s question, to the extent that we don’t learn from one another’s mistakes, don’t we all make defending our systems harder? We should learn what went wrong, and we should recognize that not talking about root causes helps things go wrong in the future.

Without detracting from the crime that happened in South Carolina, there’s a bigger crime if we don’t learn from it.

“Update”: We now know a fair amount
The above was written and accidentally not posted a few weeks ago. I’d like to offer up my thanks to the decision makers in South Carolina for approving Mandiant’s release of a public and technically detailed version of their report, which is short and fascinating. I’d also like to thank the folks at Mandiant for writing in clear, understandable language about what happened. Nicely done, folks!

How to mess up your breach disclosure

Congratulations to Visa and Mastercard, the latest companies to fail to notify consumers in a prompt and clear manner, thus inspiring a shrug and a sigh.

No, wait, there isn’t a clear statement, but there is rampant speculation and breathless commentary.

It’s always nice to see clear reminders that the way to get people excited about a breach is to dribble out the information. Because Visa and Mastercard aren’t talking, Brian Krebs gets to piece together the story and decide how the public will come to understand it. For what little the public knows, see “MasterCard, VISA Warn of Processor Breach.”

Why Breach Disclosures are Expensive

Mr. Tripathi went to work assembling a crisis team of lawyers and customers and a chief security officer. They hired a private investigator to scour local pawnshops and Craigslist for the stolen laptop. The biggest headache, he says, was deciphering how much about the breach his nonprofit needed to disclose…Mr. Tripathi said he quickly discovered just how many ways there were to count to 500. The law requires disclosure only in cases that “pose a significant risk of financial, reputational or other harm to the individual affected.” His team spent hours poring over a backup of the stolen laptop files.
(“Digital Data on Patients Raises Risk of Breaches“, Nicole Perlroth, The New York Times, Dec 18 2011)

This is the effect of trigger provisions: deciding whether to disclose becomes the biggest headache in dealing with a breach. We shouldn’t be burdening businesses with deciding what constitutes a significant risk, exposing them to the liability of making the wrong call, or risking that their decisions will be biased.

Dear Verisign: Trust requires Transparency

On their blog, Verisign made the following statement, which I’ll quote in full:

As disclosed in an SEC filing in October 2011, parts of Verisign’s non-production corporate network were penetrated. After a thorough analysis of the attacks, Verisign stated in 2011, and reaffirms, that we do not believe that the operational integrity of the Domain Name System (DNS) was compromised.

We have a number of security mechanisms deployed in our network to ensure the integrity of the zone files we publish. In 2005, Verisign engineered real-time validation systems that were designed to detect and mitigate both internal and external attacks that might attempt to compromise the integrity of the DNS.

All DNS zone files were and are protected by a series of integrity checks including real-time monitoring and validation. Verisign places the highest priority on security and the reliable operation of the DNS.

This does not suffice to restore my trust in a company to which we have delegated trust decisions across thousands of websites. Verisign concealed a breach from us, and possibly from its own management, according to Joseph Menn, who reports:

The 10-Q said that security staff responded to the attack soon afterward but failed to alert top management until September 2011. It says nothing about a continuing investigation […]

Reasonable people can differ on what constitutes a thorough analysis. Reasonable people can differ on response activity. We can probably all learn a lot from what happened. What reasonable people can’t dispute is that Verisign has paid a PR cost, and that they’ll continue to pay it until those who are supposed to trust them are satisfied. That satisfaction requires more than the statements made above. I’m sure Verisign would prefer that the story go away, in which case they should release the report today (with whatever minor redactions are appropriate).

If Verisign has what they believe is a thorough analysis, they need to release it as a step along the way to restoring trust in their ability to operate important parts of the internet infrastructure. And Verisign needs to release real information soon, before the technical public comes to see them as stonewalling.

[Update: Welcome, Schneier blog readers! I wanted to clarify the status: we have a very data-free set of assertions from someone claiming to be a Symantec employee. We do not yet have a detailed report on the investigation that addresses who knew what when, and how they knew it.]

The Diginotar Tautology Club

I often say that breaches don’t drive companies out of business. Some people are asking me to eat crow because Vasco is closing its subsidiary Diginotar after the subsidiary was severely breached, failed to notify relying parties, misled people when it did, and then allowed perhaps hundreds of thousands of people to fall victim to a man-in-the-middle attack. I think Diginotar is the exception that proves the rule.

Statements about Diginotar going out of business are tautological. They take a single incident, selected because the company is going out of business, and then generalize from that. Unfortunately, Diginotar is the only CA that has gone out of business, and so anyone generalizing from it is starting from a set of businesses that have gone out of business. If you’d like to make a case for taking lessons from Diginotar, you must address why Comodo hasn’t gone belly up after *4* breaches (as counted by Moxie in his BlackHat talk).

It would probably also be helpful to comment on how Diginotar’s revenue rate of 200,000 euros in 2011 might have contributed to its corporate parent deciding that damage control was the most economical choice, and what lessons other businesses can take from that.

To be entirely fair, I don’t know that Diginotar’s costs were above 200,000 euros per year, but a quick LinkedIn search shows 31 results, most of whom have not yet updated their profiles.

So take what lessons you want from Diginotar, but please don’t generalize from a set of one.

I’ll suggest that “be profitable” is an excellent generalization from businesses that have been breached and survived.

[Update: @cryptoki pointed me to the acquisition press release, which indicates 45 employees, and “DigiNotar reported revenues of approximately Euro 4.5 million for the year 2009. We do not expect 2010 to be materially different from 2009. DigiNotar’s audited statutory report for 2009 showed an operating loss of approximately Euro 280.000, but an after-tax profit of approximately Euro 380.000.” I don’t know how to reconcile the press statements of January 11th and August 30th.]

The Rules of Breach Disclosure

There’s an interesting article over at CIO Insight:

The disclosure of an email-only data theft may have changed the rules of the game forever. A number of substantial companies may have inadvertently taken legislating out of the hands of the federal and state governments. New industry pressure will be applied going forward for the loss of fairly innocuous data. This change in practice has the potential to affect every CIO who collects “contact” information from consumers, maybe even from employees in an otherwise purely commercial context. (“Breach Notification: Time for a Wake Up Call“, Mark McCreary of Fox Rothschild LLP)

My perspective is that breach disclosure now hurts far less than it did a mere five years ago, and spending substantial time on analysis of “do we disclose” is returning less and less value. As companies disclose, we’re getting more and more data that CIOs can use to improve IT operations. We can, in a very real way, start to learn from each other’s mistakes.

Over the next few years, this perspective will trickle both upwards and downwards. CEOs will be confused by the desire to hide a breach, knowing that the coverup can be worse than the crime. And security professionals will be less and less able to keep saying that one breach can destroy your company in the face of overwhelming evidence to the contrary.

As the understanding spreads, so will data. We’ll see an explosion of ways to talk about issues, ways to report on them and analyze them. In a few years, we’ll see an article titled “Breach Analysis: Read it with your coffee” because daily analysis of breaches will be part of a CIO’s job.

Thanks to the Office of Inadequate Security for the pointer.
