Category: breaches

Breach Harm: Should Arizona be required to notify?

Over at the Office of Inadequate Security, Pogo was writing about the Lulzsec hacking of the Arizona State Police. Her article is "A breach that crosses the line?"

I’ve been blogging for years about the dangers of breaches. I am concerned about dissidents who might be jailed or killed for their political views, abortion doctors whose lives are endangered from fringe elements, women who have tried to escape abusive spouses, porn actors whose families may be harassed by the publication of their names and addresses, confidential informants and law enforcement officers, and immigrants whose personal information was illegally revealed to law enforcement and to media by the actions of Utah state employees. All of those people have been put at risk of physical harm as a result of data breaches.

To date, what we know to have been taken from Arizona's (apparently) insufficiently secured systems is the names and addresses of people who have good reason to think they're endangered by the release of that information.

I want to talk about four major risks here: the risk of harm, the risk of attributing all of the blame to Lulzsec, the risk of cover-up, and the risk of believing our analyses are complete.

The first risk, the risk of harm, Pogo covers fairly well. I have a cousin who works in a correctional facility. Their house, their phone, their cable: all of these are listed in the wife's name, and I understand the fear of knowing that a real criminal thinks you're at fault and knows where your family lives. I bring this up because it's my family too, and that's important because I'm about to discuss the apportionment of blame, and I want to be clear that I'm doing so with some skin in the game.

The second risk is the risk of attributing all of the responsibility to Lulzsec. Some of the fault here is that of the State of Arizona Department of Public Safety (AZDPS). AZDPS made a decision to collect information, and it had a responsibility to protect it. AZDPS also made a decision to store that information in electronic form. AZDPS made a decision to store that electronic information in an internet-accessible fashion. AZDPS made decisions about computer security which, in hindsight, are presumably being reconsidered. However elite the ninjas of Lulzsec may or may not have been, however many laser-eyed sharks they might have employed, if the information had been stored only on paper in a locked room in Arizona, it would have been far more secure. And if Lulzsec could break in, others may already have broken in and stolen the data for purposes far more dangerous than embarrassing AZDPS. AZDPS is not unique in this set of choices. The organization reaps real benefits from putting the data online, and many of those benefits, such as speed and efficiency, are probably shared with employees, customers, or citizens. All that said, Lulzsec did increase the risk by making the data widely available to anyone. (They also marginally decreased the risk by making people aware it's out there, but the net risk is still increased.)

The third risk is the risk of cover-up. AZDPS is one of many organizations that collect information today. Like most of those organizations, AZDPS makes some investments in security to protect the data. I suspect they make more investments than many others, since they know about the sensitivity of the data and the many motivated attackers. Interestingly, their policy states that "Security methods and measures have been integrated into the design, implementation and day-to-day practices of the entire Azdps.gov web portal" (AZDPS Privacy Policy as of January 4, 2010, via the Wayback Machine), which strikes me as a mature statement compared to the common "we follow industry-leading best practices in buying a firewall." Most organizations that are hacked are not hacked by Lulzsec, and so may choose to cover up. AZDPS should investigate what went wrong, and share their analysis so others can learn from them.

The final risk is the risk of believing our analysis is complete. Much as I pointed out in "How the Epsilon Breach Hurts Consumers," it's easy to come to an analysis which misses important elements because the investigators have a defined scope. They are more likely to talk to those close to the system, and thus will be influenced by their perspectives and orientation. By sharing information about breaches, different perspectives can emerge from a chaotic discussion. This is a perspective deeply influenced by Hayek. Unlike markets, information security lacks a pricing mechanism to help us bring all of the perspectives into a single sharp focus. It's hard to vary the security we offer to see what people will pay for, and we lack good information about the inputs that led to breaches or other outcomes. Without that information, it's hard to know what security is cost-effective, or appropriate in light of the duties an information collector takes on by collecting data.

So to bring this together around those risks: the people whose data was exposed (the first risk) were exposed in part because most organizations never issue a good report on what went wrong (the third risk), and so the choices made in collecting and storing data are made in an information vacuum (the second risk).

And so the Arizona DPS should take seriously their public safety mission. They should perform a deep investigation of what went wrong, and they should share it with the citizens of Arizona and people around the world. If they do so, and their counterparts do so, we'll all be able to learn from each other's mistakes, and we'll all be able to, in that hated phrase, "do more with less."

That’s how public entities, operating with data about citizens, should be operating, and in my personal opinion, ought to be required to operate.

What does Coviello's RSA breach letter mean?

After spending a while crowing about the ChoicePoint breach, I decided that laughing about breaches doesn’t help us as much as analyzing them. In the wake of RSA’s recent breach, we should give them time to figure out what happened, and look forward to them fulfilling their commitment to share their experiences.

Right now we don’t know a lot, and this pair of sentences is getting a lot of attention:

Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.

With the exception of RSA and its employees, I may be one of the best-positioned people to talk about their protocols, because a long time ago I reverse-engineered their system. And when I did, I discovered that "The protocol used by Security Dynamics has substantial flaws which appear to be exploitable and reduce the security of a system using Security Dynamics software to that of a username and password." It's important to note that that's from a 1996 paper, and the flaws I discovered have since been corrected.

I’ve been trying to keep up with the actual facts revealed, and I’ve read a lot of analysis on what happened. In particular, Steve Bellovin’s technical analysis is quite good, and I’d like to add a little nuance and color. Bellovin writes: “Is the risk that the attackers have now learned H? If that’s a problem, H was a bad choice to start with.” In conversations after I wrote my 1996 paper, it was clear to me that John Brainard and his engineering colleagues knew that. (Their marketing department felt differently.) RSA has lots of cryptographers who still know it.

The nuance I'd like to point out is that many prominent cryptographers had reviewed their system before I noticed the key management error. So it's possible that that lesson lies behind the statement that the information could be used. That is, the crypto or the implementation, however mindful of Kerckhoffs's Principle, could still contain flaws.

If someone had compromised the database of secrets that enable synchronization, then that would “enable a successful direct attack on” one or more customers. So speculation that that’s the compromise cannot be correct without the CEO of a publicly traded company lying in statements submitted to the SEC. That seems unlikely.
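To make the stakes concrete: SecurID's actual algorithm is proprietary, so here is a minimal sketch using the public TOTP construction (RFC 6238) instead, which shares the same trust model. Everything in it is illustrative, but it shows why the algorithm can be public (per Kerckhoffs's Principle) while a stolen seed database is immediately fatal.

```python
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, at: float, step: int = 60, digits: int = 6) -> str:
    """Illustrative time-based one-time code (RFC 6238 style).

    This is NOT RSA's proprietary SecurID algorithm; it is a public
    construction with the same trust model: the algorithm is known,
    and all security rests on the secrecy of the per-token seed.
    """
    counter = int(at) // step                 # index of the current time window
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the token hold the same seed. An attacker who steals the
# server-side seed database can compute valid codes at will, which is why
# a compromise of that database would "enable a successful direct attack."
seed = b"per-token shared secret (hypothetical)"
print(totp(seed, time.time()))
```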

But there’s another layer of nuance, which we can see if we read the advice RSA has given their customers. When I read that list, it becomes apparent that the APT used a variety of social engineering attacks. So it’s possible that amongst what was stolen was a database of contacts who run SecurId deployments. That knowledge “could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack”

My opinion is that social engineers using the contacts database in some way is more likely than a cryptanalytic attack, and a cryptanalytic attack is more likely than a compromise of a secrets database. But we don’t know. Speculating like mad isn’t helping. Maybe I shouldn’t even post this, but the leaps of logic out there provoke some skeptical thinking.

[Update: some great comments are coming in; don't skip them.]

[Update 2: Between Nicko’s comment on the new letter, and Paul Kocher’s analysis in his Threatpost podcast I’m not sure that this analysis is still valid.]

Another critique of Ponemon's method for estimating 'cost of data breach'

I have fundamental objections to Ponemon’s methods used to estimate ‘indirect costs’ due to lost customers (‘abnormal churn’) and the cost of replacing them (‘customer acquisition costs’). These include sloppy use of terminology, mixing accounting and economic costs, and omitting the most serious cost categories.


Visualization for Gunnar's "Heartland Revisited"

You may have heard me say in the past that one of the more interesting aspects of security breaches, for me at least, is the concept of reputation damage. Maybe that's because I heard so many sales tactics tied to defacement in the '90s; maybe it's because it's so hard to actually quantify brand equity, and the impact to brand equity from a data breach.

Either way, Gunnar's post on "Heartland Revisited" is great analysis. I'd like to point you there, and add two things. First, it's my personal pet hypothesis that "reputation" only really matters in B2B cases where there are individuals who are responsible for choosing the breached vendor. Nobody wants to be the guy who "hired those screwups," and if you are, you pretty much automatically have to consider firing them.

Second, I thought I'd add a bit of a visualization, tracking the stock prices from just before the incident until now. By clicking on the image below to see the full graph, you'll see that Heartland had been a leader among those four (at least by this particular metric), dropped significantly with the data breach, and, as per Gunnar's analysis, is still trying to recover (be that from the breach, other factors, or what have you; I'm not making any inference there).
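For the curious, here's roughly how such a chart can be rebuilt. This is a sketch under stated assumptions: the file name, tickers, and column layout are mine, not the data behind the image above; the date used is Heartland's public disclosure of January 20, 2009.

```python
# Sketch: compare daily closes, rebased to 100 at the last close before
# the breach disclosure. File name, tickers, and columns are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

BREACH_DATE = pd.Timestamp("2009-01-20")   # Heartland's public disclosure
TICKERS = ["HPY", "GPN", "TSS", "FIS"]     # hypothetical peer set

# Expected layout: a Date column plus one close-price column per ticker.
prices = pd.read_csv("daily_closes.csv", parse_dates=["Date"], index_col="Date")

# Rebase each series to 100 at the last close before the disclosure so
# relative moves are directly comparable across companies.
base = prices.loc[:BREACH_DATE].iloc[-1]
rebased = prices[TICKERS] / base[TICKERS] * 100

fig, ax = plt.subplots()
for ticker in TICKERS:
    ax.plot(rebased.index, rebased[ticker], label=ticker)
ax.axvline(BREACH_DATE, linestyle="--", color="gray")
ax.set_ylabel("Close, indexed (100 = day before disclosure)")
ax.set_title("Payment processors around the Heartland breach")
ax.legend()
plt.show()
```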

Again, I'm not trying to draw any conclusions from this, saying "See! Reputation matters!" or even claiming that Heartland is an exception to Betsy Nichols' excellent work, but I do think this is interesting, even if just as a casual observation.

Note on Design of Monitoring Systems

Dissent reports “State Department official admits looking at passport files for more than 500 celebrities.”

A passport specialist curious about celebrities has admitted she looked into the confidential files of more than 500 famous Americans without authorization.

This got me thinking: how does someone peep at 500 files before anyone notices? What's wrong with the State Department's intrusion detection systems?

One can get lists of famous people pretty easily. They're not complete, but you don't need complete. You simply track queries against such a list and look for the outliers: those are your peepers.
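Here's a minimal sketch of that kind of monitor. The log format, names, and threshold are hypothetical; a real system would baseline per-role query rates rather than use a fixed cutoff.

```python
# Minimal sketch of watchlist-based access monitoring. The log format
# (user_id, record_name pairs) and the threshold are hypothetical.
from collections import Counter

def flag_peepers(access_log, watchlist, threshold=5):
    """Count each user's queries against watchlisted (famous) names and
    flag users whose counts look like outliers.

    access_log: iterable of (user_id, record_name) tuples
    watchlist: set of names worth alerting on; it need not be complete
    """
    hits = Counter(user for user, record in access_log
                   if record in watchlist)
    return {user: n for user, n in hits.items() if n >= threshold}

# Example: one clerk repeatedly pulling celebrity files stands out.
watchlist = {"Barack Obama", "Hillary Clinton", "Brad Pitt"}
log = [("clerk7", "Barack Obama"), ("clerk7", "Hillary Clinton"),
       ("clerk7", "Brad Pitt"), ("clerk7", "Barack Obama"),
       ("clerk7", "Hillary Clinton"), ("clerk2", "John Q. Public")]
print(flag_peepers(log, watchlist, threshold=5))  # {'clerk7': 5}
```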

For the State Department to have taken so long to notice, they're obviously not doing this. I join Barack Obama, Hillary Clinton, and more than 500 famous people in hoping they get on it soon.

Also, I wonder if the celebs got breach notice letters?

But to the question of what you can learn from this: think about how your employees might peep, and how you can catch that behavior on the cheap.

Lessons from HHS Breach Data

PHIPrivacy asks "do the HHS breach reports offer any surprises?"

It’s now been a full year since the new breach reporting requirements went into effect for HIPAA-covered entities. Although I’ve regularly updated this blog with new incidents revealed on HHS’s web site, it might be useful to look at some statistics for the first year’s worth of reports.

I'll add that the HHS web site "Breaches Affecting 500 or More Individuals" offers data about 181 breaches in CSV and XML formats.

But Dissent asks what we can learn. Two things strike me immediately. First, 181 breaches, and no one out of business. Perhaps not a surprise, but many people seem to need the reminder, since the bad meme has been around so long. Second, and also in the bad-meme category, let's look at insiders. There were 10 incidents (6% of all incidents involving 500 or more people), and they impacted 50,491 people (1% of all people). We sometimes hear that incidents involving insiders are the most damaging or impactful. Yet the unauthorized access incidents (a separate category from hacking) had a lower mean number impacted than hacking, improper disposal, loss, theft, business associates, laptops, desktop computers, portable electronic devices, or network servers. In fact, the only categories which impacted fewer people were "theft, unauthorized access" and "paper records." Now, it's true that unauthorized access is not the same thing as insiders; unauthorized access likely includes both insiders and access control failures (the "spreadsheet on a website" pattern). It's also true that there were quite damaging incidents that involved fewer than 500 people (the "peeking" pattern). It's even possible that those were the worst incidents. But we have no evidence for that claim. Still.
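As a sketch, those per-category means can be pulled straight out of the HHS CSV mentioned above. The column names here are assumptions, so check them against the actual download.

```python
# Sketch: mean individuals affected per breach category, from the HHS
# "Breaches Affecting 500 or More Individuals" CSV. Column names are
# assumptions; adjust them to match the actual file.
import csv
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])   # category -> [people, incidents]

with open("hhs_breaches.csv", newline="") as f:
    for row in csv.DictReader(f):
        category = row["Type of Breach"]
        affected = int(row["Individuals Affected"].replace(",", ""))
        totals[category][0] += affected
        totals[category][1] += 1

for category, (people, incidents) in sorted(totals.items()):
    print(f"{category}: {incidents} incidents, "
          f"mean {people / incidents:,.0f} people affected")
```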

But the biggest, most important lesson is that Dissent can ask not “what did HHS learn from this,” but rather, “What can we learn from this?”

Welcome to the club!

As EC readers may know, I’ve been sort of a collector of breach notices, and an enthusiastic supporter of the Open Security Foundation’s DataLossDB project. Recently, I had an opportunity to further support DataLossDB, by making an additional contribution to their Primary Sources archive – a resource I find particularly valuable.

Unfortunately, that contribution was a breach notification letter [PDF] addressed to me! Since I now have some skin in the game, I figured I'd use the opportunity to take a close look at this incident and see what can be learned from it.

Who sent the letter, and how do I reach them?

Let’s start with the letter itself. While it identifies the data owner (“EHP”, an emergency room practice I had patronized), it provides no return address, and the letter is unsigned. Unsurprisingly given this opacity, the envelope return address is a post office box. While a toll-free number is provided, this is a requirement of many state breach laws, and repeated calls to the number resulted in my being placed in an ACD queue, rather than being routed to a human being. So far, it looks to me like they’re trying to ensure that all communication regarding this issue is either squelched by the magic of painful on-hold music, or diverted into a call center. In particular, there seems to be no enthusiasm for written correspondence.

What was exposed, and how?

Now let's consider the nature of the exposed data. According to the notification letter, a hard drive was stolen from a third-party service provider (Millennium Medical Management Resources). That hard drive contained "unencrypted copies of records with health and financial information about [me]." Furthermore, the service provider

…believes the hard drive contained personally identifiable information about EHP patients, including name, address, phone number, date of birth, and Social Security Number and, in some cases, other information such as diagnosis and/or diagnosis code, types of procedure and/or procedure code, medical record number, account number, driver’s license number, and health insurance information.

Surprisingly, the letter does not say that "the exposure appears to be the work of criminals interested in the hardware," or use other such language often deployed to suggest that crooks don't go after data, even though the police report notes that the "suite [was] in disarray." Kudos to EHP for this. And kudos to the Westmont, IL PD for handling my FOIA request the same day. I understand they received literally hundreds of requests for this report. Anyone who handles a dramatic, unexpected increase in work so cheerfully deserves praise.

As to what was stolen, the notification letter — seemingly drafted by an attorney — states what the service provider believes, not what the service provider knows. This suggests there is some question as to what precisely was on the unencrypted drive. Clearly, though, health and financial information are involved, suggesting that this breach is subject to HITECH and HIPAA provisions, as well as to myriad state breach laws. Reading on, this is further reinforced when EHP says they "…will report this security breach to the Office of Civil Rights of the U.S. Department of Health and Human Services." Such a report is required by HITECH when more than 499 persons have been affected by a breach, which establishes a lower bound on the likely number of affected individuals in this incident. (In the few days I have been composing this blog post, the report has appeared on HHS's web site: 180,111 folks impacted by this one. Ouch. Why not put this in the letter to me, if it will be one mouse-click away anyhow?)

How long did notification take?

HITECH requires that notification occur within sixty days of the discovery of the breach. This breach was discovered March 1st. The letter is dated April 30, exactly sixty days later. I wonder if the delay would have been longer, were it legally permissible?

How will future incidents be prevented?

According to the letter, the service provider has

…implemented new and improved technical, physical, and administrative security measures to prevent future thefts and security breaches, including encryption of electronic personally identifiable information stored on portable storage devices. Millennium will also take additional steps to further secure patient information.

Meanwhile,

EHP is carefully monitoring these security measures to ensure that they meet regulatory requirements and appropriately secure information about its patients.

With a letter such as this, which undoubtedly was closely crafted by people who pay attention to word choice, it seems fair to read it as attentively as it was written. An admittedly cynical interpretation is that this "careful monitoring" is a new thing for EHP. After all, they didn't say they would "continue to carefully monitor" or would "more carefully monitor." As to what "technical, physical, and administrative" measures Millennium might be adding, who knows? It's hard enough to audit one's own service provider. Knowing what somebody else's is doing is harder still.

So what can I do?

The letter concludes with sections which roughly follow the guidelines provided by various sample breach notification letters. This is impressive. After reading many notification letters, I’ve come to expect some soft-pedaling of the risk of identity theft. This one does not do that. Again, kudos.

Closing Thoughts

So this has been a long blog post about one incident and one letter, and not exactly a man-bites-dog situation either. Apologies. I think two things are interesting about this particular letter:

  1. For matters that pertain to breaches generally rather than to this one specifically, it was straightforward, clear, and reasonably complete. The advice about what to do, how to interact with credit bureaus, when to notify law enforcement, etc., was all sound, with little or no “spin”.
  2. With respect to the details of this specific incident, the letter was more circumspect, with — to my eyes — more parsing of words.

Unsurprising, perhaps, but I wonder how typical this openness would have been three or four years ago (I have not done a content analysis to verify this). Perhaps, if California's SB 1166 is signed by the Governor (rather than vetoed, as a previous version was), this greater transparency will extend to incident-specific details as well. I don't see the harm in it. I've already filled in the blanks with what I think really happened to my information. There isn't much EHP could say that would make me feel differently about their vendor management program, or about the degree of care Millennium evinced here, so they should just say it.

Failure to Notify Leads to Liability in Germany

…a Bad Homburg business man won millions in damages in a suit against the [Liechtenstein] bank for failing to reveal that his information was stolen along with hundreds of other account holders and sold to German authorities for a criminal investigation. He argued that if the bank had informed those on the list that their data had been sold, they could have turned themselves in, receiving temporary amnesty and much lower fines. (“Taxman rakes in hundreds of millions thanks to stolen bank data“, TheLocal.de)

The decision was by the Liechtenstein high court. If anyone knows the details of the case (what duty was violated), I’d appreciate knowing more. Was it a violation of Liechtenstein bank secrecy law, or a general duty to disclose?

Via the Web Hacking Incident Database and "German Government Pays Hacker For Stolen Bank Account Data" at TacticalWebAppSec.
