Go read this excellent article by Ed Bellis.
I often say that breaches don’t drive companies out of business. Some people are asking me to eat crow because Vasco is closing its subsidiary DigiNotar after the subsidiary was severely breached, failed to notify its relying parties, misled people when it did, and then allowed perhaps hundreds of thousands of people to fall victim to a man-in-the-middle attack. I think DigiNotar is an exception that proves the rule.
Statements about DigiNotar going out of business are tautological. They take a single incident, selected because the company is going out of business, and then generalize from it. Unfortunately, DigiNotar is the only CA that has gone out of business, and so anyone generalizing from it is starting from a set of businesses that have gone out of business. If you’d like to make a case for taking lessons from DigiNotar, you must address why Comodo hasn’t gone belly up after *4* breaches (as counted by Moxie in his BlackHat talk).
It would probably also be helpful to comment on how DigiNotar’s revenue rate of 200,000 euros in 2011 might contribute to its corporate parent deciding that damage control is the most economical choice, and what lessons other businesses can take.
To be entirely fair, I don’t know that DigiNotar’s costs were above 200,000 euros per year, but a quick LinkedIn search shows 31 results, most of whom have not yet updated their profiles.
So take what lessons you want from DigiNotar, but please don’t generalize from a set of one.
I’ll suggest that “be profitable” is an excellent generalization from businesses that have been breached and survived.
[Update: @cryptoki pointed me to the acquisition press release, which indicates 45 employees, and “DigiNotar reported revenues of approximately Euro 4.5 million for the year 2009. We do not expect 2010 to be materially different from 2009. DigiNotar’s audited statutory report for 2009 showed an operating loss of approximately Euro 280.000, but an after-tax profit of approximately Euro 380.000.” I don’t know how to reconcile the press statements of January 11th and August 30th.]
Ries conceives of startups as businesses operating under conditions of high uncertainty, which includes things you might not think of as startups. In fact, he thinks startups are everywhere, even inside of large businesses. You can agree or not, but suspend skepticism for a moment. He also says that startups are really about management and good decision-making under conditions of high uncertainty.
He tells the story of IMVU, a startup he founded to make 3D avatars as a plugin for instant-messenger systems. He walked through the reasoning behind a bunch of the decisions they’d made, and then said every single thing he’d said was wrong. He said that the key was to learn the lessons faster, to focus in on the right thing: in that case, they could have saved six months by just putting up a download page and seeing if anyone wanted to download the client. They wouldn’t even have needed a 404 page, because no one ever clicked the download button.
The key lesson he takes from that is to look for ways to learn faster, and to focus on pivoting towards good business choices. Ries defines a pivot as one turn through the cycle of “build, measure, learn.”
Ries jokes about how we talk about “learning a lot” when we fail. But we usually fail to structure our activities so that we’ll learn useful things. And so under conditions of high uncertainty, we should do things that we think will succeed, but if they don’t, we can learn from them. And we should do them as quickly as possible, so if we learn we’re not successful, we can try something else. We can pivot.
I want to focus on how that might apply to information security. In security, we have lots of ideas, and we’ve built lots of things. We start to hit a wall when we get to measurement. How much of what we built changed things? (I’m jumping to the assumption that someone wanted what you built enough to deploy it; that’s a risky assumption, and one Ries pushes against with good reason.) When we get to measuring, we want data on how much your widget changed things. And that’s hard. The threat environment changes over time. Maybe all the APTs were on vacation last week. Maybe all your protestors were off Occupying Wall Street. Maybe you deployed the technology in a week when someone dropped 34 0days on your SCADA system. There are a lot of external factors that can be hard to see, and so the data can be thin.
That thin data is something that can be addressed. When doctors study new drugs, there’s likely going to be variation in how people eat, how they exercise, how well they sleep, and all sorts of things. So they study lots of people, and can learn by comparing one group to another group. The bigger the study, the less likely that some strange property of the participants is changing the outcome.
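That intuition about study size can be sketched in a quick simulation (all the numbers here are hypothetical; the point is only that estimates from bigger samples wander less):

```python
import random
import statistics

random.seed(0)

def run_study(n, p=0.3):
    """Simulate a study of n participants, each of whom independently
    shows the effect with probability p; return the observed rate."""
    return sum(random.random() < p for _ in range(n)) / n

# Repeat the whole study 500 times at each size and look at how much
# the observed rates spread out: bigger studies give tighter estimates.
for n in (10, 100, 1000):
    estimates = [run_study(n) for _ in range(500)]
    print(f"n={n:4d}  spread of estimates={statistics.stdev(estimates):.3f}")
```

The spread shrinks roughly with the square root of the sample size, which is why a study of a thousand people is far less likely to be thrown off by some strange property of its participants than a study of ten.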
But in information security, we keep our activities and our outcomes secret. We could tell you, but first we’d have to spout cliches. We can’t possibly tell you what brand of firewall we have, it might help attackers who don’t know how to use netcat. And we certainly can’t tell you how attackers got in, we have to wait for them to tell you on Pastebin.
And so we don’t learn. We don’t pivot. What can we do about that?
We can look at the many, many people who have announced breaches, and see that they didn’t really suffer. We can look at work like what Sensepost offered up at BlackHat, showing that our technology deployments can be discovered through participation in tech-support forums.
We can look to measure our current activities, and see if we can test them or learn from them.
Or we can keep doing what we’re doing, and hope our best practices make themselves better.
For more than a decade, California and other states have kept their newest teen drivers on a tight leash, restricting the hours when they can get behind the wheel and whom they can bring along as passengers. Public officials were confident that their get-tough policies were saving lives.
Now, though, a nationwide analysis of crash data suggests that the restrictions may have backfired: While the number of fatal crashes among 16- and 17-year-old drivers has fallen, deadly accidents among 18-to-19-year-olds have risen by an almost equal amount. In effect, experts say, the programs that dole out driving privileges in stages, however well-intentioned, have merely shifted the ranks of inexperienced drivers from younger to older teens.
“The unintended consequences of these laws have not been well-examined,” said Mike Males, a senior researcher at the Center on Juvenile and Criminal Justice in San Francisco, who was not involved in the study, published in Wednesday’s edition of the Journal of the American Medical Assn. “It’s a pretty compelling study.” (“Teen driver restrictions a mixed bag“)
As Princess Leia once said, “The more you tighten your grip, the more teenagers will slip through your fingers.”
As a result, lots of people are quoting a figure of “300,000” affected users.
Cem Paya has a good analysis of what the OCSP numbers mean and what biases might be introduced, in “DigiNotar: surveying the damage with OCSP.”
To their credit, FoxIt tried to investigate the extent of the damage by monitoring OCSP logs for users checking on the status of the forged Google certificate. There is a neat YouTube video showing the geographic distribution of locations around the world over time. Unfortunately while this half-baked attempt at forensics makes for great visualization, it presents a very limited picture of impacted users.
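The kind of count behind that visualization, tallying the distinct clients that asked the OCSP responder about the rogue certificate, can be sketched like this. The log format and serial number below are invented for illustration; a real analysis would run over the responder’s actual logs:

```python
from collections import Counter

# Hypothetical OCSP responder log lines: "timestamp client_ip serial_queried".
# The serial number here is made up for the example.
ROGUE_SERIAL = "05e2e6a4cd09ea54d665b075fe22a256"

log_lines = [
    "2011-08-28T10:00:01 192.0.2.10 05e2e6a4cd09ea54d665b075fe22a256",
    "2011-08-28T10:00:05 192.0.2.10 05e2e6a4cd09ea54d665b075fe22a256",
    "2011-08-28T10:01:12 198.51.100.7 05e2e6a4cd09ea54d665b075fe22a256",
    "2011-08-28T10:02:33 203.0.113.9 aabbccddeeff00112233445566778899",
]

hits = Counter()
for line in log_lines:
    ts, ip, serial = line.split()
    if serial == ROGUE_SERIAL:
        hits[ip] += 1

# Distinct IPs is a rough proxy for victims: NAT undercounts them,
# while DHCP churn and proxies can overcount; these are exactly the
# biases Cem discusses.
print(f"{len(hits)} distinct IPs queried the rogue serial")
# -> 2 distinct IPs queried the rogue serial
```

Note also that this only sees clients whose software performed an OCSP check at all, and whose check reached the responder, which is part of why the picture is so limited.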
DigiNotar and Fox-IT released enough that a dedicated secondary analyst like Cem can see methodological flaws in what they did. What else could we learn if we had more of the raw observations? When I read the report, I noticed the claim “A number of malicious/hacker software tools was found. These vary from commonly used tools such a the famous Cain & Abel tool to tailor made software.” This claim mixes analysis and observation. The observation is that there was software with which the analyst was not familiar. It may be that it was a Perl script or other code that can be easily skimmed to see that it was “tailor made.” It may be that it was just something re-compiled to not match a hash. We don’t know. Similarly, the report claims (4.1) “In at least one script, fingerprints from the hacker are left on purpose, which were also found in the Comodo breach investigation of March 2011.” Really? On purpose? Perhaps the fingerprints were inserted as disinformation. Perhaps the Fox-IT analyst called the intruder on the phone, and he owned up to it. We don’t know.
I want to be clear that I don’t mean to be picking on Fox-IT here. My understanding is that the report they prepared came out incredibly quickly, and kudos to them for that. I’ve cherry-picked two areas where I can ask for better editing, but I’m very aware that such editing comes at a cost in timeliness.
Cem’s article is very much worth reading, as is the Fox-IT report. But Cem’s analysis helps illustrate a theme of the New School, which is that we need diverse perspectives and analysis brought to bear on each report. The more data we see, the more we can learn from it. No single analysis will tell us everything we might learn. (I made a similar point here.)
I am left with a question for Cem, which I would have added to his post, but couldn’t comment there. My question is: having given all that thought to the biases, what do you think is the probable true number (or range) of affected people?
There’s an interesting article over at CIO Insight:
The disclosure of an email-only data theft may have changed the rules of the game forever. A number of substantial companies may have inadvertently taken legislating out of the hands of the federal and state governments. New industry pressure will be applied going forward for the loss of fairly innocuous data. This change in practice has the potential to affect every CIO who collects “contact” information from consumers, maybe even from employees in an otherwise purely commercial context. (“Breach Notification: Time for a Wake Up Call“, Mark McCreary of Fox Rothschild LLP)
My perspective is that breach disclosure now hurts far less than it did a mere five years ago, and spending substantial time on analysis of “do we disclose” is returning less and less value. As companies disclose, we’re getting more and more data that CIOs can use to improve IT operations. We can, in a very real way, start to learn from each other’s mistakes.
Over the next few years, this perspective will trickle both upwards and downwards. CEOs will be confused by the desire to hide a breach, knowing that the coverup can be worse than the crime. And security professionals will be less and less able to keep saying that one breach can destroy your company in the face of overwhelming evidence to the contrary.
As the understanding spreads, so will data. We’ll see an explosion of ways to talk about issues, ways to report on them and analyze them. In a few years, we’ll see an article titled “Breach Analysis: Read it with your coffee” because daily analysis of breaches will be part of a CIO’s job.
Thanks to the Office of Inadequate Security for the pointer.
Governor Brown of California has signed a strengthened breach notification bill, which amends Sections 1798.29 and 1798.82 of the California Civil Code in important ways. Previous versions had been repeatedly vetoed by Arnold Schwarzenegger.
As described[.DOC] by its sponsor’s office, this law:
Establishes standard, core content — such as the type of information breached, time of breach, and toll-free telephone numbers and addresses of the major credit reporting agencies — for security breach notices in California;
Requires public agencies, businesses, and persons subject to California’s security breach notification law, if more than 500 California residents are affected by a single breach, to send an electronic copy of the breach notification to the Attorney General; and,
Requires public agencies, businesses and persons subject to California’s security breach notification law, if they are utilizing the substitute notice provisions in current law, to also provide that notification to the Office of Information Security or the Office of Privacy Protection, as applicable.
This makes California the fifteenth (!) state with a central notification provision on the books, the others being: Hawaii, Iowa, Maryland, Massachusetts, Minnesota, New Hampshire, New York, North Carolina, Oregon, Vermont, Virginia, West Virginia, Wisconsin, and Wyoming. Puerto Rico also has such a requirement. Ibid.
I’m looking forward to the resulting information, and I hope California’s Attorney General has the good sense to post all received notification letters. This will undoubtedly be easier for the state than dealing with the inevitable FOIA requests, and serves the public interest by increasing transparency.