Exploit Kit Statistics

On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels:

[Screenshot: exploit kit control panel]

There are all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP number is perhaps lower than “common wisdom” in the security community would have it.) There are also numbers for the success rates of individual exploits, ranging from Java OBE at 35% down to MDAC at 1.85%.
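
To make the comparison concrete, here is a back-of-envelope sketch in Python using only the figures mentioned above; the Windows 7 rate is an assumption, back-calculated from the “three times” ratio rather than read off the panel:

```python
# Rough comparison of exploit kit "load" rates from a captured control panel.
# The XP and exploit figures come from the screenshot; the Windows 7 figure
# is an assumption, derived from the "three times" ratio in the text.
os_success = {
    "Windows XP": 0.1961,        # from the panel screenshot
    "Windows 7": 0.1961 / 3,     # assumed: roughly one third of the XP rate
}

exploit_success = {
    "Java OBE": 0.35,    # from the panel screenshot
    "MDAC": 0.0185,      # from the panel screenshot
}

for name, rate in sorted(os_success.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.1%} of visitors successfully exploited")

# Relative effectiveness of the best vs. worst exploit shown in the panel
ratio = exploit_success["Java OBE"] / exploit_success["MDAC"]
print(f"Java OBE is ~{ratio:.0f}x as effective as MDAC in this panel")
```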

That’s not the only captured control panel. There are more, for example from M86, SpiderLabs, and Webroot.

I’m fascinated by these numbers, and have two questions:

  • Is anyone capturing the statistics shown and tracking them over time?
  • Is there an aggregation of all these captures? If not, what are the best search terms to find them?

Indicators of Impact — Ground Truth for Breach Impact Estimation

Ice bag might be a good ‘Indicator of Impact’ for a night of excess.

One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly.  What if we had solid evidence to use in breach impact estimation?  This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.

This approach is premised on the view that breach impact is best measured by the costs or resources associated with response, recovery, and restoration actions taken by the affected stakeholders.  These activities can include both routine incident response and rarer activities.  (See our paper for more.)  This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p. 23 of our paper):

An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.

Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise”, which are basically technical traces associated with a breach event.  There’s a community supporting an open exchange format — OpenIoC.  The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security.  In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.

From Appendix B, here are a few examples:

  • Was there a forensic investigation, above and beyond what your organization would normally do?
  • Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
  • Was any significant business process or function disrupted for a significant amount of time?
  • Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
  • Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?

The list goes on for three pages in Appendix B but we fully expect it to grow much longer as we get experience and other people start participating.  For example, there will be indicators that only apply to certain industries or organization types.  In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.

As signals, the Indicators of Impact are not perfect, nor do they individually provide sufficient evidence.  However, they have the very great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events.  In other words, they provide a ‘ground truth’ which has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and using appropriate inference and reasoning methods (e.g. see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.
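
As a toy illustration of the “mass of evidence” idea, here is a minimal sketch of combining a few observed Indicators of Impact into a posterior estimate that a breach caused major impact. The indicator names, prior, and likelihoods are all assumptions invented for illustration, not values from our paper; a real model would need estimated parameters and care about dependence between indicators:

```python
import math

# Prior belief that a given breach caused "major" impact (assumed for illustration).
prior_major = 0.2

# For each indicator: P(observed | major impact) and P(observed | minor impact).
# All numbers below are illustrative assumptions, not measured values.
indicators = {
    "external_forensic_investigation": (0.9, 0.3),
    "executive_escalation":            (0.8, 0.2),
    "contractual_penalties_imposed":   (0.5, 0.05),
}

observed = ["external_forensic_investigation", "executive_escalation"]

# Naive-Bayes-style combination: multiply likelihood ratios in log space.
log_odds = math.log(prior_major / (1 - prior_major))
for name in observed:
    p_given_major, p_given_minor = indicators[name]
    log_odds += math.log(p_given_major / p_given_minor)

posterior = 1 / (1 + math.exp(-log_odds))
print(f"P(major impact | observed indicators) = {posterior:.2f}")
```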

There are also applications beyond breach impact estimation.  For example, they could be used in resilience planning and preparation.  They could also be used as part of information sharing in critical infrastructure, to provide context for other information regarding threats, attacks, etc. (See this video of a Shmoocon session for a great panel discussion of the challenges and opportunities in information sharing.)

Fairly soon, it would be good to define a lightweight standard format for Indicators of Impact, possibly as an extension to VERIS.  I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework.  There’s a public meeting April 3rd, and I might fly out for it.  Either way, I will submit to the NIST RFI.
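
To make the “lightweight standard format” idea concrete, here is a sketch of what a single Indicator of Impact record might look like. The field names are my assumptions for discussion, not a proposed standard, and any alignment with VERIS would have to be worked out with that community:

```python
# A hypothetical, minimal record format for one observed Indicator of Impact.
# Field names and values are illustrative only; nothing here is a standard.
indicator_of_impact = {
    "indicator_id": "ioi-0001",
    "organization": "Example Corp",             # breached or affected stakeholder
    "breach_reference": "datalossdb:12345",      # hypothetical cross-reference
    "activity_type": "forensic_investigation",   # response/recovery/restoration activity
    "observed": True,                            # existence or non-existence of the activity
    "observable_by": "external",                 # or "internal-only"
    "evidence_source": "8-K filing, 2013-02-15",
    "notes": "Investigation exceeded the organization's routine IR process.",
}
```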

Your thoughts and comments?

Breach Analysis: Data Source Biases

Bob Rudis has a fascinating and important post “Once More Into The [PRC Aggregated] Breaches.” In it, he delves into the various data sources that the Privacy Rights Clearinghouse is tracking.

In doing so, he makes a strong case that data source matters, or as Obi-Wan said, “Luke, you’re going to find that many of the truths we cling to depend greatly on our own point of view:”

[Figure: breach counts by metatype and year]

I don’t want to detract from the work Bob’s done. He shows pretty clearly that human and accidental factors are exceeding technical ones as a source of incidents that reveal PII. Without detracting from that important result, I do want to add two points.

First, I reported a similar result in work released in Microsoft SIR v11, “Zeroing in on Malware Propagation Methods.” Of course, I was analyzing malware, rather than PII incidents. We need to get away from the idea that security is a purely technical problem.

Second, it’s time to extend our reporting regimes so that there’s a single source for data. The work done by non-profits like the Open Security Foundation and the Privacy Rights Clearinghouse has been awesome. But these folks are spending a massive amount of energy to collect data that ought to be available from a single source.

As we talk about mandatory breach disclosure and reporting, new laws should create and fund a single place where those reports must go. I’m not asking for additional data here (although additional data would be great). I’m asking that the reports we have now all go to one additional place, where an authoritative record will be published.

Of course, anyone who studies statistics knows that there are often different collections, and competition between resources. You can get your aircraft accident data from the NTSB or the FAA. You can get your crime statistics from the FBI’s Uniform Crime Reports or the National Crime Victimization Survey, and each has advantages and disadvantages. But each is produced because we consider the data an important part of overcoming the problem.

Many nations consider cyber-security to be an important problem, and it’s an area where new laws are being proposed all the time. These new laws really must make the data easier for more people to access.

Your career is over after a breach? Another Myth, Busted!

I’m a big fan of learning from our experiences around breaches. Claims like “your stock will fall”, or “your customers will flee” are shown to be false by statistical analysis, and I expect we’d see the same if we looked at people losing their jobs over breaches. (We could do this, for example, via LinkedIn and DatalossDB.)

There’s another myth that’s out there about what happens after a breach, and that is that the breach destroys the career of the CISO and the entire security department. And so I’m pleased today to be able to talk about that myth. Frequently, when I bring up breaches and lessons we can learn, people bring up ChoicePoint as the ultimate counterexample. Now, ChoicePoint is interesting for all sorts of reasons, but from a stock price perspective, they’re a statistical outlier. And so I’m extra pleased to be able to discuss today’s lesson with ChoicePoint as our data point.

Last week, former ChoicePoint CISO Rich Baich was [named Wells Fargo’s] first chief information security officer. Congratulations, Rich!

Now, you might accuse me of substituting anecdote for data and analysis, and you’d be sort of right. One data point doesn’t plot a line. But not all science requires plotting a line. Often, a good experiment shows us something because its result would be impossible under the standard model. Dropping things from the Tower of Pisa showed that objects fall at the same speed, regardless of weight.

So Wells Fargo’s announcement is interesting because it provides a data point that invalidates the hypothesis “If you have a breach, your career is over.” Now, some people, less clever than you, dear reader, might try to retreat to a weaker claim “If you have a breach, your career may be over.” Of course, that “may” destroys any predictive value that the claim may have, and in fact, the claim “If [X], your career may be over,” is equally true, and equally useless, and that’s why you’re not going there.

In other words, if a breach always destroys a career, shouldn’t Rich be flipping burgers?

There are three more variant hypotheses we can talk about:

  • “If you have a breach, your career will require a long period of rehabilitation.” But Rich was leading the “Global Cyber Threat and Vulnerability Management practice” for Deloitte and Touche, which is not exactly a backwater.
  • “If you have a breach, you will be fired for it.” That one is a bit trickier. I’m certainly not going to assert that no one has ever been fired for a breach happening. But it’s also clearly untrue. The weaker version is “if you have a breach, you may be fired for it”, and again, that’s not useful or interesting.
  • “If you have a breach, it will improve your career.” That’s also obviously false, and the weaker version isn’t falsifiable. But perhaps the lessons learned, focus, and publicity around a breach can make it helpful to your career. It’s not obviously dumber than the opposite claims.

So overall, what is useful and interesting is that yet another myth around breaches turns out to be false. So let’s start saying a bit more about what went wrong, and learning more about what’s going wrong.

Finally, again, congratulations and good luck to Rich in his new role!

Aitel on Social Engineering

Yesterday, Dave Aitel wrote a fascinating article “Why you shouldn’t train employees for security awareness,” arguing that money spent on training employees about awareness is wasted.

While I don’t agree with everything he wrote, I submit that your opinion on this (and mine) are irrelevant. The key question is “Is money spent on security awareness a good way to spend your security budget?”

The key is that we now have data. As one commenter notes:

[Y]ou somewhat missed the point of the phishing awareness program. I do agree that annual training is good at communicating policy – but horrible at raising a persistent level of awareness. Over the last 5 years, our studies have shown that annual IA training does very little to improve the awareness of users against social engineering based attacks (you got that one right). However we have shown a significant improvement in awareness as a result of running the phishing exercises (Carronade). These exercises sent fake phishing emails to our cadet population every few weeks (depends on the study). We demonstrated a very consistent reduction to an under 5% “failure” after two iterations. This impact is short lived though. After a couple months of not running the study, the results creep back up to 40% failure.

So, is a reduction in phishing failure to under 5% a good investment? Are there other investments that bring your failure rates lower?
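
Here is the kind of back-of-envelope comparison that question implies, as a minimal sketch; every number except the two failure rates from the quoted study is an assumed placeholder:

```python
# Toy cost-effectiveness comparison of a phishing-exercise program.
# All parameters below are assumed placeholders; plug in your own data.
employees = 5_000
phish_emails_per_employee_per_year = 10
cost_per_successful_phish = 500        # assumed average cost per incident ($)
program_cost_per_year = 100_000        # assumed cost of running the exercises ($)

baseline_failure_rate = 0.40           # from the quoted study, without exercises
trained_failure_rate = 0.05            # from the quoted study, with exercises

def expected_loss(failure_rate):
    attempts = employees * phish_emails_per_employee_per_year
    return attempts * failure_rate * cost_per_successful_phish

savings = expected_loss(baseline_failure_rate) - expected_loss(trained_failure_rate)
print(f"Expected annual savings: ${savings:,.0f}")
print(f"Net of program cost:     ${savings - program_cost_per_year:,.0f}")
```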

As I pointed out in “The Evolution of Information Security” (context), we can get lots more data with almost zero investment.

If someone (say, GAO) obtains data on US government department training programs, and cross-correlates that with incidents being reported to US-CERT, then we can assess the efficacy of those training programs.
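
A minimal sketch of what that cross-correlation could look like, assuming hypothetical per-department data files; the file names and columns are invented for illustration, since no such public dataset exists today:

```python
import pandas as pd

# Hypothetical inputs; file names and columns are invented for illustration.
# training.csv:  department, training_type, employees_trained
# incidents.csv: department, year, incident_count
training = pd.read_csv("training.csv")
incidents = pd.read_csv("incidents.csv")

merged = incidents.merge(training, on="department")
merged["incidents_per_1k_trained"] = (
    1000 * merged["incident_count"] / merged["employees_trained"]
)

# Compare incident rates across training program types.
print(merged.groupby("training_type")["incidents_per_1k_trained"].describe())
```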

Opinions, including mine, Dave’s, and yours, just ain’t relevant in the face of data. We can answer well-framed questions of “which of these programs best improves security” and “is that improvement superior to other investments?”

The truth, in this instance, won’t set us free. Dave Mortman pointed out that a number of regulations may require awareness training as a best practice. But if we get data, we can, if needed, address the regulations.

If you’re outraged by Dave’s claims, prove him wrong. If you’re outraged by the need to spend money on social engineering, prove it’s a waste.

Put the energy to better use than flaming over a hypothesis.

Why Sharing Raw Data is Important

Bob Rudis has a nice post up “Off By One : The Importance Of Fact Checking Breach Reports,” in which he points out some apparent errors in the Massachusetts 2011 breach report, and also provides some graphs.

Issues like this are why it’s important to release data. It enables independent error checking, but also allows people to slice and dice the issues in ways that otherwise are only accessible to a privileged few with the raw numbers.

Checklists and Information Security

I’ve never been a fan of checklists. Too often, checklists replace thinking and consideration. In the book, Andrew and I wrote:

CardSystems had the required security certification, but its security was compromised, so where did things go wrong? Frameworks such as PCI are built around checklists. Checklists compress complex issues into a list of simple questions. Someone using a checklist might therefore think he had done the right thing, when in fact he had not addressed the problems in depth…Conventional wisdom presented in short checklists makes security look easy.

So it took a while and a lot of recommendations for me to get around to reading “The Checklist Manifesto” by Atul Gawande. And I’ll admit, I enjoyed it. It’s a very well-written, fast-paced little book that’s garnered a lot of fans for very good reasons.

What’s more, much as it pains me to say it, I think that security can learn a lot from the Checklist Manifesto. One objection that I’ve had is that security is simply too complex. But so is the human body. From the Manifesto:

[It] is far from obvious that something as simple as a checklist could be of substantial help. We may admit that errors and oversights occur–even devastating ones. But we believe our jobs are too complicated to reduce to a checklist. Sick people, for instance, are phenomenally more various than airplanes. A study of forty-one thousand trauma patients in the state of Pennsylvania–just trauma patients–found that they had 1,224 different injury-related diagnoses in 32,261 unique combinations. That’s like having 32,261 kinds of airplane to land. Mapping out the proper steps for every case is not possible, and physicians have been skeptical that a piece of paper with a bunch of little boxes would improve matters.

The Manifesto also addresses the point we wrote above, that “someone using a checklist might think he’d done the right thing”:

Plus, people are individual in ways that rockets are not–they are complex. No two pneumonia patients are identical. Even with the same bacteria, the same cough and shortness of breath, the same low oxygen levels, the same antibiotic, one patient might get better and the other might not. A doctor must be prepared for unpredictable turns that checklists seem completely unsuited to address. Medicine contains the entire range of problems–the simple, the complicated, and the complex–and there are often times when a clinician has to just do what needs to be done. Forget the paperwork. Take care of the patient.

So it’s important to understand that checklists don’t replace professional judgement; they supplement it and help people remember complex steps under stress.

So while I think security can learn a lot from The Checklist Manifesto, the lessons may not be what you expect. Quoting the book that inspired this blog again:

A checklist implies that there is an authoritative list of the “right” things to do, even if no evidence of that simplicity exists. This in turn contributes to the notion that information security is a more mature discipline than it really is.

For example, turning back to the Manifesto:

Surgery has, essentially, four big killers wherever it is done in the world: infection, bleeding, unsafe anesthesia, and what can only be called the unexpected. For the first three, science and experience have given us some straightforward and valuable preventive measures we think we consistently follow but don’t.

I think what we need, before we get to checklists, is more data to understand what the equivalents of infection, bleeding and unsafe anesthesia are. Note that those categories didn’t spring out of someone’s mind, thinking things through from first principles. They came from data. And those data show that some risks are bigger than others:

But compared with the big global killers in surgery, such as infection, bleeding, and unsafe anesthesia, fire is exceedingly rare. Of the tens of millions of operations per year in the United States, it appears only about a hundred involve a surgical fire and vanishingly few of those a fatality. By comparison, some 300,000 operations result in a surgical site infection, and more than eight thousand deaths are associated with these infections. We have done far better at preventing fires than infections. [So fire risks are generally excluded from surgical checklists.]

Security has no equivalent way to exclude its ‘fire risks.’ We throw everything into lists like PCI. The group that updates PCI is not provided with in-depth incident reports about the failures that occurred over the last year, or over the life of the standard. When security fails, rather than asking ‘did the checklist work?’, the PCI council declares that the breached organization violated the 11th commandment and is thus not compliant. And so we can’t improve the checklists. (Compare and contrast: don’t miss the long section of the Manifesto on how Boeing tests and re-tests their checklists.)

One last quote before I close. Gawande surveys many fields, including how large buildings are built and delivered. He talks to a project manager putting up a huge new hospital building:

Joe Salvia had earlier told me that the major advance in the science of construction over the last few decades has been the perfection of tracking and communication.

Nothing for us security thought leaders to learn. But before I tell you to move along, I’d like to offer up an alpha-quality DO-CHECK checklist for improving security after an incident:

  1. Have you addressed the breach and gotten the attackers out?
  2. Have you notified your customers, shareholders, regulators and other stakeholders?
  3. Did you prepare an after-incident report?
  4. Did you use VERIS, the taxonomy in Microsoft’s SIR v11, or some other framework to clarify ambiguous terms?
  5. Have you released the report so others can learn?

I believe that if we all start using such a checklist, we’ll set up a feedback loop, and empower our future selves to make better and more useful checklists to help us make things more secure.

Time for an Award for Best Data?

Yesterday, Dan Kaminsky said “There should be a yearly award for Best Security Data, for the best collection and disbursement of hard data and cogent analysis in infosec.” I think it’s a fascinating idea, but a yearly award may be premature. However, what I think is sorta irrelevant, absent data. So I’m looking for data on the question: do we have enough good data to issue an award yearly?

Please nominate in the comments.

Also, please discuss what the criteria should be.

Paper: The Security of Password Expiration

The security of modern password expiration: an algorithmic framework and empirical analysis, by Yinqian Zhang, Fabian Monrose and Michael Reiter. (ACM DOI link)

This paper presents the first large-scale study of the success of password expiration in meeting its intended purpose, namely revoking access to an account by an attacker who has captured the account’s password. Using a dataset of over 7700 accounts, we assess the extent to which passwords that users choose to replace expired ones pose an obstacle to the attacker’s continued access. We develop a framework by which an attacker can search for a user’s new password from an old one, and design an efficient algorithm to build an approximately optimal search strategy. We then use this strategy to measure the difficulty of breaking newly chosen passwords from old ones. We believe our study calls into question the merit of continuing the practice of password expiration.

This is the sort of work that we at the New School love. Take a best practice recommended by just about everyone for what seems like excellent reasons, and take notice of the fact that human beings are going to game your practice. Then get some actual data, and see how effective the practice is.
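
To give a flavor of the approach (this is my own simplified sketch, not the authors’ algorithm), the key observation is that users tend to derive new passwords from expired ones via small transforms, which an attacker can enumerate:

```python
def candidate_new_passwords(old):
    """Simplified sketch of transform-based guessing: enumerate small,
    common modifications of an expired password. Not the paper's algorithm."""
    guesses = set()
    # Increment a trailing digit, or append one: password3 -> password4
    if old and old[-1].isdigit():
        guesses.add(old[:-1] + str((int(old[-1]) + 1) % 10))
    else:
        guesses.add(old + "1")
    # Toggle case of the first character: password -> Password
    guesses.add(old[:1].swapcase() + old[1:])
    # Append common suffixes
    for suffix in ("!", "#", "2013"):
        guesses.add(old + suffix)
    # Simple leet substitutions
    guesses.add(old.replace("a", "@").replace("o", "0"))
    return guesses

print(candidate_new_passwords("password1"))
```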

Unfortunately, we lack data on rates of compromise for organizations with different password change policies. So it’s hard to tell if password policies actually do any good, or which ones do good. However, we can guess that not making your default password “stratfor” is a good idea.

ACM gets a link because they allow you to post copies of your own papers, rather than inhibiting the progress of science by locking it all up.

Top 5 Security Influencers of 2011

I really like Gunnar Peterson’s post on “Top 5 Security Influencers:”

It’s December and so it’s the season for lists. Here is my list of Top 5 Security Influencers; this is the list of the people who have the biggest (good and/or bad) influence on your company’s and users’ security:

My list is slightly different:

  1. The Person Coding Your App
  2. Your DBA
  3. Your Testers
  4. Your Ops team
  5. The person with the data
  6. Uma Thurman
  7. You

That’s right, without data to argue an effective case for investing in security, you have less influence than Uma Thurman. And even if you have more influence than her, if you want to be in the top 5, you better be the person bringing the data.

As long as we’re hiding everything that might allow us to judge comparative effectiveness, we’re going to continue making no progress.

Ahh, but which Uma?
Update: Chris Hoff asks “But WHICH Uma? Kill Bill Uma or Pulp Fiction Uma?” and sadly, I have to answer: The Truth About Cats and Dogs Uma. You remember. Silly romantic comedy where a guy falls in love with radio veterinarian Janeane Garofalo, who’s embarrassed about her looks? And Uma plays her gorgeous but vapid neighbor? That’s the Uma with more influence than you. The one who spends time trying not to be bubbly when her audition for a newscaster job leads off with “hundreds of people feared dead in a nuclear accident?” Yeah. That Uma. Because at least she’s nice to look at while going on about stuff no one cares about. But you know? If you show up with some chops and some useful data to back your claims, you can do better than that.

On the downside, you’re unlikely to ever be as influential as Kill Bill Uma. Because, you know, she has a sword, and a demonstrated willingness to slice the heads off of people who argue with her, and a don’t-care attitude about jail. It’s hard to top that for short term influence. Just ask the 3rd guy trying to code your app, and hoping it doesn’t crash. He’s got eyes for no one not carrying that sword.