Bicycling & Risk

While everyone else is talking about APT, I want to talk about risk thinking versus outcome thinking.

I have a lot of colleagues whom I respect who think about risk in some fascinating ways: the Risk Hose and SIRA folks, for example.
I’m inspired by “To Encourage Biking, Cities Lose the Helmets”:

In the United States the notion that bike helmets promote health and safety by preventing head injuries is taken as pretty near God’s truth. Un-helmeted cyclists are regarded as irresponsible, like people who smoke. Cities are aggressive in helmet promotion.

But many European health experts have taken a very different view: Yes, there are studies that show that if you fall off a bicycle at a certain speed and hit your head, a helmet can reduce your risk of serious head injury. But such falls off bikes are rare — exceedingly so in mature urban cycling systems.

On the other hand, many researchers say, if you force or pressure people to wear helmets, you discourage them from riding bicycles. That means more obesity, heart disease and diabetes. And — Catch-22 — a result is fewer ordinary cyclists on the road, which makes it harder to develop a safe bicycling network. The safest biking cities are places like Amsterdam and Copenhagen, where middle-aged commuters are mainstay riders and the fraction of adults in helmets is minuscule.

“Pushing helmets really kills cycling and bike-sharing in particular because it promotes a sense of danger that just isn’t justified.”

Given that we don’t have statistics about infosec analogs to head injuries, nor to obesity, I’m curious: where can we make the best infosec analogy to bicycling and helmets? Where are our outcomes potentially worse because we focus on every little risk?

My favorite example is password change policies, where we absorb substantial amounts of everyone’s time without evidence that the changes improve our outcomes.

What’s yours?

New paper: "How Bad Is It? — A Branching Activity Model for Breach Impact Estimation"

Adam just posted a question about CEO “willingness to pay” (WTP) to avoid bad publicity regarding a breach event. As it happens, we just submitted a paper to the Workshop on the Economics of Information Security (WEIS) that proposes a breach impact estimation method that might apply to Adam’s question. We use the WTP approach in a specific way, by posing this question to all affected stakeholders:

“Ex ante, how much would you be willing to spend on response and recovery for a breach of a particular type? Through what specific activities and processes?”

We hope this approach can bridge theoretical and empirical research, and also professional practice.  We also hope that this method can be used in public disclosures.
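If you want a feel for the shape of the model before reading the paper, here is a toy sketch of the general idea in code. To be clear, this is an illustration, not the model as specified in the paper: activities branch into sub-activities, ex ante WTP attaches to specific activities, and the estimates roll up. The activity names and dollar figures are invented.

```python
# A toy illustration of a branching activity structure (not the paper's
# actual model): ex ante willingness-to-pay attaches to specific response
# and recovery activities, which branch into sub-activities, and the
# estimates roll up. Names and numbers are invented.

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    wtp: float = 0.0                       # ex ante WTP for this activity alone
    children: list = field(default_factory=list)

    def total(self) -> float:
        """Roll up WTP over this activity and everything branching from it."""
        return self.wtp + sum(child.total() for child in self.children)

breach = Activity("customer data breach", children=[
    Activity("notification", wtp=120_000, children=[
        Activity("call center surge", wtp=40_000),
    ]),
    Activity("forensics and containment", wtp=250_000),
    Activity("credit monitoring for affected customers", wtp=300_000),
])

print(f"Rolled-up impact estimate: ${breach.total():,.0f}")
```

The actual method in the paper handles stakeholders and elicitation with much more care; this is just the branching roll-up shape.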

Paper: How Bad is it? – A Branching Activity Model to Estimate the Impact of Information Security Breaches

Infographic from the example in the paper

In the next few months we will be applying this to half a dozen historical breach episodes to see how it works out.  This model will also probably find its way into my dissertation as “substrate”.  The dissertation focus is on social learning and institutional innovation.

Comments and feedback are most welcome.

Base Rate & Infosec

At SOURCE Seattle, I had the pleasure of seeing Jeff Lowder and Patrick Florer present on “The Base Rate Fallacy.” The talk was excellent, laying out the idea of the base rate fallacy and how and why it matters to infosec. What really struck me about this talk was that about a week before, I had read a presentation of the fallacy with exactly the same example in Kahneman’s “Thinking, Fast and Slow.” The problem is you have a witness who’s 80% accurate, describing a taxi as orange; what are the odds she’s right, given certain facts about the distribution of taxis in the city?

I had just read the discussion. I recognized the problem. I recognized that the numbers were the same. I recalled the answer. I couldn’t remember how to derive it, and got the damn thing wrong.
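For anyone else who wants the derivation handy next time, it’s one application of Bayes’ rule. The sketch below uses the standard numbers from Kahneman’s version of the problem (15% of the city’s cabs are the color the witness named, and the witness is right 80% of the time); I’m assuming the talk used the same values.

```python
# Bayes' rule for the taxi-witness problem, using the standard numbers from
# Kahneman's version (assumed here): 15% of cabs are the color the witness
# named, 85% are not, and the witness identifies colors correctly 80% of
# the time.

p_orange = 0.15                   # base rate: fraction of cabs that are orange
p_other = 0.85
p_say_orange_given_orange = 0.80  # witness accuracy
p_say_orange_given_other = 0.20   # witness misidentifies the other color

# P(witness says orange), by total probability
p_say_orange = (p_say_orange_given_orange * p_orange
                + p_say_orange_given_other * p_other)

# P(cab is orange | witness says orange), by Bayes' rule
posterior = p_say_orange_given_orange * p_orange / p_say_orange
print(f"P(orange | witness says orange) = {posterior:.2f}")  # ~0.41
```

The low base rate drags the answer down to roughly 41%, which is exactly the intuition the fallacy tramples.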

Well played, sirs! Game to Jeff and Patrick.

Beyond that, there’s an important general lesson in the talk. It’s easy to make mistakes. Even experts, primed for the problems, fall into traps and make mistakes. If we publish only our analysis (or worse, engage in information sharing), then others can’t see what mistakes we might have made along the way.

This problem is exacerbated in a great deal of work by a lack of a methodology section, or a lack of clear definitions.

The more we publish, the more people can catch one another’s errors, and the more the field can advance.

Aitel on Social Engineering

Yesterday, Dave Aitel wrote a fascinating article, “Why you shouldn’t train employees for security awareness,” arguing that money spent on security awareness training for employees is wasted.

While I don’t agree with everything he wrote, I submit that your opinion on this (and mine) is irrelevant. The key question is “Is money spent on security awareness a good way to spend your security budget?”

The key is that we have data now. As one commenter put it:

[Y]ou somewhat missed the point of the phishing awareness program. I do agree that annual training is good at communicating policy – but horrible at raising a persistent level of awareness. Over the last 5 years, our studies have shown that annual IA training does very little to improve the awareness of users against social engineering based attacks (you got that one right). However we have shown a significant improvement in awareness as a result of running the phishing exercises (Carronade). These exercises sent fake phishing emails to our cadet population every few weeks (depends on the study). We demonstrated a very consistent reduction to an under 5% “failure” after two iterations. This impact is short lived though. After a couple months of not running the study, the results creep back up to 40% failure.

So, is a reduction in phishing failure to under 5% a good investment? Are there other investments that bring your failure rates lower?
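Here’s the kind of back-of-the-envelope arithmetic that question invites. Every number below is invented for illustration (user count, phishing volume, program cost); the point is the shape of the comparison, not the answer.

```python
# Back-of-the-envelope comparison of phishing failure rates. All inputs are
# invented for illustration; substitute your own data.

def expected_compromised_users(users, phish_per_user_per_year, failure_rate):
    """Expected number of users who fall for at least one phish in a year."""
    p_never_falls = (1 - failure_rate) ** phish_per_user_per_year
    return users * (1 - p_never_falls)

users = 10_000
phish_per_user = 12           # assumed phishing emails reaching each user per year

baseline  = expected_compromised_users(users, phish_per_user, 0.40)  # no exercises
exercised = expected_compromised_users(users, phish_per_user, 0.05)  # with exercises

program_cost = 50_000         # hypothetical annual cost of the exercises
prevented = baseline - exercised
print(f"Expected compromised users: {baseline:.0f} -> {exercised:.0f}")
print(f"Cost per prevented compromise: ${program_cost / prevented:,.2f}")
```

Run the same arithmetic for a mail gateway, hardware tokens, or whatever else is competing for the budget, and you have the comparison Dave’s argument actually turns on.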

As I pointed out in “The Evolution of Information Security” (context), we can get lots more data with almost zero investment.

If someone (say, GAO) obtains data on US Government department training programs, and cross-correlates that with incidents being reported to US-CERT, then we can assess the efficacy of those training programs.

Opinions, including mine, Dave’s, and yours, just ain’t relevant in the face of data. We can answer well-framed questions of “which of these programs best improves security” and “is that improvement superior to other investments?”

The truth, in this instance, won’t set us free. Dave Mortman pointed out that a number of regulations may require awareness training as a best practice. But if we get data, we can, if needed, address the regulations.

If you’re outraged by Dave’s claims, prove him wrong. If you’re outraged by the need to spend money on social engineering, prove it’s a waste.

Put the energy to better use than flaming over a hypothesis.

Yet More On Threat Modeling: A Mini-Rant

Yesterday Adam responded to Alex’s question on what people thought about IanG’s claim that threat modeling fails in practice, and I wanted to reiterate what I said on Twitter about it:

It’s a tool! No one claimed it was a silver bullet!

Threat modeling is yet another input into an overall risk analysis. And you know what? Risk analysis/risk management, or whatever you want to call it, won’t be perfect either. Threat modeling is in itself a model. All models are broken. We’ll get better at it.

But claiming that something is a failure because it’s not perfect and doesn’t always work is one of the cardinal sins of infosec from my perspective. Every time we do that, we do ourselves and our industry a disservice. Stop letting the perfect be the enemy of the useful.

Aviation Safety

The past 10 years have been the best in the country’s aviation history with 153 fatalities. That’s two deaths for every 100 million passengers on commercial flights, according to an Associated Press analysis of government accident data.

The improvement is remarkable. Just a decade earlier, at the time the safest, passengers were 10 times as likely to die when flying on an American plane. The risk of death was even greater during the start of the jet age, with 1,696 people dying — 133 out of every 100 million passengers — from 1962 to 1971. The figures exclude acts of terrorism.


There are a number of reasons for the improvements.

  • The industry has learned from the past. New planes and engines are designed with prior mistakes in mind. Investigations of accidents have led to changes in procedures to ensure the same missteps don’t occur again.
  • Better sharing of information. New databases allow pilots, airlines, plane manufacturers and regulators to track incidents and near misses. Computers pick up subtle trends. For instance, a particular runway might have a higher rate of aborted landings when there is fog. Regulators noticing this could improve lighting and add more time between landings.

(“It’s never been safer to fly; deaths at record low“, AP, link to Seattle PI version.)

Well, it seems there’s nothing for information security to learn here. Move along.

Discussing Norm Marks' GRC Wishlist for 2012

Norm Marks of the famous Marks On Governance blog has posted his 2012 wishlist.  His blog limits the characters you can leave in a reply, so I thought I’d post mine here.

1.  Norm Wishes for “A globally-accepted organizational governance code, encompassing both risk management and internal control”

Norm, if you mean encompassing both so that they are tightly coupled, I respectfully disagree. Ethically, philosophically, these should be separate entities and ne’er the twain should meet. Plus, accountants & auditors make poor actuaries. See the SoA condemnation of RCSA.

Second, the problem with a globally accepted something is that it limits innovation.  We already have enough of this “we can’t do things right because we’ll have to justify doing things differently than the global priesthood says we have to” problem to deal with now.  Such documentation will only exacerbate the issue.

2.  Norm wishes for: “The convergence of the COSO ERM Framework and the global ISO 31000:2009 risk management standard.”

See #1, part 2 above.

3.  Norm wishes for:  “An update of the COSO Internal Control Framework that recognizes that internal controls are the organization’s response to uncertainty (i.e., risk), and you need the controls to ensure the likelihood and effects of uncertainty are within organizational tolerances.”

First, risk only equals uncertainty if you’re one of those Knightians stuck in the early 20th century. For those who aren’t, and especially actuaries and Bayesians alike, uncertainty is a factor in risk analysis – not the existence of risk.

Second, this wish seems to be beholden to the fundamental flaw of the Accounting Consultancy Industrial Complex – that Residual Risk = Inherent Risk – Controls. Let me ask you: what controls do you personally have against an asteroid slamming into your house? But is that “high” risk? Do you operate daily as if it’s “high” risk? Why not? Certainly you have weak controls, and most people would argue that their house and families are of high value…

The reason it’s not “high risk” is because of frequency.  Yes, frequency matters in risk – and your RCSA process doesn’t (usually, formally) account for that.
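To make the frequency point concrete, here’s the arithmetic with rough numbers (the asteroid frequency is an order-of-magnitude guess for illustration, not a citation):

```python
# Annualized-loss arithmetic for the asteroid example. The frequency is an
# order-of-magnitude guess for illustration, not a sourced figure.

house_value = 400_000          # impact if it happens (assume total loss)
p_strike_per_year = 1e-8       # guessed annual frequency of an asteroid hitting your house

annualized_loss_expectancy = p_strike_per_year * house_value
print(f"ALE: ${annualized_loss_expectancy:.4f} per year")   # fractions of a cent
```

Huge impact, weak controls, and the annualized figure is still fractions of a cent, because frequency does the work that “Inherent Risk – Controls” ignores.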

4.) Norm wants “guidance that explains how you can set guidance on risk-taking that works not only for (a) the board and top management (who want to set overall limits), but also for (b) the people on the front lines who are the ones actually making decisions, accepting risks, and taking actions to manage the risks. The guidance also has to explain the need to measure and report on whether the actions taken on the front lines aggregate to levels within organizational tolerances.”

Great idea, but for this one to work, you’d have to establish guidance around reward-taking, tolerance, etc., too.

5.) Norm wants “A change to the opinion provided by the external auditors, from one focusing on compliance with GAAP to one focusing on whether the financial reports filed with the regulators provide a true and fair view of the results of operations, the condition of the organization, and the outlook for the future.”

I’m going with “bad idea” on this one.  Accountants != entrepreneurs.  Despite all their longing for control, power, and self-importance.

6.) For Norm, regulators should receive “An opinion by management on the effectiveness of the enterprise-wide risk management program. This could be based on the assessment of the internal audit function”

I’m confused, how is the internal audit function in any way at all related to the quality of decision making?  Assurance is *an* evidence, a confidence value for specific risk factors.  It seems that Norm is saying that assurance is *the* evidence in total.

Frankly, very few accountants have training or exposure to probability theory, decision theory, or complexity theory.  Until they *do*, my wish for 2012 is that CPAs  reserve judgement on people trying to use real methods to solve real problems.

7.) Norm wants:  A change in attitude of investor groups, focusing on longer-term value instead of short-term results.

AGREED and +1 to you Norm!

In 10.) a, Norm desires that “audit engagements should be prioritized based on the risk to the organization and the value provided by an internal audit project.”

ABSOLUTELY NOT.  Unless Audit engagements are to be prioritized by the faulty idea of “Inherent Risk”.

Example: as a risk manager, I may have relatively stable frequency and magnitude of operational losses. They may fall into a “low” tolerance range established by an ERMC or something. But even though I am doing a good job (or am really lucky), I may still be concerned enough about the process to warrant a high frequency of audit. There are just so many concerns about this sort of approach by an auditor (from a risk/actuary standpoint) that I can’t disagree more.

In point 11 Norm’s wish is “An improved understanding by the board and top management of the value of internal audit as a provider of assurance relative to governance, risk management”

Me too, but I don’t think Norm and I agree on that “value.”

Again, for a mature risk management group, the value of assurance is simply the establishment of confidence values for certain inputs. And frankly, if the board and top management understood that, I’m not sure Norm would really want them to, because many times the assurance is really a reinforcement of confidence/certainty, and frankly is a job that can easily be done with a risk model that reduces SME bias.

Finally, Norm “would like to see the term “GRC” disappear”

AMEN.  To use the ISACA/Audit terminology, Compliance is just “a risk.”  To use risk terminology, Compliance is a factor that contributes to secondary or indirect losses.

So, I’m with you – I’d like to see GRC taken out behind the shed. Where I differ is that, for me, that’s not because compliance becomes coupled with risk management, but because compliance aligns better with the authoritarian world of audit than with a discipline like risk, whose goal is to reduce subjectivity, or a discipline like governance, whose role is to optimize resource expenditures.

The One Where David Lacey's Article On Risk Makes Us All Stupider

In possibly the worst article on risk assessment I’ve seen in a while, David Lacey of Computerworld gives us the “Six Myths of Risk Assessment.” This article is so patently bad, so heinously wrong, that it stuck in my craw enough to write this blog post. So let’s discuss why Mr. Lacey has no clue what he’s writing about, shall we?

First Mr. Lacey writes:

1. Risk assessment is objective and repeatable

It is neither. Assessments are made by human beings on incomplete information with varying degrees of knowledge, bias and opinion. Groupthink will distort attempts to even this out through group sessions. Attitudes, priorities and awareness of risks also change over time, as do the threats themselves. So be suspicious of any assessments that appear to be consistent, as this might mask a lack of effort, challenge or review.

Sounds reasonable, no? Except it’s not altogether true. Yes, if you’re doing idiotic RCSA of Inherent – Control = Residual, it probably is, but those assessments aren’t the totality of current state.

“Objective” is such a loaded word.  And if you use it with me, I’m going to wonder if you know what you’re talking about.  Objectivity / Subjectivity is a spectrum, not a binary, and so for him to say that risk assessment isn’t “objective” is an “of course!”  Just like there is no “secure” there is no “objective.”

But Lacey’s misunderstanding of the term aside, let’s address the real question: “Can we deal with the subjectivity in assessment?”  The answer is a resounding “yes” if your model formalizes the factors that create risk and logically represents how they combine to create something with units of measure.  And not only will the right model and methods handle the subjectivity to a degree that is acceptable, you can know that you’ve arrived at something usable when assessment results become “blindly repeatable.”  And yes, Virginia, there are risk analysis methods that create consistently repeatable results for information security.
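As one sketch of what “formalizes the factors that create risk and logically represents how they combine” can look like, here’s a toy frequency-times-magnitude simulation in the spirit of FAIR-style analysis. The distributions and parameters are invented; a real analysis would calibrate them with estimates and data.

```python
# A toy frequency x magnitude Monte Carlo in the spirit of FAIR-style risk
# analysis. Distributions and parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)   # fixed seed: same inputs, same outputs

def simulate_annual_losses(years=100_000,
                           events_per_year=0.3,      # loss event frequency (Poisson mean)
                           magnitude_median=50_000,  # per-event loss (lognormal median)
                           magnitude_sigma=1.2):
    n_events = rng.poisson(events_per_year, size=years)
    totals = np.zeros(years)
    for i, n in enumerate(n_events):
        if n:
            totals[i] = rng.lognormal(np.log(magnitude_median), magnitude_sigma, n).sum()
    return totals

annual = simulate_annual_losses()
print(f"Mean annual loss:      ${annual.mean():,.0f}")
print(f"95th percentile:       ${np.percentile(annual, 95):,.0f}")
print(f"P(any loss in a year): {(annual > 0).mean():.2f}")
```

Two analysts given the same inputs get the same distribution out, which is the kind of “blindly repeatable” I mean; the argument then moves to the inputs, where it belongs.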

2. Security controls should be determined by a risk assessment

Not quite. A consideration of risks helps, but all decisions should be based on the richest set of information available, not just on the output of a risk assessment, which is essentially a highly crude reduction of a complex situation to a handful of sentences and a few numbers plucked out of the air. Risk assessment is a decision support aid, not a decision making tool. It helps you to justify your recommendations.

So the key here is “richest set of information available” – if your risk analysis leaves out key or “rich” information, it’s pretty much crap.  Your model doesn’t fit, your hypothesis is false, start over.  If you think that this is a trivial matter for him to not understand, I’ll offer that this is kind of the foundation of modern science.  And mind you, this guy was supposedly a big deal with BS7799.  Really.

4. Risk assessment prevents you spending too much money on security

Not in practice. Aside from one or two areas in the military field where ridiculous amounts of money were spent on unnecessary high end solutions (and they always followed a risk assessment), I’ve never encountered an information system that had too much security. In fact the only area I’ve seen excessive spending on security is on the risk assessment itself. Good security professionals have a natural instinct on where to spend the money. Non-professionals lack the knowledge to conduct an effective risk assessment.

This “myth” basically made me physically ill.  This statement “I’ve never encountered an information system that had too much security” made me laugh so hard I keeled over and hurt my knee in the process by slamming it on the little wheel thing on my chair.

Obviously Mr. Lacey never worked for one of my previous employers that forced 7 or so (known) endpoint security applications on every Windows laptop.  Of course you can have too much !@#%ing security!  It happens all the !@#%ing time.  We overspend where frequency and impact ( <- hey, risk!) don’t justify the spend.  If I had a nickel for every time I saw this in practice, I’d be a 1%er.

But more to the point, this phrase (never too much security) makes several assumptions about security that are patently false.  But let me focus on this one:  This statement implies that threats are randomly motivated.  You see, if a threat has targeted motivation (like IP or $) then they don’t care about systems that offer no value in data or in privilege escalation.  Thus, you can spend too much on protecting assets that offer no or limited value to a threat agent.

5. Risk assessment encourages enterprises to implement security

No, it generally operates the other way around. Risk assessment means not having to do security. You just decide that the risk is low and acceptable. This enables organisations to ignore security risks and still pass a compliance audit. Smart companies (like investment banks) can exploit this phenomenon to operate outside prudent limits.

I honestly have no idea what he’s saying here.  Seriously, this makes no sense.  Let me explain.  Risk assessment outcomes are neutral states of knowledge.  They may feed a decision (a state of wisdom) around budget, compliance, and acceptance (or addressing or transferring, too), but that is a logically separate task.

If it’s a totally separate decision process to deal with the risk, and he cannot recognize this is a separate modeling construct, these statements should be highly alarming to the reader.  It screams “THIS MAN IS AUTHORIZED BY A MAJOR MEDIA OUTLET TO SPEAK AS AN SME ON RISK AND HE IS VERY, VERY CONFUSED!!!!”

Then there is that whole thing at the end where he calls companies that address this process illogically as “smart.”  Deviously clever, I’ll give you, but not smart.

6. We should aspire to build a “risk culture” across our enterprises

Whatever that means it sounds sinister to me. Any culture built on fear is an unhealthy one. Risks are part of the territory of everyday business. Managers should be encouraged to take risks within safe limits set by their management.

So by the time I got to this “myth” my mind was literally buzzing with anger.  But then Mr. Lacey tops us off with this beauty.  This statement is so contradictory to his past “myth” assertions, so bizarrely out of line with his last statement in any sort of deductive sense, that one has to wonder if David Lacey isn’t actually an information security surrealist or post-modernist who rejects reason, logic, and formality outright for the sake of random, disconnected and downright silly approaches to risk and security management. Because that’s the only way this statement could possibly make sense.  And I’m not talking “pro” or “con” for risk culture here; I’m just talking about how his mind could possibly balance the notion that an “enterprise risk culture” sounds sinister against “Managers should be encouraged to take risks within safe limits set by their management” and even “I’ve never encountered an information system that had too much security.”

(Mind blown – throws up hands in the air, screams AAAAAAAAAAAAAAAAAHHhHHHHHHHHHHHHH at the top of his lungs and runs down the hall of work as if on fire)

See?  Surrealism is the only possible explanation.

Of course, if he was an information security surrealist, this might explain BS7799.

What is Risk (again)?

The thread “What is Risk?” came up on a LinkedIn group. Thought you might enjoy my answer:

———————-

Risk != uncertainty (unless you’re a Knightian frequentist, and then you don’t believe in measurement anyway), though if you were to account for risk in an equation, the amount of uncertainty would be a factor.

risk != “likelihood” (to a statistician or probabilist anyway). Like uncertainty, likelihood has a specific meaning.
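For the record, the textbook meaning: the likelihood is the probability (or density) of the observed data, viewed as a function of the parameter, not the probability of an event.

```latex
% Likelihood, as statisticians use the term: a function of the parameter
% \theta for fixed observed data x.
\mathcal{L}(\theta \mid x) = P(x \mid \theta)
```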

What is risk? It’s a hypothetical construct, something we synthesize in our brains to describe the danger inherent in the various inputs we’re processing around a certain situation. Depending on the situation, it can be either very difficult to create an “immaculate” equation for risk (such as R = T x V x I), or in many cases, impossible.

As an example, in IT, and especially for a large enterprise, we may have a complex adaptive system with characteristics of strong emergence. We see the same thing in medicine and various fields of biology. As such, point probabilities are pretty much impossible, or require so much simulation effort as to be difficult to produce.

Also, because it is a hypothetical construct we create in our brains, risk is also subject to the perspective of the observer. It is poly-centric. And because humans have a very difficult time divorcing their own risk tolerance (derived from their own internal, ad hoc assessment) from the information they have and the way they’re required to use that information, the true nature of the inherent danger, the majority of the “risk,” is left unexpressed.

In my opinion, it’s important to dwell on these two pieces of information (risk may apply to a CAS with strong emergence, risk is subjective to the viewer) because they explain why the information security bureaucracies (ISACA, the ISO, NIST, most standards bodies, in fact) do us a huge disservice.

First, what our standards bodies typically do is enable us to justify our perspective by manipulating the inputs into a completely false model (jet engine x peanut butter = shiny!). This is the first significant way we give false information (or at best, information so poor it cannot create a state of knowledge) to decision makers.

Second, standards bodies, in the rush to provide value through “certification” have prematurely standardized processes to do “risk management.” This is the second way we are left giving false information to decision makers. Standardization without acknowledging the nature of risk (CAS, emergence, poly-centrism) results in the analyst ignoring critical pieces of the complex system that certainly contribute (sometimes significantly) to a full understanding of the situation.

Bottom line, IT risk is something created without being understood. It is the most important concept in information security, and the most abused. Until we have data, evidence of significant quality (see evidence-based practices), we cannot derive sane models; we cannot begin to understand the problem space.

As such, “risk” probably encompasses all of the above statements made in this thread, while in truth not resembling them at all (1).

————
1.) The thread was full of people explaining their “likelihood x impact” models. Variations on a theme, mainly.

Actually It *IS* Too Early For Fukushima Hindsight

OR – RISK ANALYSIS POST-INCIDENT, HOW TO DO IT RIGHT

Rob Graham called me out on something I retweeted here (seriously, who calls someone out on a retweet?  Who does that?):

http://erratasec.blogspot.com/2011/03/fukushima-too-soon-for-hindsight.html

And that’s cool, I’m a big boy, I can take it.  And Twitter doesn’t really give you a means to explain why you think that it’s too early to do a hindsight review of Fukushima, so I’ll use this forum.

Here’s the TL;DR version: It’s too early to do hindsight or causal analysis on Fukushima – there is still a non-zero chance that something really bad could happen, we’re not at a point where the uncertainty in our information has stabilized, and any analysis done now would still be predictive about a future state.

But if you’re interested in the extended remix, there are several great reasons NOT to use Fukushima for a risk management case study just yet:

  1. Um, the incident isn’t over. It’s closer to contained, sure, but it’s not inconceivable that there’s more that could go seriously wrong.  Risk is both frequency and impact, and an incident involves both primary and secondary threat agents.  Expanding our thinking to include these concepts, it’s not difficult to understand that we’re a long way from being “done” with Fukushima.
  2. Similarly, given the forthrightness of TEPCO, I’d bet we don’t know the whole story of what has happened, much less the current state. The information that has been revealed has so much uncertainty in it, it’s near incapable of being used in causal analysis (more on that, below).
  3. The complexity of the situation requires real thought, not quick judgment.

Now Rob doesn’t claim to be an expert in risk analysis (and neither do I, I just know how horribly I’ve failed in the past), so we can’t blame him. But let’s discuss two basic reasons why Rob completely poops the bed on this one, why the entire blog post is wrong. First, post-incident, our analytics aren’t nearly as predictive as pre-incident or during-incident analytics. They can still be predictive (addressing the remaining uncertainty in whatever prior distributions we’re using), but they are generally much more accurate.

Second, what Rob doesn’t seem to understand is that post-incident risk management is kind of like causal analysis, but (hopefully) with science involved.  It’s a different animal.

Post-incident risk analysis involves a basic model fit review, identifying why we weren’t precise in those likelihood (1) estimations Rob talks about. It’s in this manner that Jaynes describes probability theory as the logic of science: the hypothesis you make (your “prediction”) must be examined and the model adjusted post-experiment.  It’s basic scientific method.  I don’t blame Rob for getting these concepts mixed up. I see it as a function of what Dan Geer calls “premature standardization”: our industry truly believes that the sum of risk management is only what is told to them by ISACA, the ISO, OCTAVE, and NIST about IT risk (as if InfoSec were the peak of probability theory and risk management knowledge).  This is another reason to question the value of the CRISC, if in the (yet unreleased) curriculum there’s no focus on model selection, model fit determination, or model adjustment.
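To make “the model adjusted post-experiment” concrete, here’s a minimal sketch of the Bayesian version of that loop, with invented numbers: a Beta prior over an annual event frequency, updated once the observations are in.

```python
# Minimal sketch of adjusting a frequency estimate after observation:
# conjugate Beta-Binomial update. All numbers are invented for illustration.

a_prior, b_prior = 1.0, 99.0     # prior: roughly a 1-in-100-year event
years_observed, events = 40, 1   # hypothetical observation window

a_post = a_prior + events
b_post = b_prior + (years_observed - events)

prior_mean = a_prior / (a_prior + b_prior)
post_mean = a_post / (a_post + b_post)
print(f"Prior mean annual frequency:     {prior_mean:.4f}")   # 0.0100
print(f"Posterior mean annual frequency: {post_mean:.4f}")    # 0.0143
```

The hindsight step is exactly this confrontation of the prior with what actually happened; doing it before the observations have settled just means you’ll be doing it again.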

So if the idea of doing hindsight now is invalid, because what we’re dealing with at the moment is a different animal than a true post-incident analysis, what do we do?

First, if you really want, make another predictive model.  The incident isn’t over, we don’t have all the facts, but if you really wanted to you could create another (time-framed) predictive model adjusted for the new information we do have.

Second, we wait.  As we wait we collect more information, do more review of the current model. But we wait for the incident to be resolved, and by resolved I mean where it’s apparent that we have about as much information as we’ll be able to gather.  THEN you do hindsight.

At the point of hindsight you review the two basic aspects of your risk model for accuracy – frequency and impact.  Does the impact of what happened match expectations?  To what degree are you off?  Did you account for radioactive spinach?  Did you account for panicky North Americans and Europeans buying up all the iodine pills?  You get the picture. If, as in this case, there may be long-term ramifications, do we make a new impact model? (I think so, or at least create a wholly new hypothesis about long-term impact.)

Once you’re comfortable with the impact review and any further estimation, you tackle the real bear – your “frequency” determination.  Now, depending on whether you’re prone to Bayesian probabilities or frequentist counts, you’ll approach the subject differently, but with key similarities.  The good news is that despite the differing approaches the basic gist is:  identify the factors or determinants you missed or were horribly wrong about.  This is difficult because more times than not in InfoSec, that 3rd bullet up there, the complexity thing, has ramifications.  Namely:

There usually isn’t one cause, but a series of causes that create the state of failure. (link is Dr. Richard Cook’s wonderful .pdf on failure)

In penetration testing (and Errata should know this of all people), it’s not just the red/blue team identifying one weakness and exploiting it and then #winning.  Usually (and especially in enterprise networks) there are sets of issues that cause a cascade of failures that lead to the ultimate incident.  It’s not just SQLi, it’s SQLi, malware, obtain creds, find/access conf. data, exfiltrate, anti-forensics (if you’re so inclined).  And that’s not even discussing tactics like “low and slow”.  Think about it, in that very simple incident description there, we can identify a host of controls, processes and policies (and I’m not even bringing error into the conversation) that can and do fail – each causing the emergent properties that lead to the next failure.
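As arithmetic, that cascade looks something like the sketch below. The per-step probabilities are invented, and the sketch assumes the steps are independent, which is precisely what complex systems with emergent behavior violate.

```python
# A toy cascade: probability that each control fails to stop its step of the
# attack chain. Numbers are invented, and independence is assumed, which real
# complex systems routinely violate.

from math import prod

p_step_fails = {
    "SQL injection succeeds":      0.30,
    "malware install not blocked": 0.50,
    "credentials obtained":        0.60,
    "confidential data located":   0.80,
    "exfiltration not detected":   0.70,
}

p_chain = prod(p_step_fails.values())
print(f"P(full chain succeeds) = {p_chain:.3f}")   # ~0.050
```

Even in this toy version there is no single cause to point at afterwards; the post-incident review has to apportion the miss across every step, plus the dependencies the model above pretends away.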

This dependency trail is being fleshed out for Fukushima right now, but we don’t know the whole story.  We certainly didn’t count on the diesel generators having to resist a tsunami, but then there was the incompatibility of the first backups to arrive, the fact that nobody realized a big earthquake/tsunami would create a transportation nightmare and that it would take a week to get new power to the station, and there were probably dozens of other minor causes that created the whole of the incident.  Without being at an end-state determination, where we have a relatively final amount of uncertainty around the number and nature of all causes, it would be absurdly premature to start any sort of hindsight bias exercise.

It’s too early to do hindsight or causal analysis on Fukushima – there is still a non-zero chance that something really bad could happen, we’re not at a point where the uncertainty in our information has stabilized, and any analysis done now would still be predictive about a future state.

Finally, this: “The best risk management articles wouldn’t be about what went wrong, but what went right.” is just silly.  The best risk management articles are lessons learned so that we are wiser, not some self-congratulatory optimism.

1.  (BTW, gentle reader, the word “likelihood” means something very different to statisticians.  Just another point where we have really premature standardization.)