The One Where David Lacey's Article On Risk Makes Us All Stupider

In possibly the worst article on risk assessment I’ve seen in a while, David Lacey of Computerworld gives us the “Six Myths of Risk Assessment.”  This article is so patently bad, so heinously wrong, that it stuck in my craw enough to write this blog post.  So let’s discuss why Mr. Lacey has no clue what he’s writing about, shall we?

First Mr. Lacey writes:

1. Risk assessment is objective and repeatable

It is neither. Assessments are made by human beings on incomplete information with varying degrees of knowledge, bias and opinion. Groupthink will distort attempts to even this out through group sessions. Attitudes, priorities and awareness of risks also change over time, as do the threats themselves. So be suspicious of any assessments that appear to be consistent, as this might mask a lack of effort, challenge or review.

Sounds reasonable, no?  Except it’s not altogether true.  Yes, if you’re doing an idiotic RCSA of Inherent – Control = Residual, it probably is, but those assessments aren’t the totality of the current state of practice.

“Objective” is such a loaded word.  And if you use it with me, I’m going to wonder if you know what you’re talking about.  Objectivity and subjectivity are a spectrum, not a binary, so his saying that risk assessment isn’t “objective” earns only an “of course!”  Just as there is no “secure,” there is no “objective.”

But Lacey’s misunderstanding of the term aside, let’s address the real question: “Can we deal with the subjectivity in assessment?”  The answer is a resounding “yes” if your model formalizes the factors that create risk and logically represents how they combine to create something with units of measure.  And not only will the right model and methods handle the subjectivity to a degree that is acceptable, you can know that you’ve arrived at something usable when assessment results become “blindly repeatable.”  And yes, Virginia, there are risk analysis methods that create consistently repeatable results for information security.
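To make that concrete, here is a minimal sketch of the kind of model I mean: a FAIR-style decomposition of risk into loss event frequency and loss magnitude, run through a Monte Carlo simulation. The function name, the triangular distributions, and the numbers are all my own illustrative assumptions, not Lacey’s or any specific methodology’s, but they show why a formal model is repeatable: the same calibrated inputs and the same seed produce the same answer, regardless of who runs it.

```python
import random
import statistics

def simulate_ale(freq_min, freq_likely, freq_max,
                 loss_min, loss_likely, loss_max,
                 trials=100_000, seed=42):
    """Monte Carlo estimate of Annualized Loss Exposure (ALE).

    Each trial draws a loss-event frequency and a per-event loss
    magnitude from triangular distributions built from calibrated
    min / most-likely / max estimates, then multiplies them.
    Repeatability comes from the fixed model and seed, not from
    any one analyst's gut feel.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        freq = rng.triangular(freq_min, freq_max, freq_likely)
        magnitude = rng.triangular(loss_min, loss_max, loss_likely)
        losses.append(freq * magnitude)
    return statistics.mean(losses), statistics.quantiles(losses, n=10)

# e.g. 0.1-2 loss events/year (most likely 0.5), $10k-$500k per event
mean_ale, deciles = simulate_ale(0.1, 0.5, 2.0, 10_000, 250_000, 500_000)
```

The subjectivity lives where it belongs, in the calibrated input ranges; everything after that is mechanical, which is what “blindly repeatable” looks like in practice.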

2. Security controls should be determined by a risk assessment

Not quite. A consideration of risks helps, but all decisions should be based on the richest set of information available, not just on the output of a risk assessment, which is essentially a highly crude reduction of a complex situation to a handful of sentences and a few numbers plucked out of the air. Risk assessment is a decision support aid, not a decision making tool. It helps you to justify your recommendations.

So the key here is “richest set of information available” – if your risk analysis leaves out key or “rich” information, it’s pretty much crap.  Your model doesn’t fit, your hypothesis is false, start over.  If you think it’s a trivial matter for him to miss this, I’ll offer that it’s pretty much the foundation of modern science.  And mind you, this guy was supposedly a big deal with BS7799.  Really.

4. Risk assessment prevents you spending too much money on security

Not in practice. Aside from one or two areas in the military field where ridiculous amounts of money were spent on unnecessary high end solutions (and they always followed a risk assessment), I’ve never encountered an information system that had too much security. In fact the only area I’ve seen excessive spending on security is on the risk assessment itself. Good security professionals have a natural instinct on where to spend the money. Non-professionals lack the knowledge to conduct an effective risk assessment.

This “myth” basically made me physically ill.  This statement “I’ve never encountered an information system that had too much security” made me laugh so hard I keeled over and hurt my knee in the process by slamming it on the little wheel thing on my chair.

Obviously Mr. Lacey never worked for one of my previous employers that forced 7 or so (known) endpoint security applications on every Windows laptop.  Of course you can have too much !@#%ing security!  It happens all the !@#%ing time.  We overspend where frequency and impact ( <- hey, risk!) don’t justify the spend.  If I had a nickel for every time I saw this in practice, I’d be a 1%er.

But more to the point, this phrase (never too much security) makes several assumptions about security that are patently false.  But let me focus on this one:  This statement implies that threats are randomly motivated.  You see, if a threat has targeted motivation (like IP or $) then they don’t care about systems that offer no value in data or in privilege escalation.  Thus, you can spend too much on protecting assets that offer no or limited value to a threat agent.
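The arithmetic behind “you can spend too much” is not complicated. Here is a crude cost-benefit sketch (the function name and all the numbers are hypothetical, for illustration only): a control is only justified when the expected annual loss it avoids exceeds what it costs.

```python
def control_is_justified(loss_event_frequency, loss_magnitude,
                         risk_reduction, annual_control_cost):
    """Crude cost-benefit check for a single security control.

    Expected annual loss avoided = frequency * magnitude * the
    fraction of that risk the control actually removes.  If the
    control costs more than the loss it avoids, you're overspending --
    exactly the case of piling a seventh endpoint agent onto a laptop
    that targeted attackers have no reason to touch.
    """
    loss_avoided = loss_event_frequency * loss_magnitude * risk_reduction
    return loss_avoided >= annual_control_cost

# A 7th endpoint agent on a low-value laptop: negligible extra reduction
print(control_is_justified(0.3, 20_000, 0.01, 5_000))    # False: overspend
# Hardening a crown-jewel server actively targeted for IP theft
print(control_is_justified(1.0, 2_000_000, 0.25, 50_000))  # True
```

Note that `risk_reduction` collapses to near zero for assets with no value to a motivated attacker, which is precisely why “never too much security” fails.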

5. Risk assessment encourages enterprises to implement security

No, it generally operates the other way around. Risk assessment means not having to do security. You just decide that the risk is low and acceptable. This enables organisations to ignore security risks and still pass a compliance audit. Smart companies (like investment banks) can exploit this phenomenon to operate outside prudent limits.

I honestly have no idea what he’s saying here.  Seriously, this makes no sense.  Let me explain.  Risk assessment outcomes are neutral states of knowledge.  They may feed a wisdom-level decision around budget, compliance, and acceptance (or addressing or transferring the risk), but that is a logically separate task.

Dealing with the risk is a totally separate decision process, and if he cannot recognize that it is a separate modeling construct, these statements should be highly alarming to the reader.  It screams “THIS MAN IS AUTHORIZED BY A MAJOR MEDIA OUTLET TO SPEAK AS AN SME ON RISK AND HE IS VERY, VERY CONFUSED!!!!”

Then there is that whole thing at the end where he calls companies that game this process “smart.”  Deviously clever, I’ll give you, but not smart.

6. We should aspire to build a “risk culture” across our enterprises

Whatever that means it sounds sinister to me. Any culture built on fear is an unhealthy one. Risks are part of the territory of everyday business. Managers should be encouraged to take risks within safe limits set by their management.

So by the time I got to this “myth,” my mind was buzzing with anger.  But then Mr. Lacey tops us off with this beauty.  This statement is so contradictory to his past “myth” assertions, so bizarrely out of line with his own last sentence in any deductive sense, that one has to wonder whether David Lacey isn’t actually an information security surrealist or post-modernist who rejects reason, logic, and formality outright for the sake of random, disconnected, and downright silly approaches to risk and security management.  Because that’s the only way this statement could possibly make sense.  And I’m not arguing “pro” or “con” for risk culture here; I’m just asking how his mind could possibly balance the idea that an “enterprise risk culture” sounds sinister against “Managers should be encouraged to take risks within safe limits set by their management,” and even “I’ve never encountered an information system that had too much security.”

(Mind blown – throws up hands in the air, screams AAAAAAAAAAAAAAAAAHHhHHHHHHHHHHHHH at the top of his lungs and runs down the hall of work as if on fire)

See?  Surrealism is the only possible explanation.

Of course, if he was an information security surrealist, this might explain BS7799.




What's Wrong and What To Do About It?

Let me start with an extended quote from “Why I Feel Bad for the Pepper-Spraying Policeman, Lt. John Pike”:

They are described in one July 2011 paper by sociologist Patrick Gillham called, “Securitizing America.” During the 1960s, police used what was called “escalated force” to stop protesters.

“Police sought to maintain law and order often trampling on protesters’ First Amendment rights, and frequently resorted to mass and unprovoked arrests and the overwhelming and indiscriminate use of force,” Gillham writes and TV footage from the time attests. This was the water cannon stage of police response to protest.

But by the 1970s, that version of crowd control had given rise to all sorts of problems and various departments went in “search for an alternative approach.” What they landed on was a paradigm called “negotiated management.” Police forces, by and large, cooperated with protesters who were willing to give major concessions on when and where they’d march or demonstrate. “Police used as little force as necessary to protect people and property and used arrests only symbolically at the request of activists or as a last resort and only against those breaking the law,” Gillham writes.

That relatively cozy relationship between police and protesters was an uneasy compromise that was often tested by small groups of “transgressive” protesters who refused to cooperate with authorities. They often used decentralized leadership structures that were difficult to infiltrate, co-opt, or even talk with. Still, they seemed like small potatoes.

Then came the massive and much-disputed 1999 WTO protests. Negotiated management was seen to have totally failed and it cost the police chief his job and helped knock the mayor from office. “It can be reasonably argued that these protests, and the experiences of the Seattle Police Department in trying to manage them, have had a more profound effect on modern policing than any other single event prior to 9/11,” former Chicago police officer and Western Illinois professor Todd Lough argued.

Former Seattle police chief Norm Stamper gives his perspective in “Paramilitary Policing From Seattle to Occupy Wall Street”:

“We have to clear the intersection,” said the field commander. “We have to clear the intersection,” the operations commander agreed, from his bunker in the Public Safety Building. Standing alone on the edge of the crowd, I, the chief of police, said to myself, “We have to clear the intersection.”


Because of all the what-ifs. What if a fire breaks out in the Sheraton across the street? What if a woman goes into labor on the seventeenth floor of the hotel? What if a heart patient goes into cardiac arrest in the high-rise on the corner? What if there’s a stabbing, a shooting, a serious-injury traffic accident? How would an aid car, fire engine or police cruiser get through that sea of people? The cop in me supported the decision to clear the intersection. But the chief in me should have vetoed it. And he certainly should have forbidden the indiscriminate use of tear gas to accomplish it, no matter how many warnings we barked through the bullhorn.

My support for a militaristic solution caused all hell to break loose. Rocks, bottles and newspaper racks went flying. Windows were smashed, stores were looted, fires lighted; and more gas filled the streets, with some cops clearly overreacting, escalating and prolonging the conflict. The “Battle in Seattle,” as the WTO protests and their aftermath came to be known, was a huge setback—for the protesters, my cops, the community.

Product reviews on Amazon for the Defense Technology 56895 MK-9 Stream pepper spray are funny, as is the Pepper Spraying Cop Tumblr feed.

But we have a real problem here. It’s not the pepper spray that makes me want to cry, it’s how mutually reinforcing a set of interlocking systems have become. It’s the police thinking they can arrest peaceful people for protesting, or for taking video of them. It’s a court system that’s turned “deference” into a spineless art, even when it’s Supreme Court justices getting shoved aside in their role as legal observers. It’s a political system where we can’t even agree to ban the TSA, or work out a non-arbitrary deal on cutting spending. It’s a set of corporatist best practices that allow the system to keep churning along despite widespread revulsion.

So what do we do about it? Civil comments welcome. Venting welcome. Just keep it civil with respect to other commenters.

Image: Pike Floyd, by Kosso K

Twitter Updates from Adam, 2011-11-24

Powered by Twitter Tools

Twitter Updates from Adam, 2011-11-23

  • NYTimes reports man bites dog, I mean "Screening Still a Pain at Airports, Fliers Say" #
  • New School blog post, "AT&T Hack Attempt" I'm looking for polling software #
  • I missed a great opportunity in a recent podcast to say "controls implemented in a way that makes both auditors & attackers happy" #
  • RT @oneraindrop @adamshostack Union of Attacker wishlist & Auditor checklist = Best Practices #
  • RT @slaveSqueakAlot Remember when Westboro Baptist Church was picketing funerals/being asshats & they got pepper sprayed? Yeah, me neither #

Powered by Twitter Tools

AT&T Hack Attempt

First, good on AT&T for telling people that there’s been an attempt to hack their account. (My copy of the letter that was sent is after the break.) I’m curious what we can learn by discussing the attack.

An AT&T spokesperson told Fox News that “Fewer than 1 percent of customers were targeted.”

I’m currently aware of 3 other folks in the security industry who’ve gotten these. Can someone recommend a good embeddable polling software that I might use to see what the prevalence is on the biased audience that reads this blog?

Continue reading

Twitter Updates from Adam, 2011-11-20

  • New School blog post "Privacy is Security, Part LXII: The Steakhouse" #
  • MT @_nomap More on [obvious] Saudi airport fingerprint fail. It was mostly immigrant workers stranded for 12 hours. #
  • MT @dgwbirch Heard on BBC that poor people use cash, end up paying up to £185 per annum more for utilities << end the privacy tax now! #
  • RT @dgwbirch @adamshostack oh no, having you been playing that "Privacy" game from the UK? << I've been fighting to find time! #

Powered by Twitter Tools