In possibly the worst article on risk assessment I’ve seen in a while, David Lacey of Computerworld gives us the “Six Myths of Risk Assessment.” This article is so patently bad, so heinously wrong, that it stuck in my craw enough to write this blog post. So let’s discuss why Mr. Lacey has no clue what he’s writing about, shall we?
First Mr. Lacey writes:
1. Risk assessment is objective and repeatable
It is neither. Assessments are made by human beings on incomplete information with varying degrees of knowledge, bias and opinion. Groupthink will distort attempts to even this out through group sessions. Attitudes, priorities and awareness of risks also change over time, as do the threats themselves. So be suspicious of any assessments that appear to be consistent, as this might mask a lack of effort, challenge or review.
Sounds reasonable, no? Except it’s not altogether true. Yes, if you’re doing an idiotic RCSA of Inherent – Control = Residual, that’s probably the case, but those assessments aren’t the totality of the current state of practice.
“Objective” is such a loaded word. And if you use it with me, I’m going to wonder if you know what you’re talking about. Objectivity / subjectivity is a spectrum, not a binary, so for him to say that risk assessment isn’t “objective” earns an “of course!” Just like there is no “secure,” there is no “objective.”
But Lacey’s misunderstanding of the term aside, let’s address the real question: “Can we deal with the subjectivity in assessment?” The answer is a resounding “yes” if your model formalizes the factors that create risk and logically represents how they combine to create something with units of measure. And not only will the right model and methods handle the subjectivity to a degree that is acceptable, you can know that you’ve arrived at something usable when assessment results become “blindly repeatable.” And yes, Virginia, there are risk analysis methods that create consistently repeatable results for information security.
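To make “formalizes the factors that create risk” and “blindly repeatable” concrete, here’s a minimal FAIR-style Monte Carlo sketch. The ranges, function names, and trial counts are all invented for illustration; the point is only that calibrated range estimates go in, a defensible number with units (dollars per year) comes out, and two analysts running the same inputs get the same answer.

```python
import random

def simulate_annual_loss(freq_min, freq_max, loss_min, loss_max,
                         trials=10_000, seed=42):
    """Monte Carlo estimate of annualized loss exposure.

    Calibrated range estimates go in; an average annual loss in
    dollars comes out. With a fixed seed the result is repeatable.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        events = rng.uniform(freq_min, freq_max)      # loss events per year
        magnitude = rng.uniform(loss_min, loss_max)   # dollars per event
        total += events * magnitude
    return total / trials

# Two runs over the same calibrated inputs produce the same estimate:
a = simulate_annual_loss(0.1, 2.0, 10_000, 250_000)
b = simulate_annual_loss(0.1, 2.0, 10_000, 250_000)
assert a == b  # repeatable, not "plucked out of the air"
```

Uniform distributions are the crudest possible choice here; real methods use calibrated lognormal or PERT inputs, but the repeatability property is the same.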
2. Security controls should be determined by a risk assessment
Not quite. A consideration of risks helps, but all decisions should be based on the richest set of information available, not just on the output of a risk assessment, which is essentially a highly crude reduction of a complex situation to a handful of sentences and a few numbers plucked out of the air. Risk assessment is a decision support aid, not a decision making tool. It helps you to justify your recommendations.
So the key here is “richest set of information available” – if your risk analysis leaves out key or “rich” information, it’s pretty much crap. Your model doesn’t fit, your hypothesis is false, start over. If you think this is a trivial thing for him to misunderstand, I’ll offer that it’s kind of the foundation of modern science. And mind you, this guy was supposedly a big deal with BS7799. Really.
4. Risk assessment prevents you spending too much money on security
Not in practice. Aside from one or two areas in the military field where ridiculous amounts of money were spent on unnecessary high end solutions (and they always followed a risk assessment), I’ve never encountered an information system that had too much security. In fact the only area I’ve seen excessive spending on security is on the risk assessment itself. Good security professionals have a natural instinct on where to spend the money. Non-professionals lack the knowledge to conduct an effective risk assessment.
This “myth” basically made me physically ill. This statement “I’ve never encountered an information system that had too much security” made me laugh so hard I keeled over and hurt my knee in the process by slamming it on the little wheel thing on my chair.
Obviously Mr. Lacey never worked for one of my previous employers that forced 7 or so (known) endpoint security applications on every Windows laptop. Of course you can have too much !@#%ing security! It happens all the !@#%ing time. We overspend where frequency and impact ( <- hey, risk!) don’t justify the spend. If I had a nickel for every time I saw this in practice, I’d be a 1%er.
But more to the point, this phrase (never too much security) makes several assumptions about security that are patently false. But let me focus on this one: This statement implies that threats are randomly motivated. You see, if a threat has targeted motivation (like IP or $) then they don’t care about systems that offer no value in data or in privilege escalation. Thus, you can spend too much on protecting assets that offer no or limited value to a threat agent.
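The arithmetic behind that claim is trivial to show. A toy expected-loss comparison (every number below is invented for illustration): for a targeted threat, an asset that offers the attacker no value sees almost no attack attempts, so its expected loss collapses and control spend above that level is, by definition, too much security.

```python
def expected_annual_loss(attack_frequency, p_success, impact):
    """Expected loss = attempts/year x chance an attempt works x cost per success."""
    return attack_frequency * p_success * impact

# Crown-jewels server vs. a kiosk holding nothing a targeted attacker wants.
jewels = expected_annual_loss(attack_frequency=12, p_success=0.3, impact=500_000)
kiosk = expected_annual_loss(attack_frequency=0.1, p_success=0.3, impact=1_000)

# Spending $50k/year securing the kiosk can't be justified by the risk it carries:
kiosk_spend = 50_000
assert kiosk_spend > kiosk   # too much security, by the numbers
assert jewels > kiosk_spend  # the jewels, meanwhile, are underprotected at $50k
```

That’s the whole point of frequency and impact: they tell you where the money belongs, and therefore where it doesn’t.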
5. Risk assessment encourages enterprises to implement security
No, it generally operates the other way around. Risk assessment means not having to do security. You just decide that the risk is low and acceptable. This enables organisations to ignore security risks and still pass a compliance audit. Smart companies (like investment banks) can exploit this phenomenon to operate outside prudent limits.
I honestly have no idea what he’s saying here. Seriously, this makes no sense. Let me explain. Risk assessment outcomes are neutral states of knowledge. They may feed a state-of-wisdom decision around budget, compliance, and acceptance (or mitigation and transfer), but that is a logically separate task.
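The separation is easy to make concrete. In this hypothetical sketch (names and thresholds are mine, not Lacey’s), the assessment function only produces a state of knowledge; a distinct decision function combines that knowledge with appetite and budget. Neither function can do the other’s job.

```python
from dataclasses import dataclass

@dataclass
class RiskEstimate:
    """Neutral state of knowledge -- no verdict attached."""
    annualized_loss: float

def assess(freq_per_year: float, impact: float) -> RiskEstimate:
    """Assessment: produce knowledge about exposure, nothing more."""
    return RiskEstimate(annualized_loss=freq_per_year * impact)

def decide(estimate: RiskEstimate, appetite: float, control_cost: float) -> str:
    """A logically separate task: weigh knowledge against appetite and budget."""
    if estimate.annualized_loss <= appetite:
        return "accept"
    if control_cost < estimate.annualized_loss:
        return "mitigate"
    return "transfer"

est = assess(freq_per_year=2.0, impact=40_000)  # $80k ALE -- just knowledge
assert decide(est, appetite=100_000, control_cost=5_000) == "accept"
assert decide(est, appetite=10_000, control_cost=5_000) == "mitigate"
```

Note that the same estimate yields different decisions as appetite changes. “Risk assessment means not having to do security” confuses the second function with the first.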
Dealing with the risk is a totally separate decision process, and if he cannot recognize that it is a separate modeling construct, these statements should be highly alarming to the reader. It screams “THIS MAN IS AUTHORIZED BY A MAJOR MEDIA OUTLET TO SPEAK AS AN SME ON RISK AND HE IS VERY, VERY CONFUSED!!!!”
Then there is that whole thing at the end where he calls the companies that game this process “smart.” Deviously clever, I’ll give you, but not smart.
6. We should aspire to build a “risk culture” across our enterprises
Whatever that means it sounds sinister to me. Any culture built on fear is an unhealthy one. Risks are part of the territory of everyday business. Managers should be encouraged to take risks within safe limits set by their management.
So by the time I got to this “myth,” my mind was buzzing with anger. But then Mr. Lacey tops us off with this beauty. This statement is so contradictory to his past “myth” assertions, so bizarrely out of line with his last statement in any deductive sense, that one has to wonder if David Lacey isn’t actually an information security surrealist or post-modernist, one who rejects reason, logic, and formality outright for the sake of random, disconnected, and downright silly approaches to risk and security management. Because that’s the only way this statement could possibly make sense. And I’m not talking “pro” or “con” for risk culture here. I’m just asking how his mind could possibly balance the notion that an “enterprise risk culture” sounds sinister against “Managers should be encouraged to take risks within safe limits set by their management,” and even “I’ve never encountered an information system that had too much security.”
(Mind blown – throws up hands in the air, screams AAAAAAAAAAAAAAAAAHHhHHHHHHHHHHHHH at the top of his lungs and runs down the hall of work as if on fire)
See? Surrealism is the only possible explanation.
Of course, if he was an information security surrealist, this might explain BS7799.