Author: alex

FEAR AND LOATHING IN SAN FRANCISCO (RSA PRE-GAME)

So it’s early Sunday AM, and I’m finally getting my RSA schedule together.  Here’s what I’m looking forward to this week; leave a note in the comments if you’ve identified other cool stuff:

===============

Monday:  8 freaking AM – I’m talking with Rich Mogull of @securosis about Risk Management.  Fun!

Monday is also Metricon, this year run by Russ and Scott Crawford.  Should be good.

I’m capping my Monday off 4-5pm at BSides for this little gem:

Name: Dr. Mike Lloyd
Talk: Metrics That Don’t Suck: A New Way To Measure Security Effectiveness

===============

On Tuesday, I’ll be speaking with Mortman, @csoandy, Ally Miller, and Bob Blakely at the Risk Management Smackdown II:  Wrath of Kuhn.

It’s in room 309.  Don’t know how this happens, but I get to be the dumbest person on the panel.

That afternoon, I’ll probably pop over to BSides to hear Wade Baker and Chris Porter talk, and @ch0rt is doing a part 2 to his Security Moneyball talk.

===============

On Wednesday, at 10am in room 309, I’ll be talking about Metrics.  Should be awesomesauce.  Don’t know how this happens, but I get to be the dumbest person on the panel (again).

===============

THURSDAY, THURSDAY, THURSDAY!!!!!

Preston Wood, Kelly White, and Mike Fowkes from Zions Bancorp are talking about their Hadoop install and Security Data Warehouse.  So, yeah.  The hype?  Pshaw, these guys are DOING IT.  GO.  Go see this.  Srsly.

That afternoon, there’s a peer2peer risk management session going on.  Ally Miller and I are talking about Frameworks for some reason.

===============

FRIDAY

On Friday I gotta get down.  I’ll spend a large amount of my time trying to figure out if I should take the front seat, or kick it in the back seat.

Please Participate: Survey on Metrics

I got an email from my friend John Johnson who is doing a survey about metrics.  If you have some time, please respond…
————————————————————————————————————————————————
I am seeking feedback from others who may have experience developing and presenting security metrics to various stakeholders at their organization. I have a number of questions I’ve thought of, and put them into a simple survey form. I am  looking for any examples of the good, bad and ugly involved in developing meaningful metrics. What has worked well and what has failed miserably? How have you packaged and presented the results in a meaningful way to your executives?

If you can spare a few minutes, please consider taking this survey. Even if you answer one question, it is helpful!

https://docs.google.com/spreadsheet/viewform?formkey=dGhDLXZHQVB5eEZoSy03aU5JQnZxV2c6MQ

You may also simply share an example, graphics or slides via email. I will be using your feedback to facilitate peer discussions and in a presentation aimed at educating security professionals on how they can improve their security metrics program.

Thanks in advance,

John

Discussing Norm Marks' GRC Wishlist for 2012

Norm Marks of the famous Marks On Governance blog has posted his 2012 wishlist.  His blog limits the characters you can leave in a reply, so I thought I’d post mine here.

1.  Norm wishes for “A globally-accepted organizational governance code, encompassing both risk management and internal control”

Norm, if you mean encompassing both so that they are tightly coupled, I respectfully disagree.  Ethically and philosophically, these should be separate entities, and ne’er the twain should meet.  Plus, accountants & auditors make poor actuaries.  See the SoA condemnation of RCSA.

Second, the problem with a globally accepted something is that it limits innovation.  We already have enough of this “we can’t do things right because we’ll have to justify doing things differently than the global priesthood says we have to” problem to deal with now.  Such documentation will only exacerbate the issue.

2.  Norm wishes for: “The convergence of the COSO ERM Framework and the global ISO 31000:2009 risk management standard.”

See #1, part 2 above.

3.  Norm wishes for:  “An update of the COSO Internal Control Framework that recognizes that internal controls are the organization’s response to uncertainty (i.e., risk), and you need the controls to ensure the likelihood and effects of uncertainty are within organizational tolerances.”

First, risk only equals uncertainty if you’re one of those Knightians stuck in the early 20th century.  For those who aren’t, especially actuaries and Bayesians, uncertainty is a factor in risk analysis – not the existence of risk itself.

Second, this wish seems to be beholden to the fundamental flaw of the Accounting Consultancy Industrial Complex – that Residual Risk = Inherent Risk – Controls.  Let me ask you: what controls do you personally have against an asteroid slamming into your house?  Is that “high” risk?  Do you operate daily as if it’s “high” risk?  Why not?  Certainly you have weak controls, and most people would argue that their house and families are of high value…

The reason it’s not “high risk” is because of frequency.  Yes, frequency matters in risk – and your RCSA process doesn’t (usually, formally) account for that.
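To make the asteroid example concrete, here’s a toy comparison (all numbers and both scoring functions are hypothetical, for illustration only): an “inherent minus controls” score flags the asteroid as a monster risk, while a frequency-weighted expected-loss view puts it right where your gut already does.

```python
# Toy comparison: RCSA-style "residual = inherent - controls" vs. a
# frequency-weighted expected loss. All numbers are made up.
scenarios = {
    "asteroid": {"impact": 1_000_000_000, "control_strength": 0.0,
                 "annual_frequency": 1e-8},   # roughly "never in a lifetime"
    "burglary": {"impact": 20_000, "control_strength": 0.7,
                 "annual_frequency": 0.03},
}

def rcsa_score(s):
    # No frequency term at all -- this is the flaw in question.
    return s["impact"] * (1 - s["control_strength"])

def expected_annual_loss(s):
    # Frequency enters the model, as it should.
    return rcsa_score(s) * s["annual_frequency"]

for name, s in scenarios.items():
    print(f"{name}: RCSA-style score ${rcsa_score(s):,.0f}, "
          f"expected loss ${expected_annual_loss(s):,.2f}/yr")
```

On the controls-only score, the asteroid looks five orders of magnitude worse than the burglary; on the expected-loss view it comes to about ten dollars a year, which is exactly why nobody budgets for asteroid shielding.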

4.  Norm wants “guidance that explains how you can set guidance on risk-taking that works not only for (a) the board and top management (who want to set overall limits), but also for (b) the people on the front lines who are the ones actually making decisions, accepting risks, and taking actions to manage the risks. The guidance also has to explain the need to measure and report on whether the actions taken on the front lines aggregate to levels within organizational tolerances.”

Great idea, but for this one to work, you’d have to establish guidance around reward-taking, tolerance, etc., too.

5.  Norm wants “A change to the opinion provided by the external auditors, from one focusing on compliance with GAAP to one focusing on whether the financial reports filed with the regulators provide a true and fair view of the results of operations, the condition of the organization, and the outlook for the future.”

I’m going with “bad idea” on this one.  Accountants != entrepreneurs, despite all their longing for control, power, and self-importance.

6.  For Norm, regulators should receive “An opinion by management on the effectiveness of the enterprise-wide risk management program. This could be based on the assessment of the internal audit function”

I’m confused: how is the internal audit function in any way at all related to the quality of decision making?  Assurance is *an* item of evidence, a confidence value for specific risk factors.  Norm seems to be saying that assurance is *the* evidence in total.

Frankly, very few accountants have training or exposure to probability theory, decision theory, or complexity theory.  Until they *do*, my wish for 2012 is that CPAs  reserve judgement on people trying to use real methods to solve real problems.

7.  Norm wants: “A change in attitude of investor groups, focusing on longer-term value instead of short-term results.”

AGREED and +1 to you Norm!

In 10a, Norm desires that “audit engagements should be prioritized based on the risk to the organization and the value provided by an internal audit project.”

ABSOLUTELY NOT.  Unless, that is, audit engagements are to be prioritized by the faulty idea of “Inherent Risk.”

For example: as a risk manager, I may have relatively stable frequency and magnitude of operational losses.  They may fall into a “low” tolerance range established by an ERMC or something.  But even though I’m doing a good job (or am really lucky), I may be concerned enough about the process to warrant a high frequency of audit.  There are just so many concerns (from a risk/actuary standpoint) about this sort of approach by an auditor that I can’t disagree more.

In point 11, Norm’s wish is “An improved understanding by the board and top management of the value of internal audit as a provider of assurance relative to governance, risk management”

Me too, but I don’t think Norm and I agree on that “value.”

Again, for a mature risk management group, the value of assurance is simply the establishment of confidence values for certain inputs.  And frankly, if the board and top management understood that, I’m not sure Norm would really want them to, because many times the assurance is really just a reinforcement of confidence/certainty, a job that can easily be done with a risk model that reduces SME bias.

Finally, Norm “would like to see the term “GRC” disappear”

AMEN.  To use the ISACA/Audit terminology, Compliance is just “a risk.”  To use risk terminology, Compliance is a factor that contributes to secondary or indirect losses.

So, I’m with you – I’d like to see GRC taken out behind the shed.  Where I differ: it’s not because compliance becomes coupled with risk management, but because, for me, compliance aligns better with the authoritarian world of audit than with a discipline like risk, whose goal is to reduce subjectivity, or a discipline like governance, whose role is to optimize resource expenditures.

Particularly NewSchool Job Posting

From Keith Weinbaum, Director of Information Security of Quicken Loans Inc.

https://www.quickenloanscareers.com/web/ApplyNow.aspx?ReqID=53545

From the job posting:

WARNING:  If you believe in implementing security only for the sake of security or only for the sake of checking a box, then this is not the job for you.  ALSO, if your primary method of justifying security solutions is to sell FUD to decision makers, then we STRONGLY suggest that you close this page right now as it’s POSSIBLE that reading this job posting will infect your computer with a worm, virus, trojan, nasty bacterium, and/or bovine spongiform encephalopathy OH MY!!!  In fact, you should just stop using the scary interwebs all together!

Kudos, Keith.  You’ve made the Alex Hutton Personal Hall of Fame with this one.


The One Where David Lacey's Article On Risk Makes Us All Stupider

In possibly the worst article on risk assessment I’ve seen in a while, David Lacey of Computerworld gives us the “Six Myths of Risk Assessment.”  This article is so patently bad, so heinously wrong, that it stuck in my craw enough to write this blog post.  So let’s discuss why Mr. Lacey has no clue what he’s writing about, shall we?

First Mr. Lacey writes:

1. Risk assessment is objective and repeatable

It is neither. Assessments are made by human beings on incomplete information with varying degrees of knowledge, bias and opinion. Groupthink will distort attempts to even this out through group sessions. Attitudes, priorities and awareness of risks also change over time, as do the threats themselves. So be suspicious of any assessments that appear to be consistent, as this might mask a lack of effort, challenge or review.

Sounds reasonable, no?  Except it’s not altogether true.  Yes, if you’re doing an idiotic RCSA of Inherent – Control = Residual, it probably is as he describes, but those assessments aren’t the totality of the current state of practice.

“Objective” is such a loaded word.  If you use it with me, I’m going to wonder if you know what you’re talking about.  Objectivity/subjectivity is a spectrum, not a binary, so for him to say that risk assessment isn’t “objective” is an “of course!”  Just as there is no “secure,” there is no “objective.”

But Lacey’s misunderstanding of the term aside, let’s address the real question: “Can we deal with the subjectivity in assessment?”  The answer is a resounding “yes” if your model formalizes the factors that create risk and logically represents how they combine to create something with units of measure.  And not only will the right model and methods handle the subjectivity to a degree that is acceptable, you can know that you’ve arrived at something usable when assessment results become “blindly repeatable.”  And yes, Virginia, there are risk analysis methods that create consistently repeatable results for information security.
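For the skeptical, here’s a minimal sketch of what “formalized factors combining into units of measure” can look like.  The decomposition is FAIR-flavored, and the numbers are purely illustrative; the point is that two analysts feeding the same inputs into the same structure get the same answer every time.

```python
# Minimal sketch of a formalized risk model: named factors, explicit
# units, and a defined way they combine. Values are illustrative only.
from dataclasses import dataclass

@dataclass
class Scenario:
    threat_event_frequency: float  # attempts per year
    vulnerability: float           # P(attempt becomes a loss event), 0..1
    loss_magnitude: float          # dollars per loss event

    def loss_event_frequency(self) -> float:
        # events/year = attempts/year * P(success)
        return self.threat_event_frequency * self.vulnerability

    def annualized_loss_exposure(self) -> float:
        # $/year = events/year * $/event -- the units compose logically
        return self.loss_event_frequency() * self.loss_magnitude

s = Scenario(threat_event_frequency=12, vulnerability=0.25,
             loss_magnitude=80_000)
print(f"LEF: {s.loss_event_frequency():.1f} events/yr, "
      f"ALE: ${s.annualized_loss_exposure():,.0f}/yr")
```

Hand those three inputs to any analyst and the output is the same, which is what “blindly repeatable” means in practice; the subjectivity lives in the input estimates, where it can be argued about explicitly.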

2. Security controls should be determined by a risk assessment

Not quite. A consideration of risks helps, but all decisions should be based on the richest set of information available, not just on the output of a risk assessment, which is essentially a highly crude reduction of a complex situation to a handful of sentences and a few numbers plucked out of the air. Risk assessment is a decision support aid, not a decision making tool. It helps you to justify your recommendations.

So the key here is “richest set of information available” – if your risk analysis leaves out key or “rich” information, it’s pretty much crap.  Your model doesn’t fit, your hypothesis is false, start over.  If you think this is a trivial thing for him to misunderstand, I’ll offer that it’s kind of the foundation of modern science.  And mind you, this guy was supposedly a big deal with BS7799.  Really.

4. Risk assessment prevents you spending too much money on security

Not in practice. Aside from one or two areas in the military field where ridiculous amounts of money were spent on unnecessary high end solutions (and they always followed a risk assessment), I’ve never encountered an information system that had too much security. In fact the only area I’ve seen excessive spending on security is on the risk assessment itself. Good security professionals have a natural instinct on where to spend the money. Non-professionals lack the knowledge to conduct an effective risk assessment.

This “myth” basically made me physically ill.  This statement “I’ve never encountered an information system that had too much security” made me laugh so hard I keeled over and hurt my knee in the process by slamming it on the little wheel thing on my chair.

Obviously Mr. Lacey never worked for one of my previous employers that forced 7 or so (known) endpoint security applications on every Windows laptop.  Of course you can have too much !@#%ing security!  It happens all the !@#%ing time.  We overspend where frequency and impact ( <- hey, risk!) don’t justify the spend.  If I had a nickel for every time I saw this in practice, I’d be a 1%er.

More to the point, this phrase (never too much security) makes several assumptions about security that are patently false.  Let me focus on this one: it implies that threats are randomly motivated.  You see, if a threat has targeted motivation (like IP or $), then it doesn’t care about systems that offer no value in data or in privilege escalation.  Thus, you can spend too much on protecting assets that offer little or no value to a threat agent.
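A quick sketch of that last point (the function and numbers are hypothetical): if attack frequency tracks what the asset is worth to the attacker, then control spend on a worthless asset buys exactly nothing.

```python
# Hypothetical model: targeted attackers show up in proportion to what
# an asset is worth to THEM; our loss if breached is a separate input.
def expected_annual_loss(value_to_attacker, our_loss_if_breached,
                         control_spend):
    attack_rate = 2.0 if value_to_attacker > 0 else 0.0   # attempts/year
    p_success = max(0.05, 1.0 - control_spend / 100_000)  # crude control curve
    return attack_rate * p_success * our_loss_if_breached

# $50k of controls on an asset no targeted attacker wants buys nothing:
print(expected_annual_loss(0, 20_000, 0))       # 0.0 before the spend
print(expected_annual_loss(0, 20_000, 50_000))  # 0.0 after the spend
```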

5. Risk assessment encourages enterprises to implement security

No, it generally operates the other way around. Risk assessment means not having to do security. You just decide that the risk is low and acceptable. This enables organisations to ignore security risks and still pass a compliance audit. Smart companies (like investment banks) can exploit this phenomenon to operate outside prudent limits.

I honestly have no idea what he’s saying here.  Seriously, this makes no sense.  Let me explain.  Risk assessment outcomes are neutral states of knowledge.  They may feed a state-of-wisdom decision around budget, compliance, and acceptance (or treatment or transfer), but that decision is a logically separate task.

Dealing with the risk is a totally separate decision process, and if he cannot recognize that it is a separate modeling construct, these statements should be highly alarming to the reader.  They scream “THIS MAN IS AUTHORIZED BY A MAJOR MEDIA OUTLET TO SPEAK AS AN SME ON RISK AND HE IS VERY, VERY CONFUSED!!!!”

Then there’s that bit at the end where he calls companies that handle this process illogically “smart.”  Deviously clever, I’ll give you, but not smart.
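To make the assessment-versus-decision separation concrete, here’s a minimal sketch (the function names and tolerance value are hypothetical): assessment produces a neutral state of knowledge, and treatment is a distinct decision made against a tolerance.

```python
# Sketch: assessment (knowledge) and treatment (decision) as logically
# separate steps. Names and the tolerance threshold are hypothetical.
def assess(frequency_per_year: float, magnitude: float) -> float:
    """Neutral state of knowledge: expected annual loss in dollars."""
    return frequency_per_year * magnitude

def decide(ale: float, tolerance: float) -> str:
    """Separate decision step: compare knowledge to risk appetite."""
    return "accept" if ale <= tolerance else "treat"

ale = assess(frequency_per_year=0.5, magnitude=120_000)
print(decide(ale, tolerance=50_000))  # -> "treat"
```

Nothing in assess() knows or cares what decide() will do with the answer; that is exactly the separation Lacey’s “myth” collapses.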

6. We should aspire to build a “risk culture” across our enterprises

Whatever that means it sounds sinister to me. Any culture built on fear is an unhealthy one. Risks are part of the territory of everyday business. Managers should be encouraged to take risks within safe limits set by their management.

So by the time I got to this “myth,” my mind was buzzing with anger.  But then Mr. Lacey tops us off with this beauty.  This statement is so contradictory to his previous “myth” assertions, so bizarrely out of line with his own closing sentence in any deductive sense, that one has to wonder whether David Lacey isn’t actually an information security surrealist or post-modernist, one who rejects reason, logic, and formality outright in favor of random, disconnected, and downright silly approaches to risk and security management.  Because that’s the only way this statement could possibly make sense.  And I’m not arguing “pro” or “con” for risk culture here; I’m just asking how his mind could conceptually balance the claim that an “enterprise risk culture” sounds sinister against “Managers should be encouraged to take risks within safe limits set by their management,” and even “I’ve never encountered an information system that had too much security.”

(Mind blown.  Throws hands in the air, screams AAAAAAAAAAAAAAAAAHHHHHHHHHHHHHH at the top of his lungs, and runs down the hall at work as if on fire.)

See?  Surrealism is the only possible explanation.

Of course, if he were an information security surrealist, that might explain BS7799.


Some Thoughts on Binary Risk Assessment

Ben Sapiro showed off his Binary Risk Assessment (BRA) at SecTor recently.   While I didn’t see the presentation, I’ve taken some time and reviewed the slides and read through the documentation.  I thought I’d quickly give my thoughts on this:

It’s awesome and it sucks.

IT’S AWESOME

That’s not damning with faint praise; rather, it’s acknowledging that it’s not really “risk,” but it is a useful tool if your goal is to be quick and dirty about vulnerability severity.

In other words, this is much better than CVSS, and should probably replace it immediately.

TILTING AT THE WRONG WINDMILLS

In fact, it’s a shame that Ben chose to compare this to OCTAVE, FAIR, SOMAP, and others, because if he had positioned it as “stop screwing around with CVSS” and “not really risk, but a vuln rating,” I would be telling everyone how much I like it in that role.

In addition, if he positioned it with the Accounting/Audit Industrial Complex as a good tool in the toolbox to compete with^H^H^H^H^H^H  augment their RCSA nonsense, I could probably welcome it there as well (though not as an optimal solution).

The power of BRA is the fact that Ben chose to make things “binary”.  I can see this simple approach working well because it doesn’t allow you granularity – none of this arguing over “Moderate” or “Moderate-High” – just yes/no.

Also, Ben’s done a really good job thinking through what creates risk.  For the FAIR-familiar, there are the concepts of TCap and Control/Resistance Strength.  I like that.
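To illustrate the binary idea, here’s a sketch; the questions are my own invented stand-ins, NOT Ben’s actual BRA question set.  Because every input is a yes/no, two assessors can only disagree about concrete facts, never about where “Moderate” ends and “Moderate-High” begins.

```python
# Illustrative binary-style scoring. The questions are hypothetical
# stand-ins, not Ben Sapiro's actual BRA question set.
LIKELIHOOD_QUESTIONS = [
    "Can the attack be completed with common skills?",
    "Can the attack be completed without significant resources?",
    "Is the weakness reachable from outside the trusted network?",
]
IMPACT_QUESTIONS = [
    "Does success expose regulated or customer data?",
    "Would recovery require more than a day of effort?",
]

def score(answers: dict, questions: list) -> int:
    # Each "yes" contributes one point; no partial credit, no haggling.
    return sum(1 for q in questions if answers.get(q, False))

answers = {q: True for q in LIKELIHOOD_QUESTIONS + IMPACT_QUESTIONS}
answers[IMPACT_QUESTIONS[1]] = False
print("likelihood:", score(answers, LIKELIHOOD_QUESTIONS), "/ 3")
print("impact:", score(answers, IMPACT_QUESTIONS), "/ 2")
```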

IT SUCKS

Speaking of subjectivity: I believe Ben uses the power of binary choice to suggest that BRA “highlights” subjectivity.  Not to be a rude pedant, but it doesn’t “highlight” subjectivity so much as it simply doesn’t give you many choices about where to “put” that subjective measurement.  Everything about it is still subjective (but that’s OK), and to reduce that subjectivity (or, as I’d rather say, to “address it properly”) would take more complexity than I believe Ben wanted to build (again, that’s OK).

So, at the end of the day, Ben’s right: it’s never going to be a replacement for what he calls “complex” analysis methodologies.  And because it doesn’t properly address subjectivity, BRA is not for formal risk or threat modeling.  I could never use it in my current capacity; BRA just leaves a few too many questions unanswered.  I don’t have time for the arguing, I just want your SME estimate, so I can throw that puppy into OpenPERT and be done with it.
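For the curious, here’s a minimal sketch of that SME-estimate workflow: turn a three-point (min / most likely / max) estimate into a betaPERT distribution, the same idea OpenPERT packages as an Excel add-in.  The loss figures are illustrative.

```python
# Minimal betaPERT sketch: three-point SME estimate in, samples out.
# lam=4 is the conventional PERT weighting on the "most likely" value.
import random

def beta_pert(low: float, mode: float, high: float, lam: float = 4.0) -> float:
    """Draw one sample from a betaPERT built on a three-point estimate."""
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

# The SME says: losses land between $10k and $400k, most likely $50k.
samples = [beta_pert(10_000, 50_000, 400_000) for _ in range(50_000)]
print(f"simulated mean loss: ${sum(samples) / len(samples):,.0f}")
```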

Furthermore, it’s odd: even though BRA suggests that it is designed to “not ask anyone to guess on event frequency in the absence of statistical data (whatever that is),” it seems Ben’s intellectual honesty still could not let him escape the need to include it.  If you look at BRA the model, that occurrence thing there?  Yeah, that’s frequency.  It’s just a “binary” frequency determination, which means…

BRA only talks about what’s possible.  

As a risk model, this is the point at which we reference the Tacoma Narrows suspension bridge that oscillated wildly in the wind.  Constructed nicely and all, but a small fundamental flaw in design rendered it crazy bad for its purpose.

Also, impact is difficult for me to buy into because it uses asset value.  I hate to break it to you, but asset value mainly matters to threat motivation modeling.  The accounting value of the asset is RARELY the same as the losses we actually realize.

CHOOSE IT OR CHUCK IT?

It wouldn’t be a review without such criticism.  This is one reason I hate reviewing things: reviewing is an inherently critical process.  So please note that the above isn’t said with malice; it’s just an examination of the model itself.

In fact, as a tool, I wouldn’t dismiss it just yet.  If your security group isn’t formally into risk and is stuck doing too much with CVSS for too little return, I’d jump all over this.  If you have bigger fish to fry than an enterprise risk assessment but have the regulatory duty to create a risk register, BRA might just be the thing.  If you find yourself faced with an absurd RCSA from audit or something, I might whip out the sweet BRA iPad app and run a scenario or two through.  If I actually wanted a risk analysis, however, I would go elsewhere.
