
The lumbering ogre of Enterprise Governance is no replacement for real Quality Management.

Gideon Rasmussen, CISSP, CISA, CISM, CIPP, writes in his latest blog post (http://www.gideonrasmussen.com/article-22.html) about the BP oil spill and operational risk, and the damage the spill is causing BP.  Ignoring the hindsight bias of the article here…

“This oil spill is a classic example of a black swan (events with the potential for severe impact to business and a low rate of occurrence)[vi].”

No.  No it’s not.  A Black Swan is something for which our prior distributions are completely uninformative.  In this case there was plenty of prior information about Deepwater, both from a “macro-analytical” standpoint (frequency & impact of oil well accidents) and from a “micro-analytical” standpoint (there were plenty of warnings about mis-management leading right up to the spill).
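To put that in more concrete terms, here's a minimal sketch (with invented numbers, not real incident data) of the difference between the truly uninformative prior a genuine Black Swan requires and the informative prior we actually had about well accidents:

```python
# A minimal sketch (illustrative numbers, not real incident data) of why
# Deepwater doesn't qualify as a Black Swan: our priors were far from
# uninformative. A Beta-Binomial update shows how historical well-accident
# frequency data produces an informative prior over incident rates.

# Uninformative prior: Beta(1, 1) -- every incident rate equally plausible.
a, b = 1.0, 1.0

# Hypothetical historical record: 12 serious incidents in 1,000 well-years.
incidents, well_years = 12, 1000

# Posterior after observing the record: Beta(a + incidents, b + non-incidents).
a_post = a + incidents
b_post = b + (well_years - incidents)

posterior_mean = a_post / (a_post + b_post)
print(f"Posterior mean incident rate: {posterior_mean:.4f} per well-year")
# With this much prior data, a severe incident is a *tail event* we can
# reason about -- not an event our prior distribution says nothing about.
```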

Now some of you readers will be thinking “there goes Alex again, waging war against Taleb’s stupid mischaracterization of ‘black swan'” and yes, Gideon is using “black swan” when he means “tail event” – I don’t blame him for that, it’s a common error perpetuated by that awful book.  Bear with me….

That’s not my point today.  What is important is this:

We (the risk & data analysts of the world) need to be really careful about how we're communicating to management.  Saying that Deepwater was a "Black Swan," or more properly a "tail event," can allow someone to think that they just got "unlucky."  This is crap.  BP did not get unlucky; they got cheap, lazy, and sloppy.  And not just at the well, either.  If (and this is just an "if") upper management's tolerance for risk was NOT reflected by the singular judgement calls made to circumvent appropriate safety controls, then upper management suffered what some would call a "governance" problem (I use the term very begrudgingly here – more on that in a bit), and a significant one at that.

And since rant mode is on, let me explain that this is one thing that bugs me about IT or Op risk assessments – the impact of organizational behavior is rarely taken into account.  Take, for example, R = T x V x I (please?).  "V" is not just the weakness in the system we see; it is a cocktail of operational skills, resources, management (don't make me say governance here, please), and yes, even "luck".
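To make that concrete, here's a toy sketch of R = T x V x I with "V" treated as that cocktail rather than as a single observed weakness.  Every factor name and weight below is my own illustrative assumption, not any standard model:

```python
# A toy sketch of the point above: in R = T x V x I, "V" is not a single
# observed weakness but a cocktail of organizational factors. All names and
# weights here are illustrative assumptions, not a standard model.

def vulnerability(technical_weakness, op_skill, resources, mgmt_quality, luck=1.0):
    """Composite V: technical exposure amplified or damped by org behavior.

    Each factor is a 0..1 score; low skill/resources/management inflate V.
    """
    organizational_drag = (1 - op_skill) + (1 - resources) + (1 - mgmt_quality)
    return min(1.0, technical_weakness * (1 + organizational_drag) * luck)

threat, impact = 0.3, 0.9  # illustrative annualized threat level and impact

# Same technical weakness, two very different organizations:
v_disciplined = vulnerability(0.2, op_skill=0.9, resources=0.8, mgmt_quality=0.9)
v_sloppy      = vulnerability(0.2, op_skill=0.4, resources=0.3, mgmt_quality=0.2)

print(f"R (disciplined org): {threat * v_disciplined * impact:.3f}")
print(f"R (sloppy org):      {threat * v_sloppy * impact:.3f}")
# The gap between the two is entirely organizational behavior -- the part
# most IT/Op risk assessments never model.
```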

So the lesson here might just be that risk communication (and before you go there, IMHO COSO is self-defeating – see below) is a significant part of the overall risk analysis effort.  We security people focus on "upwards" communication of risk – trying to educate C-levels about the dangers they face.  But I'd bet that an organization that is incapable of communicating risk tolerance effectively from the top down is likely to have more problems than one that can.  There can be a time-lapse problem (Jaynesian entropy, if I can use that term) between the operational happenings (what's going on at the well) and the ability of those ultimately accountable (sr. mgmt) to detect, respond to, and prevent risk issues.

Even worse?  We’re keen on adding more bureaucracy to solve the communications problem in the name of “recognizing” and “managing” risk (GRC, ERM councils, Legal departments, bleh).  But in an organization the size of BP, a “GRC Dashboard” just isn’t going to solve the “micro-analytical” problems faced at Deepwater (assuming that BP executive management would have had a lower tolerance for probable incidents than the decision makers at the well).

The lumbering ogre of Enterprise Governance is no replacement for real quality management.

One can only imagine if BP had an Operational Risk Program like the ones our standards and consultants tell us we should be operating.  What are the chances that the problems at the well would have been politically covered up, or been part of a 24-month "Enterprise Risk Assessment", with Deepwater's issues being one of (hundreds of?) thousands of individual risk issues documented very nicely and expensively, but never effectively communicated to the board?

There has GOT to be a better way.

Measurement Theory & Risk Posts You Should Read

These came across the SIRA mailing list. They were so good, I had to share:

https://eight2late.wordpress.com/2009/07/01/cox%E2%80%99s-risk-matrix-theorem-and-its-implications-for-project-risk-management/

http://eight2late.wordpress.com/2009/12/18/visualising-content-and-context-using-issue-maps-an-example-based-on-a-discussion-of-coxs-risk-matrix-theorem/

http://eight2late.wordpress.com/2009/10/06/on-the-limitations-of-scoring-methods-for-risk-analysis/

Thanks to Kevin Riggins for finding them and pointing them out.

ISACA CRISC – A Faith-Based Initiative? Or, I Didn't Expect The Spanish Inquisition

In comments to my “Why I Don’t Like CRISC” article, Oliver writes:

"CobIT allows to segregate what is called IT in analysable parts. Different Risk models apply to those parts. e.g. Information Security, Architecture, Project management. In certain areas the risk models are more mature (Infosec / Project Management) and in certain they are not (software distribution). That is for the risk modelling (sic) part."
Oliver:  I’m very glad that others in our industry are preaching the concept of  model selection & fit.  And because you’ve demonstrated that at least you believe this is an important aspect of IRM, I’m ready to believe what you’re saying there.  But before I do so, I spent a good deal of time in Missouri, so I need you to show me:
  1. Define "mature" – what makes a mature information risk model?  In fact, show me the industry standards for gauging model maturity, so that I can examine different models similarly.
  2. Show me, oh please show me, an information risk model that has even been tested (publicly) for repeatability and accuracy, much less been shown to provide repeatability and accuracy to a measurable degree of confidence.
Now my thought is that you can’t have a mature risk model without having a measurable notion of repeatability (two analysts with the same data and same model go into separate rooms and come out with reasonably similar results) and accuracy (model outcomes have been tested to be correct some degree of the time).  Maybe I’m not subscribing to the right scientific journals out there, but I’ve yet to see the data sets and the published models or model maturity tests for IRM.
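Just so we're clear about the bar I'm setting, here's that repeatability test sketched in a few lines of Python, with hypothetical numbers – the kind of study I'd expect a "mature" model to have published:

```python
# A minimal sketch (hypothetical data) of the repeatability test described
# above: two analysts score the same ten scenarios with the same model, and
# we check how far apart they land. No IRM standard I know of publishes
# even this much.

scenarios = [f"scenario-{i}" for i in range(10)]

# Hypothetical annualized-loss-exposure estimates from two analysts, same
# data, same model, separate rooms:
analyst_a = [120, 45, 300, 15, 80, 220, 60, 10, 150, 95]   # $K
analyst_b = [100, 70, 410, 20, 60, 180, 95, 35, 110, 140]  # $K

# Crude repeatability metric: mean relative disagreement across scenarios.
rel_diff = [abs(a - b) / max(a, b) for a, b in zip(analyst_a, analyst_b)]
mean_disagreement = sum(rel_diff) / len(rel_diff)
print(f"Mean relative disagreement: {mean_disagreement:.0%}")

# A "mature" model would come with a published threshold (say, <20%) and
# evidence that trained analysts routinely meet it. Accuracy would need a
# second test: comparing model outputs against realized losses over time.
```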
"For risk identification and KRIs [note to readers: I'm assuming Oliver means Key Risk Indicator – a useful but loaded phrase itself], an internal control framework which is based on cobit allows an adequate and comprehensive net of indicators for risk assessment based on operational performance."
Your assertion is that COBIT is proven to be an "adequate" and "comprehensive" internal control framework.  Can you show me evidence of this?  What documentation has ISACA released?  How was it proven?  Where's the study?  How did they seek to falsify COBIT's adequacy and comprehensiveness?  How was comprehensiveness measured?  At what point was it shown that more COBIT effort passes decidedly into the realm of diminishing returns?
"If you think that 'some things can't be measured' will prove your thesis, you don't know Risk Management at all."
I never said that, and given that I've taught courses based on Hubbard's "How To Measure Anything" to risk analysts, I'm going to offer that you don't know me well enough to come to any conclusion about my knowledge of Information Risk Management.
What I'm saying is that ISACA, COBIT, and RiskIT aren't mature enough to certify practitioners in a meaningful manner – where "maturity" is the ability to consistently, repeatably, and accurately show a change in risk using ISACA's own documentation.  If you can't show me how COBIT measurably (again, where the concept of measurement requires known accuracy and repeatability – just drilling the point home here) modifies exposure to risk or the capability to manage risk, I don't think ISACA is ready to say that we, as an industry, are more than isolated alchemists trying to find our own individual ways to turn lead into gold.  To carry the analogy: the attestation that CRISC would provide has nothing to do with knowledge of chemistry, but everything to do with the alchemist's ability to repeat a known means of trying to turn lead into gold.
"There is no mathematical voodoo to model a risk exposure which is 100% correct."
We're in agreement about modeling risk exposure.  To paraphrase Jaynes (poorly), probabilistic models are hypotheses, and therefore we should expect (hope!) them to be frequently falsified.  In addition – just to complete the picture for you, Oliver – I'm also on record as stating that arriving at a state of knowledge about the capability to manage risk is similarly difficult (and this is the whole crux of the COBIT/RiskIT/CRISC request for proof – understanding capability in a measurable way is a key dependency of understanding exposure, and therefore ISACA is silly for trying to certify that someone can discuss exposure if they can't even show me how COBIT reduces risk).
"You have to keep the purpose in mind and also use professional judgment based on your experience (which CRISC by the way tries to attestate)."
Fascinating – so CRISC tries to provide clear evidence that an individual's experience and professional judgment are of some quality?  My whole point in this series is that any individual with experience in information risk management should know enough to know that a certification around information risk analysis and management is goofy.  As for documenting an individual's professional judgment skills, I'd love to see how the test does that in a rational manner.
"You fight against an attestation which takes into full consideration your own challenge."
Nope.  Not even close.  You have no CLUE what I stand for.  I’m all for good attestation.  As I said the other day:
(…I'd argue that IRM shouldn't be part of an MIS course load, rather its own track with heavier influences from probability theory, history of science, complexity theory, economics, and epidemiology than, say, Engineering, Computer Science or MIS.)
My position is that, given the difficult nature of risk analysis (as I'm saying above), there's no way CRISC can attest to any competency around Information Risk Analysis; and if ISACA can't show me how COBIT changes exposure or capability in a measurable way, then CRISC can't possibly attest to competency around Information Risk Management, either.  Maybe it can serve as a RiskIT test – sure, and I'm fine with that.  From the same blog post as my quote above:
IRM is not (just one) “process”. Now obviously certain risk management standards (document a simple) process. In my opinion, most risk management standards are nothing BUT a re-iteration of a Plan/Do/Check/Act process. And just to be clear, I have no problems if you want to go get certified in FAIR or OCTAVE or Blahdity-Blah – I’m all for that.  That shows that you’ve studied a document and can regurgitate the contents of that document, presumably on demand, and within the specific subjective perspective of those who taught you.
And similarly, if ISACA wants to "certify" that someone can take their RiskIT document and be a domain expert at it, groovy.  Just don't call that person "Certified in Risk and Information Systems Control™" because they're not.  They're "Certified in our expanded P/D/C/A cycle that is yet another myopic way to list a bajillion risk scenarios in a manner you can't possibly address before the Sun exhausts its supply of helium."™
I’ll state it again, if they want to change the certification’s title and meaning to simply state that an individual can do the above for RiskIT – have a day, good on you. Just don’t expect me to believe that this certification means that the individual knows anything about information risk analysis, or risk analysis in general.

CRISC -O

PREFACE:  You might interpret this blog post as being negative about risk management, dear readers.  Don't.  This isn't a diatribe against IRM, only an argument for why "certification" around information risk is a really, really silly idea.
Apparently, my blog post about why I don't like the idea of CRISC has long-term stickiness.  Just today, Philip writes in the comments:
"Lets be PROACTIVE instead of critical. I would love to hear about what CAN be a better job practice and skill set that is needed. I am working on both the commercial and Department of Defense and develop programs for training and coaching the skills from MBA to IT Audit and all of technical security for our Certification of Information Assurance Workforce and conduct all the CISM/CISA training and review courses for ISACA in both commercial and military environments. I have worked on Risk Management for years at ERM as well as IT Security/Risk, and A common theme in all of this is RISK MANAGEMENT. When I discuss the Value of IT with MBA students or discuss CMMI with MIS students or development houses, or discuss why ITIL/Cobit or other discuss with business managers what will keep them from reaching their goals and objectives, it is ALL risk management put into a different taxonomy that that particular audience can understand.
"I have not been impressed with the current Risk Management certifications that are available. I did participate in the job task analysis of ISACA (which is a VERY positive thing about how ISACA keeps their certifications) more aligned to practice. It is also not perfect, but I think it is a start. If we contribute instead of just complain, it can get better, or we can create something better. What can be better?
"So Alex I welcome a personal dialog with you or others on what and how we can do it better. I can host a web conference and invite all who want to participate (upto 100 attendee capacity)."
I'll take you up on that offer, Philip.  Unfortunately, it's going to be a very short Webex, because the answer is simple: "you can't do risk certification better, because you shouldn't be doing it in the first place."
That was kind of the point of my blog posts.
Just to be clear:
In IT I’m sort of seeing 2 types of certifications:
  1. Process based certifications (I can admin a checkpoint firewall, or active directory or what not)
  2. Domain knowledge based certifications (CISA, CISM)
The problems with a risk management certification are legion.  But to highlight a few in the context of Certifying individuals:
A).  Information Risk Management is not an "applied" practice of two domains.  CISM, CISA, and similar certs are mainly "you know how to do X – now apply it to InfoSec."  IRM, done with more than a casual hand wave towards following a process because you have to, is much more complex than these, requiring more than just mashing up, say, "management" and "security", or "auditing" and "security".
(In fact, I'd argue that IRM shouldn't be part of an MIS course load, rather its own track with heavier influences from probability theory, history of science, complexity theory, economics, and epidemiology than, say, Engineering, Computer Science or MIS.)
B).  IRM is not a “process”. Now obviously certain risk management standards are a process. In my opinion, most risk management standards are nothing BUT a re-iteration of a Plan/Do/Check/Act process. And just to be clear, I have no problems if you want to go get certified in FAIR or OCTAVE or Blahdity-Blah – I’m all for that.  That shows that you’ve studied a document and can regurgitate the contents of that document, presumably on demand, and within the specific subjective perspective of those who taught you.
And similarly, if ISACA wants to "certify" that someone can take their RiskIT document and be a domain expert at it, groovy.  Just don't call that person "Certified in Risk and Information Systems Control™" because they're not.  They're "Certified in our expanded P/D/C/A cycle that is yet another myopic way to list a bajillion risk scenarios in a manner you can't possibly address before the Sun exhausts its supply of helium."™
RE-ITERATING THE POINT
Look, as my challenge to quantify the impact of risk reduction of a COBIT program suggests, IRM is more than these standards.
And I gotta be clear here: you've hit a pet peeve of mine, the whole "Let's be PROACTIVE" thing.  First, criticism and disproof are part of the natural evolution of ideas.  To act like they aren't is kinda bogus.  And like I said above, you're assuming that there is something we should be doing about individual certification instead of CRISC – but THERE ISN'T ANY ALTERNATIVE, AND THERE SHOULDN'T BE.  You're saying, "let's verify people can ride their Unicorns properly into Chernobyl" and assuming I'm saying, you know, "maybe we shouldn't ride Unicorns".  I'm not.  I'm saying "we shouldn't go to Chernobyl regardless of the means of transportation".
And in terms of what we CAN do, well in my eyes – that’s SOIRA.  Now don’t get me wrong, as best as I understood Jay’s vision, it’s not a specific destination, it’s just a destination that isn’t Chernobyl.  I don’t know where it is going yet Phil, but I’m optimistic that Kevin, Jay, John, and Chris are pretty capable of figuring it out, and doing so because of passion, not because they want to sell more memberships, course materials, or certifications.  Either way, I’m just along for the ride, interested in driving when others get tired and playing a few mix tapes along the way.

Why I'm Skeptical of "Due Diligence" Based Security

Some time back, a friend of mine said “Alex, I like the concept of Risk Management, but it’s a little like the United Nations – Good in concept, horrible in execution”.

Recently, a couple of folks have been talking about how security should just be a "diligence" function; that is, we should just prove that we're making best efforts, and that should be enough.  Now conceptually, I love the idea that we could prove our "compliance" or diligence and get a get-out-of-jail-free card when an incident happens.  I always think it's lame when good CISOs get canned because they got "unlucky".

Unfortunately, if risk management is merely hard to execute, I've been thinking that the concept of Due Diligence Security is complete fantasy.  To carry the analogy: if Risk Management is the United Nations, then Due Diligence Security is the Justice League of Superfriends.  With He-Man.  And the animated Beatles from Yellow Submarine.  Who live in the forest with the Keebler elves and the Ewoks, where the glowing ghosts of Anakin, Obi-Wan and Yoda perform the "Chub-Chub" song with the glowing ghosts of John Lennon and George Harrison.  That sort of fantasy.

DUE DILIGENCE BASED SECURITY IS AN ARGUMENT FROM IGNORANCE

Here's the rub – let's say an incident happens.  Due Diligence only matters when there's a court case, really.  And in most western courts of law these days, there's still this concept of innocent until proven guilty.  In logic, this concept is known as the argument from ignorance, and it is a logical fallacy.

Now, arguments from ignorance are logical fallacies thanks to the epistemological notion of falsification.  Paraphrasing John Stuart Mill paraphrasing Hume: we cannot prove "all swans are white" simply because we've only ever observed white swans – BUT the observation of a single black swan is enough to prove that "not all swans are white".  This matters in a court of law, because your ability to prove Due Diligence as a defendant will be a function of your ability to prove all swans white – all systems compliant.  But the prosecution only has to show a single black swan to prove that you are NOT diligent.

Sir Karl Popper says, “Good luck with that, Mr. CISO”.

IT’S A TRAP!!!

The result is this: the CISO, in my humble opinion, will be in a worse position, because we have a really poor ability to control the expansion of sensitive information throughout the complex systems (network, system, people, organization) for which they are responsible.  Let me put it this way: if information (and specifically, sensitive information) behaves like a gas, automatically expanding into wherever it's not controlled – then how can we possibly hope that the CISO can prevent the "escape" or leakage of information 100% of the time, with no exceptions?  And a solitary exception in a forensic investigation becomes our black swan.
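Here's a back-of-the-envelope way to see the trap, assuming (purely for illustration) that each system a CISO owns independently leaks sensitive information with some small annual probability:

```python
# A back-of-the-envelope sketch of why "prove you controlled every system"
# is a losing game. Assume (hypothetically) each of N systems independently
# leaks sensitive data in a given year with small probability p; the chance
# of at least one exception compounds fast.

def p_at_least_one_leak(p_per_system: float, n_systems: int) -> float:
    """P(one or more leaks) = 1 - P(zero leaks) = 1 - (1 - p)^n."""
    return 1 - (1 - p_per_system) ** n_systems

for n in (100, 1_000, 10_000):
    print(f"{n:>6} systems, p=0.1% each: "
          f"P(at least one exception) = {p_at_least_one_leak(0.001, n):.1%}")

# 100 systems    -> ~9.5%
# 1,000 systems  -> ~63.2%
# 10,000 systems -> ~99.995%
# One exception is all the prosecution needs to call you non-diligent.
```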

And therefore… when it comes to proving Due Diligence in a court of law, security *screws* the CISO.  Big Time.

Is risk management too complicated and subtle for InfoSec?

Luther Martin, blogger with Voltage Security, has advised caution about using risk management methods for information security, saying the approach is "too complicated and subtle" and may lead decision-makers astray.  To back up his point, he uses the example of the Two Envelopes Problem in Bayesian (subjectivist) probability, which can lead to paradoxes.  He then poses an analogous problem in information security, with the claim that probabilistic analysis would show that new security investments are unjustified.  However, Luther made some mistakes in formulating the InfoSec problem, and thus the lessons of the Two Envelopes Problem don't apply.  Either way, a reframing into a "possible worlds" analysis resolves the paradoxes and accurately evaluates the decision alternatives for both problems.  Conclusion: risk management for InfoSec is complicated and subtle, but that only means it should be done with care and with the appropriate tools, methods, and frameworks.  Unsolved research problems remain, but the Two Envelopes Problem and its kin are not among them.
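For the curious, here's a quick simulation – my own illustration, not Luther's formulation – of why the naive "always switch" argument in the Two Envelopes Problem evaporates once you model the full setup:

```python
# A quick simulation sketch (my own illustration, not Luther's formulation)
# of why the Two Envelopes "always switch" argument fails once you model
# the full setup: over actual pairs (x, 2x), switching gains you nothing.

import random

random.seed(42)
trials = 100_000
keep_total = switch_total = 0.0

for _ in range(trials):
    x = random.uniform(1, 100)        # smaller amount; the pair is (x, 2x)
    envelopes = [x, 2 * x]
    mine = random.choice([0, 1])      # pick an envelope at random
    keep_total += envelopes[mine]
    switch_total += envelopes[1 - mine]

print(f"Average if you keep:   {keep_total / trials:.2f}")
print(f"Average if you switch: {switch_total / trials:.2f}")
# Both converge to 1.5 * E[x]. The naive "the other envelope is worth 1.25x
# mine" argument conditions on an amount without any prior over amounts --
# exactly the kind of subtlety that demands care with probabilistic risk
# methods, not their abandonment.
```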


The Eyes of Texas Are on Baseboard Management Controllers? WHAT??!!!

OR, TEXAS HB1830S IS SWINE FLU LEGISLATION: IT'S BEEN INFECTED BY PORK!


UPDATE:  It looks like the "vendor language" around Section Six has been struck!

Given Bejtlich’s recent promises, I thought we’d take a quick but pragmatic look at why risk assessments, even dumb, back-of-the-envelope assessments, might just be a beneficial thing.

As you probably know, the guys here at NewSchool and the guys at sister site EmergentChaos are very interested in the government regulation of cyberspace.  Oh, we also happen to be pretty good with the information risk stuff, too.  So I'm sure you wouldn't be surprised that we spent some time looking over what one of the biggest, most influential states in the Union, Texas (Austin is also one of my favorite places, thanks to my friend Joe Visconti), is doing about legislating information security.  They currently have a bill under consideration, HB1830S.  Highlights here:

http://www.legis.state.tx.us/tlodocs/81R/analysis/html/HB01830S.htm

HB1830S has some pretty good stuff in it.  The kind of legislation that tends to make sense, even if you are a “government hands off” kind of guy like I am.

Section 2 is about background checks and having policies and so forth.  This is wonderful; it addresses about the only control we have against internal threat agents with significant privileges.

Section 3 seems to exempt information security information (like specific vulnerabilities) from the public record.  I'm all for some level of disclosure here (something like the letter grades the federal government releases would be fine), but really, the citizens of the state don't need particulars.

Section 4 talks about what InfoSec information should be confidential and covers vendor relationships.  After working on some state RFPs (not Texas) and watching how a specific requirement for a "Penetration Test" was awarded to someone who, in their response, specifically said that they were only going to perform a "Vulnerability Assessment", I appreciate these sorts of clauses.

Section 5 covers internal state reporting concerns for vuln data, great.

SECTION SIX WHAT THE !@#%^!@@#$* IS THIS???!!!

“Government Code, to require that the biennial operating plan describe the state agency’s current and proposed projects for the biennium, including how the projects will address certain matters, including using, to the fullest extent, technology owned or adapted by other state agencies, including closed loop event management technology that secures, logs, and provides audit management of baseboard management controllers and consoles of cyber assets.”

Let’s parse that and read it again:

“Government Code, to require that the biennial operating plan describe the state agency’s current and proposed projects for the biennium, including how the projects will address certain matters,…”

Looking good, it’s always nice to have a plan.

“…including using, to the fullest extent, technology owned or adapted by other state agencies,…”

Great! I’m all for sharing information among security professionals, that’s pretty much one of the fundamental pillars of the New School.

“…including closed loop event management technology that secures, logs, and provides audit management of baseboard management controllers and consoles of cyber assets.”

Wait, what?

OK, I've heard of closed-loop processing in Business Intelligence (a system is said to perform closed-loop processing if it feeds information back into itself).  I've heard the phrase "closed-loop" in SOA.  But I'm sorry, "closed loop event management technology that secures, logs, and provides audit management of baseboard management controllers" sounds like somebody lifted it from a vendor brochure.

Also, I know that this blog generally attracts some of the best and most forward thinking InfoSec readers/professionals – even if you disagree with us.  But if you need to go look up what a baseboard management controller  (BMC) is and does, to remind yourself, go right ahead.  I had to.

Now read the rest of HB1830S highlights there and put Section Six in context.

Is it just me, or does this seem like someone in Texas is trying to legislate the use of a specific vendor's rather esoteric and specific security control?  I mean, even if BMC security is really important in, say, SCADA systems – is there a reason that the dozens (?) of other agencies should have to waste their money on this?

And why legislate this specific technology?  Shouldn't the agencies' security management be able to do their own risk assessments and prioritize based on the significant threats that, you know, they're ACTUALLY SEEING?  And I'm not asking for Forests of Bayesian Belief Networks to establish risk and vulnerability information via Monte Carlo simulations here; I'm asking for a basic risk-based sanity check to make decisions – decisions based in reality, not fear.  I mean, I ran a quick poll of security pros on Twitter about the BMC, and so far nobody has claimed to have ever seen one piece of exploit code, much less heard of an actual *incident*.  Now I'm sure that the State of Texas does a great job with information security and all, but I'm willing to bet good money that the BMCs of their systems are the least of their security problems.
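To show how low the bar is, here's the sort of dumb, back-of-the-envelope sanity check I mean.  All the frequencies and impacts below are invented placeholders; the point is the ranking exercise, not the numbers:

```python
# The kind of dumb, back-of-the-envelope sanity check argued for above, in
# a few lines. All frequencies and impacts are invented placeholders -- the
# point is the ranking exercise, not the numbers.

scenarios = {
    # name: (estimated incidents/year across agencies, rough impact $K each)
    "lost/stolen laptops":       (40,   50),
    "web app compromise":        (12,  250),
    "insider data misuse":       (5,   400),
    "BMC/console exploitation":  (0.01, 500),  # ~no observed incidents
}

ranked = sorted(scenarios.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (freq, impact) in ranked:
    print(f"{name:<28} expected loss ~ ${freq * impact:,.0f}K/yr")

# Even with generous numbers for the BMC scenario, it lands at the bottom
# of the list -- which is the risk-based case against legislating spend
# on it ahead of the problems agencies are actually seeing.
```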

Bottom line: legislating disclosure, policy, and even ensuring critical processes are in place is a useful endeavor, and the rest of HB1830S does a good job.  But legislating a specific technology is bad for a couple of reasons:

1.)  It removes management's ability to expend resources on the actual problems they have.  You are legislating without the context of risk – without even poorly derived risk statements.

2.)  If it takes an act of legislature to force adoption, it will take a similar or more difficult act of politics to remove that technology when it's outlived its usefulness (and one wonders if BMC-securing technology would EVER be useful except in fringe cases).

Things Are Tough, Don’t Waste Taxpayer Money, Please!

HB1830S could be a good piece of legislation.  Strike the BMC aspect of Section Six and it becomes more than reasonable.  Heck, add "to the fullest extent POSSIBLE" or "to the extent that's REASONABLE" and ask state CISOs to provide threat event metrics for the BMC if you want.  But please, Texas: whatever this vendor is paying you in lobbying perks, it's not worth the waste, the hassle, and the risk of derision from the parts of the information security community that actually happen to be concerned with public safety.
