Tag: risk management

Dear CloudTards: "Securing" The Cloud isn't the problem…

@GeorgeResse pointed out this article http://www.infoworld.com/d/cloud-computing/five-facts-every-cloud-computing-pro-should-know-174 from @DavidLinthicum today.  And from a Cloud advocate’s point of view, I like four of the five assertions.  But his point about Cloud Security is off:

“While many are pushing back on cloud computing due to security concerns, cloud computing is, in fact, as safe as or better than most on-premises systems. You must design your system with security, as well as data and application requirements in mind, then support those requirements with the right technology. You can do that in both public or private clouds, as well as traditional systems.”

In a sense, David is right: the ability to develop a relatively secure computing architecture in a cloud environment may, in theory, be reasonably similar to “traditional” computing.  But there are two things I hate about this paragraph.  First, it seems to reflect the naive notion that systems are deployed secure until a vulnerability happens.  Second, it completely misses the issue facing security management.  The problems facing management re: The Cloud have nothing to do with the ability to architect “secure”.  They have to do with the ability to manage risk.

A Primer About Information Security and Risk Management

Security, at its fundamental core, is not a problem of poor network architecture development or poor software development practices.  Security is a problem of behaviors – those having to do with the interrelation of systems and people.  Managing risk is related, but very different in its nature.  Information risk management is a problem of information quality and decision making around those behaviors.  Information risk management requires:

  • Knowledge about the asset landscape – Data from the studies we do have about data breaches and successful IT operations strongly correlate visibility (even the degree of visibility) and variability in the asset landscape with success and failure in IT and IT security.
  • Knowledge about the threat landscape – types, frequency, strength, capability, and adaptability of the threat agents are among the bits of information required to know and understand risk.
  • Knowledge about the controls landscape – control information is the ability to resist threats, so not just the technical feasibility of resistance, but also the operational (human skills/resources) contributions to that ability to resist.
  • Knowledge about the impact landscape – impact information from pressures within the organization (things like response costs, downtime, and productivity losses) and from outside the organization (compliance fines, legal judgments, the consequences of IP loss, brand damage…).

In addition, there’s knowledge we synthesize when we consider one landscape in the context of another (vulnerability might be said to be the state we develop when we consider threat, asset, and control landscape information; risk is what we understand when we consider the information we have from all four).  In the diagram, it’s where the circles overlap.
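For those who like code better than Venn diagrams, here’s a toy sketch of the synthesis idea (the structure, names, and numbers are all my own invention, not any standard’s): knowledge at an overlap is only as good as the weakest landscape feeding it.

```python
from dataclasses import dataclass, field

@dataclass
class Landscape:
    name: str
    observations: dict = field(default_factory=dict)  # what we actually know
    visibility: float = 0.0  # 0..1: how much of this landscape we can see

assets   = Landscape("assets",   {"db_servers": 40, "unmanaged_hosts": "?"}, 0.7)
threats  = Landscape("threats",  {"agent_types": ["opportunistic", "targeted"]}, 0.3)
controls = Landscape("controls", {"patch_sla_days": 30, "analysts_on_call": 2}, 0.8)
impacts  = Landscape("impacts",  {"downtime_cost_per_hr": 10_000}, 0.5)

def synthesis_quality(*landscapes: Landscape) -> float:
    """Knowledge at an overlap is only as good as the weakest input landscape."""
    return min(ls.visibility for ls in landscapes)

# "Vulnerability" is what we synthesize from threat + asset + control knowledge...
print("vulnerability knowledge:", synthesis_quality(threats, assets, controls))
# ...and "risk" is what we understand when all four landscapes are considered.
print("risk knowledge:", synthesis_quality(threats, assets, controls, impacts))
```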

I’m sorry if this is basic for many of you readers out there, but I thought this content was necessary – because it’s obvious that cloud architect types are obsessing over the ability to build a similar technical environment without understanding the basics of managing risk.

What Really Bugs Security Managers About Cloud Computing

So the issue with moving to “the cloud” for a CISO has to do with two basic things:

  1. Information quality
  2. Responsibility (data ownership in CISSP terms).

For information quality, we are concerned with:

  • A – The ability to get reasonably similar information about the technical behavior of our systems, and
  • B – The ability to get human behavior information from both the threat and controls landscapes.

For Responsibility, we are concerned with:

  • If the information is bad news, who is responsible for what actions?
  • Given threat execution (the bad news isn’t just an attack, it’s a compromise): when a data breach occurs, where will the buck ultimately stop?

For that last bullet, PCI is sort of establishing a “case law” for us already.  The lesson to take away from the experiences of others is this: following the suggestions of CSA documents and Cloud Audit information (excellent, necessary, and useful as that documentation is/will be) isn’t going to be enough to manage risk in the cloud with the same quality as “traditional” architectures for many people.  And it looks like you will be left as “custodians” of the data regardless of who is paying the W-2 of the guy at the SIEM console.  More colloquially, “Crap will continue to run downhill, you’ve just diverted it a ways upstream.”

Takeaways

The objections to cloud adoption from an information risk management standpoint have nothing to do with the ability to engineer “secure”.  They have to do with the ability to manage risk.  There’s a significant difference there that this sort of advocacy continues to gloss over.  Of course, given how nascent information security and information risk management are as disciplines, if you can transfer full responsibility to a cloud provider who is stupid enough to believe things like “we can secure your systems better than you can”, I say go for it!

The lumbering ogre of Enterprise Governance is no replacement for real Quality Management.

Gideon Rasmussen, CISSP, CISA, CISM, CIPP, writes in his latest blog post (http://www.gideonrasmussen.com/article-22.html) about the BP oil spill, operational risk, and the damage the spill is causing BP.  Ignoring the hindsight bias of the article here…

“This oil spill is a classic example of a black swan (events with the potential for severe impact to business and a low rate of occurrence)[vi].”

No.  No it’s not.  A Black Swan is something for which our prior distributions are completely uninformative.  In this case there was plenty of prior information about Deepwater, both from a “macro-analytical” standpoint (the frequency and impact of oil well accidents) and from a “micro-analytical” standpoint (there were plenty of warnings about mismanagement leading right up to the spill).

Now some of you readers will be thinking “there goes Alex again, waging war against Taleb’s stupid mischaracterization of ‘black swan’” and yes, Gideon is using “black swan” when he means “tail event” – I don’t blame him for that; it’s a common error perpetuated by that awful book.  Bear with me….

That’s not my point today.  What is important is this:

We (the risk & data analysts of the world) need to be really careful about how we’re communicating to management.  Saying that Deepwater was a “Black Swan”, or more properly a “tail event”, can allow someone to think that they just got “unlucky”.  This is crap.  BP did not get unlucky; they got cheap, lazy, and sloppy.  And not just at the well, either.  If (and this is just an “if”) upper management’s tolerance for risk was NOT reflected by the singular judgment calls made to circumvent appropriate safety controls, then upper management suffered what some would call a “governance” problem (I use the term very begrudgingly here – more on that in a bit), and a significant one at that.  And since rant mode is on, let me explain that this is one thing that bugs me about IT or Op risk assessments – the impact of organizational behavior is rarely taken into account.  Take, for example, R = T x V x I (please?).  “V” is not just the weakness in the system we see; it is a cocktail of operational skills, resources, management (don’t make me say governance here, please), and yes, even “luck”.
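To make the complaint concrete, here’s a back-of-the-envelope sketch (a deliberately naive model with invented numbers) of how folding organizational behavior into “V” changes the answer:

```python
# Same technical weakness, very different risk, once "V" folds in operational
# skills, resources, and management. All numbers are invented for illustration.
def composite_v(technical_weakness: float, op_skill: float,
                resources: float, mgmt: float) -> float:
    capability = (op_skill + resources + mgmt) / 3.0  # crude blend, 0..1
    return technical_weakness * (1.0 - capability)    # capable ops shrink V

T = 2.0            # expected threat events per year
I = 5_000_000      # impact per successful event, in dollars
tech_weak = 0.6    # raw technical exploitability we can see, 0..1

well_run = composite_v(tech_weak, op_skill=0.9, resources=0.8, mgmt=0.85)
sloppy   = composite_v(tech_weak, op_skill=0.4, resources=0.3, mgmt=0.2)

print(f"R, well run:           ${T * well_run * I:,.0f}/yr")
print(f"R, cheap/lazy/sloppy:  ${T * sloppy * I:,.0f}/yr")
```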

So the lesson here might just be that risk communication (and before you go there: IMHO, COSO is self-defeating – see below) is a significant part of the risk analysis determination.  We security people focus on “upwards” communication of risk – trying to educate C-levels about the dangers they face.  But I’d bet that an organization incapable of communicating risk tolerance effectively from the top down is likely to have more problems than one that can.  There can be a time-lapse problem (Jaynesian entropy, if I can use that term) between the operational happenings (what’s going on at the well) and the ability of those ultimately accountable (sr. mgmt) to detect, respond to, and prevent risk issues.

Even worse?  We’re keen on adding more bureaucracy to solve the communications problem in the name of “recognizing” and “managing” risk (GRC, ERM councils, Legal departments, bleh).  But in an organization the size of BP, a “GRC Dashboard” just isn’t going to solve the “micro-analytical” problems faced at Deepwater (assuming that BP executive management would have had a lower tolerance for probable incidents than the decision makers at the well).

The lumbering ogre of Enterprise Governance is no replacement for real quality management.

One can only imagine if BP had an Operational Risk Program like the ones our standards and consultants tell us we should be operating.  What are the chances that the problems at the well would have been politically covered up, or been part of a 24-month “Enterprise Risk Assessment”, with Deepwater’s issues being one of (hundreds of?) thousands of individual risk issues documented very nicely and expensively, but never effectively communicated to the board?

There has GOT to be a better way.

ISACA CRISC – A Faith-Based Initiative? Or, I Didn't Expect The Spanish Inquisition

In comments to my “Why I Don’t Like CRISC” article, Oliver writes:

“CobIT allows to segregate what is called IT in analysable parts.  Different Risk models apply to those parts. e.g. Information Security, Architecture, Project management. In certain areas the risk models are more mature (Infosec / Project Management) and in certain they are not (software distribution). That is for the risk modelling (sic) part.”
Oliver:  I’m very glad that others in our industry are preaching the concept of model selection & fit.  And because you’ve demonstrated that at least you believe this is an important aspect of IRM, I’m ready to believe what you’re saying there.  But before I do so: I spent a good deal of time in Missouri, so I need you to show me:
  1. Define “mature” – what makes a mature information risk model?  In fact, show me the industry standards for gauging model maturity, so that I can examine different models similarly.
  2. Show me, oh please show me, an information risk model that has even been tested (publicly) for repeatability and accuracy, much less been shown to provide repeatability and accuracy to a measurable degree of confidence.
Now my thought is that you can’t have a mature risk model without having a measurable notion of repeatability (two analysts with the same data and same model go into separate rooms and come out with reasonably similar results) and accuracy (model outcomes have been tested to be correct some degree of the time).  Maybe I’m not subscribing to the right scientific journals out there, but I’ve yet to see the data sets and the published models or model maturity tests for IRM.
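For the record, even the crudest version of that repeatability test is easy to sketch (toy numbers, hypothetical analysts):

```python
# Two analysts, same data, same model, separate rooms -- do they come out
# with similar numbers? (A real test would use many analysts, many scenarios,
# and a published acceptance threshold.)
analyst_a = [3.2, 1.1, 7.5, 0.4, 2.9]  # annualized loss estimates, $MM
analyst_b = [3.0, 1.4, 6.8, 0.9, 3.1]  # same scenarios, second analyst

def mean_abs_pct_diff(xs, ys):
    return sum(abs(x - y) / ((x + y) / 2) for x, y in zip(xs, ys)) / len(xs)

print(f"mean disagreement: {mean_abs_pct_diff(analyst_a, analyst_b):.0%}")
# A "mature" model would publish the acceptable threshold here, plus evidence
# that trained analysts consistently land under it. I've yet to see either.
```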
“For risk identification and KRIs [note to readers: I’m assuming Oliver means Key Risk Indicator – a useful but loaded phrase itself], an internal control framework which is based on cobit allows an adequate and comprehensive net of indicators for risk assessment based on operational performance.”
Your assertion is that COBIT is proven to be an “adequate” and “comprehensive” internal control framework.  Can you show me evidence of this?  What documentation for this has ISACA released?  How was it proven?  Where’s the study?  How did they seek to falsify COBIT’s adequacy and comprehensiveness?  How was comprehensiveness measured?  At what point was it shown that more COBIT effort passes decidedly into the realm of diminishing returns?
“If you think that ‘some things can’t be measured’ will prove your thesis, you don’t know Risk Management at all.”
I never said that, and given that I’ve taught courses based on Hubbard’s “How To Measure Anything” to risk analysts, I’m going to offer that you don’t know me well enough to come to any conclusion about my knowledge of Information Risk Management.
What I’m saying is that ISACA, COBIT, and RiskIT aren’t mature enough to certify practitioners in a meaningful manner – where “maturity” is an ability to consistently, repeatably, and accurately show a change in risk using ISACA’s own documentation.  If you can’t show me how COBIT measurably (again, where the concept of measurement requires known accuracy and repeatability – just drilling the point home, here) modifies exposure to risk or capability to manage risk in these ways, I don’t think ISACA is ready to say that we, as an industry, are more than isolated alchemists trying to find our own, individual ways to turn lead into gold.  To carry the analogy: the attestation that CRISC would provide has nothing to do with knowledge of chemistry, but everything to do with the alchemist’s ability to repeat a known means of trying to turn lead into gold.
“There is no mathematical voodoo to model a risk exposure which is 100% correct.”
We’re in agreement about modeling risk exposure.  To paraphrase Jaynes (poorly): probabilistic models are hypotheses, and therefore we should expect (hope!) for them to be frequently falsified.  In addition – just to complete the picture for you, Oliver – I’m also on record as stating that arriving at a state of knowledge about the capability to manage risk is similarly difficult (and this is the whole crux of the COBIT/RiskIT/CRISC request for proof – understanding capability in a measurable way is a key dependency for understanding exposure, and therefore ISACA is silly for trying to certify that someone can discuss exposure if they can’t even show me how COBIT reduces risk).
“You have to keep the purpose in mind and also use professional judgment based on your experience (which CRISC by the way tries to attestate).”
Fascinating, so CRISC tries to provide clear evidence that an individual’s experience and professional judgment are of some quality?  My whole point in this series is that any individual with experience in information risk management should know enough to know that a certification around Information Risk Analysis and management is goofy.  As for documenting an individual’s professional judgment skills, I’d love to see how the test does that in a rational manner.
“You fight against an attestation which takes into full consideration your own challenge.”
Nope.  Not even close.  You have no CLUE what I stand for.  I’m all for good attestation.  As I said the other day:
(…I’d argue that IRM shouldn’t be part of an MIS course load, but rather its own track, with heavier influences from probability theory, history of science, complexity theory, economics, and epidemiology than, say, Engineering, Computer Science, or MIS.)
My position is that, given the difficult nature of risk analysis (as I’m saying above), there’s no way CRISC can attest to any competency around Information Risk Analysis, and if ISACA can’t show me how COBIT changes exposure or capability in a measurable way, then CRISC can’t possibly even attest to competency around Information Risk Management.  Maybe it can serve as a RiskIT test, sure, and I’m fine with that.  From the same blog post as my quote above:
IRM is not (just one) “process”.  Now obviously certain risk management standards document a (simple) process.  In my opinion, most risk management standards are nothing BUT a re-iteration of a Plan/Do/Check/Act process.  And just to be clear, I have no problems if you want to go get certified in FAIR or OCTAVE or Blahdity-Blah – I’m all for that.  That shows that you’ve studied a document and can regurgitate the contents of that document, presumably on demand, and within the specific subjective perspective of those who taught you.
And similarly, if ISACA wants to “certify” that someone can take their RiskIT document and be a domain expert at it, groovy.  Just don’t call that person “Certified in Risk and Information Systems Control™” because they’re not.  They’re “Certified in our expanded P/D/C/A cycle that is yet another myopic way to list a bajillion risk scenarios in a manner you can’t possibly address before the Sun exhausts its supply of helium.”™
I’ll state it again: if they want to change the certification’s title and meaning to simply state that an individual can do the above for RiskIT – have a day, good on you.  Just don’t expect me to believe that this certification means that the individual knows anything about information risk analysis, or risk analysis in general.

CRISC -O

PREFACE:  You might interpret this blog post as being negative about risk management here, dear readers.  Don’t.  This isn’t a diatribe against IRM, only about why “certification” around information risk is a really, really silly idea.
Apparently, my blog about why I don’t like the idea of CRISC has long-term stickiness.  Just today, Philip writes in the comments:
Lets be PROACTIVE instead of critical. I would love to hear about what CAN be a better job practice and skill set that is needed. I am working on both the commercial and Department of Defense and develop programs for training and coaching the skills from MBA to IT Audit and all of technical security for our Certification of Information Assurance Workforce and conduct all the CISM/CISA training and review courses for ISACA in both commercial and military environments. I have worked on Risk Management for years at ERM as well as IT Security/Risk, and A common theme in all of this is RISK MANAGEMENT. When I discuss the Value of IT with MBA students or discuss CMMI with MIS students or development houses, or discuss why ITIL/Cobit or other discuss with business managers what will keep them from reaching their goals and objectives, it is ALL risk management put into a different taxonomy that that particular audience can understand.
I have not been impressed with the current Risk Management certifications that are available. I did participate in the job task analysis of ISACA (which is a VERY positive thing about how ISACA keeps their certifications) more aligned to practice. It is also not perfect, but I think it is a start. If we contribute instead of just complain, it can get better, or we can create something better. What can be better?
So Alex I welcome a personal dialog with you or others on what and how we can do it better. I can host a web conference and invite all who want to participate (upto 100 attendee capacity).
I’ll take you up on that offer, Philip.  Unfortunately, it’s going to be a very short Webex, because the answer is simple, “you can’t do risk certification better because you shouldn’t be doing it in the first place.”
That was kind of the point of my blog posts.
Just to be clear:
In IT, I’m sort of seeing two types of certifications:
  1. Process-based certifications (I can admin a Check Point firewall, or Active Directory, or whatnot)
  2. Domain-knowledge certifications (CISA, CISM)
The problems with a risk management certification are legion.  But to highlight a few in the context of certifying individuals:
A).  Information Risk Management is not an “applied” practice of two domains.  CISM, CISA, and similar certs are mainly “you know how to do X – now apply it to InfoSec”.  IRM, done with more than a casual hand wave towards following a process because you have to, is much more complex than these, requiring more than just mashing up, say, “management” and “security”, or “auditing” and “security”.
(In fact, I’d argue that IRM shouldn’t be part of an MIS course load, but rather its own track, with heavier influences from probability theory, history of science, complexity theory, economics, and epidemiology than, say, Engineering, Computer Science, or MIS.)
B).  IRM is not a “process”. Now obviously certain risk management standards are a process. In my opinion, most risk management standards are nothing BUT a re-iteration of a Plan/Do/Check/Act process. And just to be clear, I have no problems if you want to go get certified in FAIR or OCTAVE or Blahdity-Blah – I’m all for that.  That shows that you’ve studied a document and can regurgitate the contents of that document, presumably on demand, and within the specific subjective perspective of those who taught you.
And similarly, if ISACA wants to “certify” that someone can take their RiskIT document and be a domain expert at it, groovy.  Just don’t call that person “Certified in Risk and Information Systems Control™” because they’re not.  They’re “Certified in our expanded P/D/C/A cycle that is yet another myopic way to list a bajillion risk scenarios in a manner you can’t possibly address before the Sun exhausts its supply of helium.”™
RE-ITERATING THE POINT
Look, as my challenge to quantify the impact of risk reduction of a COBIT program suggests, IRM is more than these standards.
And I gotta be clear here: you’ve hit a pet peeve of mine, the whole “Let’s be PROACTIVE” thing.  First, criticism and disproof are part of the natural evolution of ideas.  To act like they aren’t is kinda bogus.  And like I said above, you’re assuming that there is something we should be doing about individual certification instead of CRISC – but THERE ISN’T ANY ALTERNATIVE, AND THERE SHOULDN’T BE.  You’re saying, “let’s verify people can ride their Unicorns properly into Chernobyl” and assuming I’m saying, you know, “maybe we shouldn’t ride Unicorns”.  I’m not.  I’m saying “we shouldn’t go to Chernobyl regardless of the means of transportation”.
And in terms of what we CAN do – well, in my eyes, that’s SOIRA.  Now don’t get me wrong: as best as I understood Jay’s vision, it’s not a specific destination, it’s just a destination that isn’t Chernobyl.  I don’t know where it’s going yet, Phil, but I’m optimistic that Kevin, Jay, John, and Chris are pretty capable of figuring it out, and doing so because of passion, not because they want to sell more memberships, course materials, or certifications.  Either way, I’m just along for the ride, interested in driving when others get tired and playing a few mix tapes along the way.

Why I'm Skeptical of "Due Diligence" Based Security

Some time back, a friend of mine said “Alex, I like the concept of Risk Management, but it’s a little like the United Nations – Good in concept, horrible in execution”.

Recently, a couple of folks have been talking about how security should just be a “diligence” function; that is, we should just prove that we’re making best efforts, and that should be enough.  Now conceptually, I love the idea that we could prove our “compliance” or diligence and get a get-out-of-jail-free card when an incident happens.  I always think it’s lame when good CISOs get canned because they got “unlucky”.

Unfortunately, if risk management is infeasible, I’ve been thinking that the concept of Due Diligence Security is complete fantasy.  To carry the analogy: if Risk Management is the United Nations, then Due Diligence Security is the Justice League of Superfriends.  With He-Man.  And the animated Beatles from Yellow Submarine.  Who live in the forest with the Keebler elves and the Ewoks, where the glowing ghosts of Anakin, Obi-Wan, and Yoda perform the “Chub-Chub” song with the glowing ghosts of John Lennon and George Harrison.  That sort of fantasy.

DUE DILIGENCE BASED SECURITY IS AN ARGUMENT FROM IGNORANCE

Here’s the rub – let’s say an incident happens.  Due Diligence really only matters when there’s a court case.  And in most western courts of law these days, there’s still this concept of innocent until proven guilty.  In logic, the corresponding concept is the argument from ignorance, and it is a known logical fallacy.

Now arguments from ignorance are recognized as logical fallacies thanks to the epistemological notion of falsification.  Paraphrasing Hume paraphrasing John Stuart Mill: we cannot prove “all swans are white” simply because we’ve only ever observed white swans – BUT the observation of a single black swan is enough to prove that “not all swans are white”.  This matters in a court of law, because your ability to prove Due Diligence as a defendant will be a function of your ability to prove all swans white – all systems compliant.  The prosecution only has to show a single black swan to prove that you are NOT diligent.

Sir Karl Popper says, “Good luck with that, Mr. CISO”.
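The asymmetry fits in a few lines of toy Python (hypothetical inventory, invented numbers):

```python
# The defense must prove a universal; the prosecution needs exactly one
# counterexample. Systems list is hypothetical, for illustration only.
systems = [{"id": n, "compliant": True} for n in range(10_000)]
systems[7_342]["compliant"] = False  # one forgotten box, found in forensics

diligent = all(s["compliant"] for s in systems)               # defense's burden
black_swan = next(s for s in systems if not s["compliant"])   # prosecution's

print(f"diligent: {diligent} (falsified by system {black_swan['id']})")
```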

IT’S A TRAP!!!

The result is this: the CISO, in my humble opinion, will be in a worse condition, because we have a really poor ability to control the expansion of sensitive information throughout the complex systems (network, system, people, organization) for which they are responsible.  Let me put it this way: if information (and specifically, sensitive information) behaves like a gas, automatically expanding to wherever it’s not controlled, then how can we possibly hope that the CISO can control the “escape” or leakage of information 100% of the time with no exceptions?  And a solitary exception in a forensic investigation becomes our black swan.

And therefore… when it comes to proving Due Diligence in a court of law, Security *screws* the CISO.  Big time.

For Blog/Twitter Conversation: Can You Defend "GRC"?

Longtime readers know that I’m not the biggest fan of GRC as it is “practiced” today.  I believe G & C are subservient to risk management. So let me offer you this statement to chew on:

“A metric for Governance is only useful inasmuch as it describes an ability to manage risk”

True or false – why, and what are the implications either way?

Please discuss.

#newschoolsecurity

For Those Not In The US (or even if you are)

I’d like to wish US readers a happy Thanksgiving.  For those outside of the US, I thought this would be a nice little post for today: a pointer to an article in the Financial Times:

Baseball’s love of statistics is taking over football

Those who indulge my passion for analysis and for sport know that I love baseball, and love how the “Moneyball” approach challenged decades of dogma in the national pastime with scientific analysis.  Today’s Financial Times discusses how Chelsea (“The Blues” – the London football club) collaborates with the Boston Red Sox (the most superficial bandwagon team ever in baseball) on decision making and analytics.

Go Blues

Best lines:

“Mike Forde, Chelsea’s performance director, visits the US often. “The first time I went to the Red Sox,” he says of the Boston baseball team, “I sat there for eight hours, in a room with no windows, only flipcharts. I walked out of there saying, ‘Wow, that is one of the most insightful conversations on sport I have ever had.’ It was not: ‘What are you doing here? You do not know anything about our sport.’ That was totally irrelevant. It was: ‘How do you make decisions on players? What information do you use? How do we approach the same problems?’”

and:

“Forde sees his task as ‘risk management’.”

Huh.

Rich Mogull's Divine Assumptions

Our friend Rich Mogull has an interesting post up on his blog called “Always Assume”.  In it, he offers that “assumption” is part of a normal scenario building process, something that is fairly inescapable when making business decisions.  And he offers a simple, pragmatic process for assumptions, which is mainly scenario development, justification, and action.  Rich’s process looks like this:

  1. Assumption
  2. Reasoning: The basis for the assumption.
  3. Indicators: Specific cues that indicate whether the assumption is accurate or if there’s a problem in that area.
  4. Controls: The security/recovery/safety controls to mitigate the issue.

Nothing earth shattering here.  And like much of Rich’s work, there is an elegance, almost a minimalism to what he offers.

JUST BECAUSE I CAN’T LEAVE WELL ENOUGH ALONE….

What immediately struck me was how similar Rich’s assumption process was to a little something I like to call “scientific method”.  In scientific method, we essentially have (the following shamelessly condensed from Wikipedia):

  1. Formulate a question
  2. Construct a hypothesis
  3. Make a prediction
  4. Test with an experiment
  5. Analyze the results, draw conclusions, and re-test

So if we were to add to Rich’s assumption process above, we’d simply add the “experiments” bits up there.  If we’re building controls like the examples in Rich’s blog post, we might try a “test” that “penetrates” those controls (or, as I believe Richard Bejtlich smartly tries to get us to say, perform “Adversary Simulation”).

Also, though it will probably sour his stomach a bit, we’d also probably want to make Rich’s assumption steps a hamster-wheel-of-pain™ by noting that every so often the threat landscape will change and challenge our assumptions/conclusions/hypotheses – so re-testing is necessary.
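A rough sketch of what that might look like as something you can actually re-run (the structure and names here are my invention, not Rich’s):

```python
# Rich's process with the "experiment" step bolted on, plus the hamster wheel:
# re-test whenever the threat landscape shifts, not just once at design time.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Assumption:
    statement: str
    reasoning: str                     # the basis for the assumption
    indicators: List[str]              # cues that it's holding or failing
    controls: List[str]                # what mitigates the issue if it fails
    test: Callable[[], bool]           # the added adversary-simulation step
    last_result: Optional[bool] = None

    def retest(self) -> bool:
        """Re-run the experiment; a changed landscape can flip the result."""
        self.last_result = self.test()
        return self.last_result

a = Assumption(
    statement="The database is not reachable from the internet",
    reasoning="Segmented VLAN plus a deny-by-default firewall policy",
    indicators=["unexpected inbound flows", "firewall rule churn"],
    controls=["network segmentation", "egress filtering"],
    test=lambda: True,  # stand-in for an actual external scan/pentest
)
print(a.retest())  # schedule on every landscape change, not just once
```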

IF I HAD ANY INDICATION…

Rich does have a certain “informality” around his evidence “indication” step that I’d like to build upon.  Let me offer that when discussing probability of failure in a complex IT system, there are only four basic categories of information indicators we need to consider in Information Assurance/Security/Risk Management/Protection/Whatever.  There might be evidence around:

  • Assets (the things we want to protect and their state)
  • Threats (the things that want to harm our assets and their state)
  • Controls (the things that resist the threats and their state)
  • Impacts (the things that will happen if we are unable to resist the threat)

And if you’re going to look for clues to suggest whether there might be a problem, look no further than these basic categories for evidence.  If you’d like, you can build structure around what “state” means for each category and further develop taxonomies and metrics and whatnot.  That’s the fun bit, and I’ll let you be creative rather than write too much this morning.
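For instance, one very rough strawman (mine, nothing canonical) for putting structure on “state” per category:

```python
# Each category gets observations with a coarse state; the cues worth chasing
# fall straight out of the four categories. Observations are invented examples.
from enum import Enum

class State(Enum):
    OK = "ok"
    DRIFTING = "drifting"
    PROBLEM = "problem"

indicators = {
    "assets":   {"unknown hosts appearing on the network": State.DRIFTING},
    "threats":  {"new exploit kit targeting our stack": State.PROBLEM},
    "controls": {"patch SLA being met": State.OK},
    "impacts":  {"regulatory fines rising in our sector": State.DRIFTING},
}

cues = [(category, observation)
        for category, observations in indicators.items()
        for observation, state in observations.items()
        if state is not State.OK]
print(cues)
```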

Note that where these categories applied to Assumption may break down is in discussing management capabilities (are we operating well enough, and so forth).  Rich’s assumptive process (must.resist.urge.to.make.acronym – RAP) can certainly be used there; I’m just not sure a better taxonomy of indicators doesn’t exist for that case.

The Cost of a Near-Miss Data Breach

Near misses are very valuable signals regarding future losses. If we ignore them in our cost metrics, we might make some very poor decisions. This example shows that there is a qualitative difference between “ground truth data” (in this case, historical cash flow for data breach events) and overall security metrics, which need to reflect our estimates about the future, a.k.a. risk.
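A toy illustration of the difference (all numbers invented):

```python
# Historical cash flow sees only the breach; the near misses reveal the
# attempt rate, which is what our view of future loss should be built on.
breaches_5yr = 1      # what ground-truth cost data actually shows
near_misses_5yr = 9   # same event class, stopped in time by luck or control

attempts_per_yr = (breaches_5yr + near_misses_5yr) / 5            # 2.0 / yr
p_breach_today = breaches_5yr / (breaches_5yr + near_misses_5yr)  # 0.1
impact = 2_000_000

print(f"ALE, breaches only:      ${(breaches_5yr / 5) * impact:,.0f}")
print(f"ALE, catch rate at 90%:  ${attempts_per_yr * p_breach_today * impact:,.0f}")
# A modest slip in control effectiveness (catch rate 90% -> 70%) triples
# the expected loss -- invisible if near misses were ignored:
print(f"ALE, catch rate at 70%:  ${attempts_per_yr * 0.3 * impact:,.0f}")
```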

