What CSOs can Learn from Pete Carroll

Pete Carroll

If you listen to the security echo chamber, after an embarrassing failure like a data breach, you lose your job, right?

Let’s look at Seahawks Coach Pete Carroll, who made what the hometown paper called the “Worst Play Call Ever.” With less than a minute to go in the Super Bowl, and the game hanging in the balance, the Seahawks passed. It was intercepted, and…game over.

                  Breach                    Losing the Super Bowl
Publicity         News stories, letters     Half of America watches the game
Headline          “Another data breach”     “Worst Play Call Ever”
Cost              $187 per record!          Tens of millions in sponsorship
Public response   Guessing, not analysis    Monday morning quarterbacking*
Outcome           CSO loses job†            Pete Carroll remains employed
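That cost column invites some quick arithmetic. A minimal sketch, assuming the linear per-record model the headline number implies (the record counts are illustrative, and real per-record costs tend to fall as breaches get larger):

```python
# Back-of-the-envelope breach cost at the per-record average quoted above.
# The record counts are illustrative assumptions, not data from any incident.
COST_PER_RECORD = 187  # dollars per compromised record

def breach_cost(records: int) -> int:
    """Naive linear estimate: records times the average cost per record."""
    return records * COST_PER_RECORD

for records in (10_000, 100_000, 1_000_000):
    print(f"{records:>9,} records -> ${breach_cost(records):,}")
```

The linearity assumption is itself one of the things worth questioning before quoting such numbers to an executive.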

So what can the CSO learn from Pete Carroll?

First and foremost, have back-to-back winning seasons. Since you don’t have seasons, you’ll need to do something else that builds executive confidence in your decision making. (Nothing builds confidence like success.)

Second, you don’t need perfect success; you need successful prediction and follow-through. Gunnar Peterson has a great post about the security VP winning battles. As you start winning battles, you also need to predict what will happen: “My team will find 5-10 really important issues, and fixing them pre-ship will save us a mountain of technical debt and emergency events.” Pete Carroll had that—a system that worked.

Executives know that stuff happens. The best laid plans…no plan ever survives contact with the enemy. But if you routinely say things like “one vuln, and it’s over, we’re pwned!” or “a breach here could sink the company!” you lose any credibility you might have. Real execs expect problems to materialize.

Lastly, after what had to be an excruciating call, he took the conversation to next year, to building the team’s confidence, and not dwelling on the past.

What Pete Carroll has is a record of delivering what executives wanted, rather than delivering excuses, hyperbole, or jargon. Do you have that record?

* Admittedly, it started about 5 seconds after the play, and come on, how could I pass that up? (Ahem)
† I’m aware of the gotcha risk here. I wrote this the day after Sony Pictures Chairman Amy Pascal was shuffled off to a new studio.

Your career is over after a breach? Another Myth, Busted!

I’m a big fan of learning from our experiences around breaches. Claims like “your stock will fall”, or “your customers will flee” are shown to be false by statistical analysis, and I expect we’d see the same if we looked at people losing their jobs over breaches. (We could do this, for example, via LinkedIn and DatalossDB.)
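That LinkedIn-and-DatalossDB study could be sketched in a few lines. A hypothetical sketch: the companies, dates, and twelve-month window below are all invented for illustration, and the real work would be in the scraping and record matching:

```python
from datetime import date, timedelta

# Hypothetical records: breach dates (DatalossDB-style) and CISO departure
# dates (LinkedIn-style), keyed by company. None means still employed.
breaches = {"Acme": date(2011, 3, 1), "Initech": date(2011, 6, 15)}
departures = {"Acme": date(2013, 9, 1), "Initech": None}

def fired_within(company: str, months: int = 12) -> bool:
    """Did the CISO leave within `months` of the company's breach?"""
    left = departures.get(company)
    if left is None:
        return False
    return left <= breaches[company] + timedelta(days=30 * months)

rate = sum(fired_within(c) for c in breaches) / len(breaches)
print(f"Departure rate within a year of a breach: {rate:.0%}")
```

With real data, we could finally compare that rate to the base rate of executive turnover, which is the comparison the myth never makes.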

There’s another myth that’s out there about what happens after a breach, and that is that the breach destroys the career of the CISO and the entire security department. And so I’m pleased today to be able to talk about that myth. Frequently, when I bring up breaches and lessons we can learn, people bring up ChoicePoint as the ultimate counterexample. Now, ChoicePoint is interesting for all sorts of reasons, but from a stock price perspective, they’re a statistical outlier. And so I’m extra pleased to be able to discuss today’s lesson with ChoicePoint as our data point.

Last week, former ChoicePoint CISO Rich Baich was named Wells Fargo’s first chief information security officer. Congratulations, Rich!

Now, you might accuse me of substituting anecdote for data and analysis, and you’d be sort of right. One data point doesn’t plot a line. But not all science requires plotting a line. Oftentimes, a good experiment shows us things by being impossible under the standard model. Dropping things from the tower of Pisa shows that objects fall at the same speed, regardless of weight.

So Wells Fargo’s announcement is interesting because it provides a data point that invalidates the hypothesis “If you have a breach, your career is over.” Now, some people, less clever than you, dear reader, might try to retreat to a weaker claim “If you have a breach, your career may be over.” Of course, that “may” destroys any predictive value that the claim may have, and in fact, the claim “If [X], your career may be over,” is equally true, and equally useless, and that’s why you’re not going there.

In other words, if a breach always destroys a career, shouldn’t Rich be flipping burgers?

There are three more variant hypotheses we can talk about:

  • “If you have a breach, your career will require a long period of rehabilitation.” But Rich was leading the “Global Cyber Threat and Vulnerability Management practice” for Deloitte and Touche, which is not exactly a backwater.
  • “If you have a breach, you will be fired for it.” That one is a bit trickier. I’m certainly not going to assert that no one has ever been fired for a breach happening. But it’s also clearly untrue. The weaker version is “if you have a breach, you may be fired for it”, and again, that’s not useful or interesting.
  • “If you have a breach, it will improve your career.” That’s also obviously false, and the weaker version isn’t falsifiable. But perhaps the lessons learned, focus, and publicity around a breach can make it helpful to your career. It’s not obviously dumber than the opposite claims.

So overall, what is useful and interesting is that yet another myth around breaches turns out to be false. So let’s start saying a bit more about what went wrong, and learning more about what’s going wrong.

Finally, again, congratulations and good luck to Rich in his new role!

Top 5 Security Influencers of 2011

I really like Gunnar Peterson’s post on “Top 5 Security Influencers:”

Its December and so its the season for lists. Here is my list of Top 5 Security Influencers, this is the list with the people who have the biggest (good and/or bad) influence on your company and user’s security:

My list is slightly different:

  1. The Person Coding Your App
  2. Your DBA
  3. Your Testers
  4. Your Ops team
  5. The person with the data
  6. Uma Thurman
  7. You

That’s right, without data to argue an effective case for investing in security, you have less influence than Uma Thurman. And even if you have more influence than her, if you want to be in the top 5, you better be the person bringing the data.

As long as we’re hiding everything that might allow us to judge comparative effectiveness, we’re going to continue making no progress.

Ahh, but which Uma?
Update: Chris Hoff asks “But WHICH Uma? Kill Bill Uma or Pulp Fiction Uma?” and sadly, I have to answer: The Truth About Cats and Dogs Uma. You remember. Silly romantic comedy where guy falls in love with radio veterinarian Janeane Garofalo, who’s embarrassed about her looks? And Uma plays her gorgeous but vapid neighbor? That’s the Uma with the more influence than you. The one who spends time trying to not be bubbly when her audition for a newscaster job leads off with “hundreds of people feared dead in a nuclear accident?” Yeah. That Uma. Because at least she’s nice to look at while going on about stuff no one cares about. But you know? If you show up with some chops and some useful data to back your claims, you can do better than that.

On the downside, you’re unlikely to ever be as influential as Kill Bill Uma. Because, you know, she has a sword, and a demonstrated willingness to slice the heads off of people who argue with her, and a don’t-care attitude about jail. It’s hard to top that for short term influence. Just ask the 3rd guy trying to code your app, and hoping it doesn’t crash. He’s got eyes for no one not carrying that sword.

The Diginotar Tautology Club

I often say that breaches don’t drive companies out of business. Some people are asking me to eat crow because Vasco is closing its subsidiary Diginotar after the subsidiary was severely breached, failed to notify its reliant parties, misled people when it did, and then allowed perhaps hundreds of thousands of people to fall victim to a man-in-the-middle attack. I think Diginotar was an exception that proves the rule.

Statements about Diginotar going out of business are tautological. They take a single incident, selected because the company is going out of business, and then generalize from that. Unfortunately, Diginotar is the only CA that has gone out of business, and so anyone generalizing from it is starting from a set of businesses that have gone out of business. If you’d like to make a case for taking lessons from Diginotar, you must address why Comodo hasn’t gone belly up after *4* breaches (as counted by Moxie in his BlackHat talk).
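The selection problem is easy to simulate. A toy sketch (the survival probability is an invented assumption): if you estimate the failure rate only from the set of companies that failed, you always get 100%, no matter what the true rate is:

```python
import random

random.seed(0)

# Toy model: each breached CA independently survives with some probability.
# P_SURVIVE_BREACH is an invented assumption, chosen only for illustration.
P_SURVIVE_BREACH = 0.9

cas = [random.random() < P_SURVIVE_BREACH for _ in range(1000)]  # True = survived

# Honest estimate: condition on the breach, look at ALL breached CAs.
p_fail_given_breach = cas.count(False) / len(cas)

# Tautological "estimate": select only the failures, then generalize from them.
failures = [ca for ca in cas if not ca]
p_fail_in_selected = failures.count(False) / len(failures)

print(p_fail_given_breach)   # close to the true 10%
print(p_fail_in_selected)    # exactly 1.0, regardless of the true rate
```

Generalizing from Diginotar alone is the second calculation; the first one requires counting Comodo and every other breached-but-surviving CA too.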

It would probably also be helpful to comment on how Diginotar’s revenue rate of 200,000 euros in 2011 might contribute to its corporate parent deciding that damage control is the most economical choice, and what lessons other businesses can take.

To be entirely fair, I don’t know that Diginotar’s costs were above 200,000 euros per year, but a quick LinkedIn search shows 31 results, most of whom have not yet updated their profiles.

So take what lessons you want from Diginotar, but please don’t generalize from a set of one.

I’ll suggest that “be profitable” is an excellent generalization from businesses that have been breached and survived.

[Update: @cryptoki pointed me to the acquisition press release, which indicates 45 employees, and “DigiNotar reported revenues of approximately Euro 4.5 million for the year 2009. We do not expect 2010 to be materially different from 2009. DigiNotar’s audited statutory report for 2009 showed an operating loss of approximately Euro 280.000, but an after-tax profit of approximately Euro 380.000.” I don’t know how to reconcile the press statements of January 11th and August 30th.]

15 Years of Software Security: Looking Back and Looking Forward

Fifteen years ago, I posted a copy of “Source Code Review Guidelines” to the web. I’d created them for a large bank, because at the time, there was no single document on writing or reviewing for security that was broadly available. (This was about four years before Michael Howard and Dave LeBlanc published Writing Secure Code or Gary McGraw and John Viega published Building Secure Software.)

So I assembled what we knew, and shared it to get feedback and help others. In looking back, the document describes what we can now recognize as an early approach to security development lifecycles, covering design, development, testing and deployment. It even contains a link to the first paper on fuzzing!

Over the past fifteen years, I’ve been involved in software security as a consultant, as the CTO of Reflective, a startup that delivered software security as a service, and as a member of Microsoft’s Security Development Lifecycle team where I focused on improving the way people threat model. I’m now working on usable security and how we integrate it into large-scale software development.

So after 15 years, I wanted to look forward a little at what we’ve learned and deployed, and what the next 15 years might bring. I should be clear that (as always) these are my personal opinions, not those of my employer.

Looking Back

Filling the Buffer for Fun and Profit
I released my guidelines 4 days before Phrack 49 came out with a short article called “Smashing The Stack For Fun And Profit.” Stack smashing wasn’t new. It had been described clearly in 1972 by James P. Anderson in the “Computer Security Technology Planning Study,” and publicly and dramatically demonstrated by the 1988 Morris Worm’s exploitation of fingerd. But Aleph1’s article made the technique accessible and understandable. The last 15 years have been dominated by important bugs which share two characteristics: they are easily demonstrated as “undesired functionality,” and they are relatively easy to fix, as nothing should really depend on them.
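For readers who never met the technique, the layout Aleph1 described can be modeled without touching real memory. A toy sketch in which a bytearray stands in for a stack frame; the addresses are made up and nothing here is exploitable:

```python
# Toy model of a stack frame: a 16-byte local buffer sits directly below the
# 4-byte saved return address. Pure simulation; no real memory is touched.
def make_frame(ret: int) -> bytearray:
    return bytearray(16) + ret.to_bytes(4, "little")

def unsafe_copy(frame: bytearray, src: bytes) -> None:
    """Copy with no bounds check, like C's strcpy into a stack buffer."""
    frame[:len(src)] = src

def saved_return_address(frame: bytearray) -> int:
    return int.from_bytes(frame[16:20], "little")

frame = make_frame(0xDEADBEEF)          # 0xDEADBEEF is an invented address
print(hex(saved_return_address(frame)))  # 0xdeadbeef: the legitimate address

# 16 filler bytes overflow the buffer; the next 4 land on the return address.
payload = b"A" * 16 + (0x41414141).to_bytes(4, "little")
unsafe_copy(frame, payload)
print(hex(saved_return_address(frame)))  # 0x41414141: attacker-chosen
```

The whole class of bugs reduces to that one unchecked copy, which is why these bugs are both easy to demonstrate and easy to fix.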

The vuln and patch cycle
As a side effect of easily demonstrated memory corruption, we became accustomed to a cycle of proof-of-concept, sometimes a media cycle and usually a vendor response that fixed the issue. Early on, vendors ignored the bug reports or threatened vulnerability finders (who sometimes sounded like they were trying to blackmail vendors) and so we developed a culture of full disclosure, where researchers just made their discoveries available to the public. Some vendors set up processes for accepting security bug reports, with a few now offering money for such vulnerabilities, and we have a range of ways to describe various approaches to disclosure. Along the way, we created the CVE to help us talk about these vulnerabilities.

In some recent work, we discovered that the phrase “CVE-style vulnerability” was a clear descriptor that cut through a lot of discussion about what was meant by “vulnerability.” The need for terms to describe types of disclosure and vulnerabilities is an interesting window into how often we talk about it.

The industrialization of security
One effect of memory corruption vulnerabilities was that it was easy to see that the unspecified functionality was a bug. Those bugs were things that developers would fix. There’s a longstanding, formalist perspective that “A program that has not been specified cannot be incorrect; it can only be surprising.” (“Proving a Computer System Secure“) That “formalist” perspective held us back from fixing a great many security issues. Sometimes the right behavior was hard to specify in advance. Good specifications are always tremendously expensive (although that’s sometimes still cheaper than not having them). When we started calling those things bugs, we started to fix them. And when we started to fix bugs, we got people interested in practical ways to reduce the number of those bugs. We had to organize our approaches, and discover which ones worked. Microsoft started sharing lots of its experience before I joined up, and that’s helped a great many organizations get started doing software security “at scale.”

Another aspect of the industrialization of security is the massive growth of security conferences. There are, again, many types: hacker cons, vulnerability researcher cons, and industry events like RSA. There’s also a rise in academic conferences. All of these (except BlackHat-style conferences) existed in 1996, but their growth has been spectacular.

Looking forward in software security

Memory corruption
The first thing that I expect will change is our focus on memory corruption vulnerabilities. We’re getting better at finding these early in development, even in weakly typed languages, and better at building platforms with randomization built in to make the remainder harder to exploit. We’ll see a resurgence of command injection, design flaws, and a set of things that I’m starting to think of as feature abuse. That includes things like Autorun, JavaScript in PDFs (and heck, maybe JavaScript in web pages), and also things like spam.

Human factors
Human factors in security will become even more obviously important, as more and more decisions will be required of the person because the computer just doesn’t know. Making good decisions is hard, and most of the people we’ll ask to make decisions are not experts, and reasonably prefer to just get their job done. We’re starting to see patterns like the “gold bars” and advice like “NEAT.” I expect we’ll learn a lot about how attacks work, how to build defenses, and coalesce around a set of reasonable expectations of someone using a computer. Those expectations will be slimmer than security experts will prefer, but good science and data will help make reasonable expectations clear.

Embedded systems
As software gets embedded in everything, so will flaws. Embedded systems will come with embedded flaws. The problems will hit not just Apple or Android, but cars, centrifuges, medical devices, and everything with code in it. Which will be a good approximation of everything. One thing we’ve seen is that applying modern vulnerability finding techniques to software released without any security testing is like kicking puppies. They’re just not ready for it. Actually, that’s a little unfair. It’s more like throwing puppies into a tank full of laser-equipped sharks. Most things will not have update mechanisms for a while, and when they do, updates will increasingly be a battleground.

Patch Trouble
Apple already forces you to upgrade to the “latest and greatest,” and agree to the new EULA, before you get a security update. DRM schemes will block access to content if you haven’t updated. The pressure to accept updates will be intense. Consumer protection issues will start to come up, and things like the Etisalat update for BlackBerry will become more common. These bundled updates will reduce people’s willingness to accept updates, and slow the closing of windows of software vulnerability.

EULA wars will heat up as bad guys get users to click through contracts forbidding them from removing the software. Those bad guys will include actual malware distributors, Middle Eastern telecom companies, and a lot of organizations that fall into a grey area.

The interplay between privacy and security will get a lot more complex and nuanced as our public discourse gets less so. Our software will increasingly be able to extract all sorts of data, but also to act on our behalf in all sorts of ways. Compromised software will scribble racist comments on your Facebook wall, and companies like Social Intelligence will store those comments for you to justify ever after.

Careers in software security will become increasingly diverse. It’s already possible to specialize in fuzzing, in static or dynamic analysis, in threat modeling, in security testing, training, etc. We’ll see lots more emerge over the next fifteen years.

Things we won’t see

We won’t see substantially better languages make the problem go away. We may move it around, and we may eliminate some of it, but PHP is the new C because it’s easy to write quick and dirty code. We’ll have cool new languages with nifty features, running on top of resilient platforms. Clever attackers will continue to find ways to make things behave unexpectedly.

Nor will we see a lack of interesting controversies.

(Not) looking into the abyss

There are a couple of issues I’m not touching at all. They include cloud, because I don’t know what I want to say, and cyberwar, because I know what I don’t want to say. I expect both to be interesting.

Your thoughts?

So, that’s what I think I’ve seen, and that’s what I think I see coming. Did I miss important stories along the way? Are there important trends that will matter that I missed?

[Editor’s note: Updated to clarify the kicking puppy analogy.]

Communicating with Executives for more than Lulz

On Friday, I ranted a bit about “Are Lulz our best practice?” The biggest pushback I heard was that management doesn’t listen, or doesn’t make decisions in the best interests of the company. I think there’s a lot going on there, and want to unpack it.

First, a quick model of getting executives to do what you want. I’ll grossly oversimplify to 3 ordered parts.

  1. You need a goal. Some decision you think is in the best interests of the organization, and reasons you think that’s the case.
  2. You need a way to communicate about the goal and the supporting facts and arguments.
  3. You need management who will make decisions in the best interests of the organization.

The essence of my argument on Friday is that 1 & 2 are often missing or under-supported. Either the decisions are too expensive given the normal (not outlying) costs of a breach, or the communication is not convincing.

I don’t dispute that there are executives who make decisions in a way that’s intended to enrich themselves at the expense of shareholders, that many organizations do a poor job setting incentives for their executives, or that there are foolish executives who make bad decisions. But that’s a flaw in step 3, and to worry about it, we have to first succeed in 1 & 2. If you work for an organization with bad executives, you can (essentially) either get used to it or quit. For everyone in information security, bad executives are the folks above you, and you’re unlikely to change them. (This is an intentional over-simplification to let me get to the real point. Don’t get all tied in knots? k’thanks.)

Let me expand on insufficient facts, the best interests of the organization and insufficient communication.

Sufficient facts means that you have the data you need to convince an impartial, or even a somewhat partial, world that there’s a risk tradeoff worth making. That if you invest in A over B, the expected cost to the organization will fall. And if B is an investment in raising revenues, then the odds that A happens are high enough that it’s worth taking the risk of not raising revenue and accepting the loss from A. Insufficient facts describes what happens because we keep most security problems secret. It happens in several ways, prominent amongst them that we can’t really do a good job of calculating probabilities or losses, and that we have distorted views of those probabilities and losses.

Now, one commenter, “hrbrmstr” said: “I can’t tell you how much a certain security executive may have tried to communicate the real threat actor profile (including likelihood & frequency of threat action)…” And I’ll say, I’m really curious how anyone is calculating frequency of threat action. What’s the numerator and denominator in the calculation? I ask not because it’s impossible (although it may be quite hard in a blog comment) but because the “right” values to use for those is subject to discussion and interpretation. Is it all companies in a given country? All companies in a sector? All attacks? Including port-scans? Do you have solid reasons to believe something is really in the best interests of the organization? Do they stand up to cross-examination? (Incidentally, this is a short form of an argument that we make in chapter 4 of the New School of Information Security, which is the book which inspired this blog.)
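The numerator/denominator question can be made concrete. A sketch with invented counts, showing how the same organization’s “frequency of threat action” swings by orders of magnitude depending on what counts as an attack and who sits in the denominator:

```python
# All counts below are invented for illustration.
attacks_serious = 40          # confirmed intrusion attempts against our sector
attacks_with_scans = 40_000   # same year, if every port scan counts as an attack

companies_all = 50_000        # all companies in the country
companies_sector = 500        # companies in our sector

# Four defensible-sounding "frequencies" from the same underlying events.
for attacks in (attacks_serious, attacks_with_scans):
    for population in (companies_all, companies_sector):
        print(f"{attacks / population:.4f} attacks per company per year")
```

The spread runs from 0.0008 to 80: five orders of magnitude. The point is not which denominator is right, but that the choice must be argued for, not assumed.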

I’m not saying that hrbrmstr has the right facts or not. I’m saying that it’s important to have them, and to be able to communicate about why they’re the right facts. That communication must include listening to objections that they’re not the right ones, and addressing those. (Again, assuming a certain level of competence in management. See above about accept or quit.)

Shifting to insufficient communication, this is what I meant by the lulzy statement “We’re being out-communicated by people who can’t spell.” Communication is a two-way street. It involves (amongst many other things) formulating arguments that are designed to be understood, and actively listening to objections and questions raised.

Another commenter, “Hmmm” said, “I’ve seen instances where a breach occurred, the cause was identified, a workable solution proposed and OK’d… and months or years later a simple configuration change to fix the issue is still not on the implementation schedule.”

There are two ways I can interpret this. The first is that “Hmmm’s” idea of simple isn’t really simple (insofar as it breaks something else). Perhaps fixing the breach is as cheap and easy as fixing the configurations, but there are other, higher impact things on the configuration management todo list. I don’t know how long that implementation schedule is, nor how long he’s been waiting. And perhaps his management went to clown school, not MBA school. I have no way to tell.

What I do know is that often the security professionals I’ve worked with don’t engage in active listening. They believe their path is the right one, and when issues like competing activities in configuration management are brought up, they dismiss the issue and the person who raised it. And you might be right to do so. But does it help you achieve your goal?

Feel free to call me a management apologist, if that’s easier than learning how to get stuff done in your organization.

Would a CISO benefit from an MBA education?

If a CISO is expected to be an executive officer (esp. for a large, complex technology- or information-centered organization), then he/she will need the MBA-level knowledge and skill. MBA is one path to getting those skills, at least if you are thoughtful and selective about the school you choose. Other paths are available, so it’s not just about an MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then MBA education wouldn’t be of much value.

This question was introduced recently in an article by Upasana Gupta: Should a CISO Have an MBA? She asked four CISOs their opinion, and three essentially said “no”, while one said “yes”. Eric, at the Security, Cigars, and FUD blog, posted his opinion here and here. Basically, he said “no, it’s not necessary as a credential, but some business knowledge might be helpful”. The opinions offered on Twitter were almost universally “no”.

As a business guy, I was somewhat surprised that much of the discussion and opinion centered on the MBA as a credential rather than on what knowledge or skills someone would learn in an MBA program. None of us at the New School is a fan of credentials as such, so my interest in this question is in the educational value compared to alternative investments in education.

Also following the New School philosophy, I thought I would look for data and evidence rather than just offering my opinion.

To my delight, I found a fairly comprehensive study: “The Chief Information Security Officer: An Analysis of the Skills Required for Success,” by Dwayne Whitten of Texas A&M University. The paper is worth reading because it gives a good overview of the conflicting values and forces that are affecting CISO hiring, evaluation, and effectiveness.

Specifically, he finds a gap between how CISOs define success and the job duty descriptions. Quoting from his conclusion:

Based on a thorough review of the literature, interviews with security executives, and an analysis of job listings, a comprehensive list of duties and background/experience requirements were found related to the CISO position (see Table 3). The most interesting issue that arose from this research is that business strategy did not make the list of most included job duties. Given the high level of importance given to this by the literature and the executives, it is surprising that it was not listed on the job listings surveyed. Thus, it appears that many of the organizations searching for new CISOs during the research period did not fully understand the importance of including the CISO in the business strategy formulation.  [emphasis added]

This dichotomy seems to relate to how CISOs are viewed. From one point of view, CISO is equivalent to “Most Senior Information Security Manager”. That is, they contribute to the organization in exactly the same way as other information security managers do, only on a larger scope. It is this perspective that is most closely aligned with the opinion that an MBA education would not be helpful. Instead, it would be more valuable to get deeper education in the technical aspects of InfoSec — engineering, forensics, incident response — plus regulations, compliance, etc.

Another point of view is that a CISO is an executive officer of the organization, and thus has fiduciary duties to stakeholders regarding the organization’s overall performance, and also has teamwork responsibilities with the other executive officers regarding crucial strategic decisions.

Maybe this is rare in practice, and maybe the “Chief Information Security Officer” title is just another example of rampant job title inflation. But if CISOs in some organizations are expected to perform in this role, then they are not “just another information security manager, only bigger”. Their job is qualitatively different, and the knowledge gained at a good quality B-school might be just what they need.

To respond to Eric, who said “And I’ve yet to see a course on security risk management in traditional MBA programs”, I offer two examples: 1) James Madison University offers an MBA in Information Security. 2) Worcester Polytechnic Institute offers an MBA concentration in Information Security Management. The WPI MBA course catalog lists quite a few courses that would be directly valuable to a CISO (e.g. “Information Security Management”, “Operations Risk Management”, and “E-Business Applications”), plus many that would be indirectly valuable (statistics, change management, negotiations). (Disclosure: I got my undergraduate degree from WPI. Their MBA program is very good, esp. for technical managers.)

I’ll close with a comprehension test for CISOs. Read this workshop report: Embedding Information Security Risk Management into the Extended Enterprise. It’s the output of 18 CISOs discussing the most challenging issues facing them regarding information security across their enterprises and across their supply chains.

I think you’ll see that most of the problems involve analysis and methods that go well beyond the typical education and experience of information security managers. Instead, they require knowledge and skills that are more typically covered in MBA programs — business strategy, economics, finance, organizational behavior and change management, organizational performance management and incentives, plus business law and public policy.

Conclusion: if a CISO is expected to be an executive officer (esp. for a large, complex technology- or information-centered organization), then he/she will need the knowledge and skill exemplified by the comprehension exercise, above.  MBA is one path to getting those skills, at least if you are thoughtful and selective about the school you choose.  Other paths are available, so it’s not just about an MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then MBA education wouldn’t be of much value.

A Letter from Sid CRISC-ious

In the comments to “Why I Don’t Like CRISC,” where I challenge ISACA to show us, in valid scales and in publicly available models, the risk reduction from COBIT adoption, reader Sid starts to get it, but then kinda devolves into a defense of COBIT or something. But it’s a great comment, and I wanted to address his points and clarify my position a bit. Sid writes:


Just imagine (or try at your own risk) this –

Step 1. Carry out risk assessment
Step 2. In your organisation, boycott all COBiT recommendations / requirements for 3-6 months
Step 3. Carry out risk assessment again

Do you see increase in risk? If Yes, then you will agree that adoption of Cobit has reduced the risk for you so far.

You might argue that its ‘a Control’ that ultimately reduces risk & not Cobit.. however I sincerely feel that ‘effectiveness’ of the control can be greatly improved by adopting cobit governance framework & Improvement of controls can be translated into reduced risk.

I can go on writting about how cobit also governs your risk universe, but I am sure you are experienced enough to understand these overlapping concepts without getting much confused.

Nice try, Sid!  However, remember my beef is that Information Risk Management isn’t mature enough.  Thus I’ve asked for “valid scales” (i.e. not multiplication or division using ordinal values) and publicly available models (because the state of our public models best mirrors the maturity of the overall population of risk analysts).

And that’s my point: even if I *give* you the fact that we can make proper point predictions for a complex adaptive system (which I would argue we can’t, thus nullifying every IT risk approach I’ve ever seen), there isn’t a publicly available model that can do Steps One and Three in a defensible manner. Yet ISACA seems hell-bent on pushing forth some sort of certification (money talks?). This despite our industry’s inability to even use the correct numerical scales in risk assessment, much less actually perform risk assessment in a way that can be used to govern at a strategic level, or even show an ability to identify key determinants in a population.

Seriously, if you can’t put two analysts in two separate rooms and have them arrive at the same conclusions given the same data, how can you possibly “certify” anything other than “this person is smart enough to know there isn’t an answer”?


I want to make one thing clear.  My beef isn’t with ISACA, it’s not with COBIT, it’s not with audit.  I think all three of these things are awesome to some degree, for some reasons.  And especially, Sid, my beef isn’t with process – I’m a big process weenie these days, because the data we do have (see Visible Ops for Security) suggests that maturity is a risk-reducing determinant.  However, this is like a doctor telling a fat person that they should exercise, based on a vague association in a published study of some bias.  How much, what kind, and how effective it would be compared to the existing lifestyle (and especially how to change that lifestyle, if it is a cause) are still very much guesses.  It’s an expert opinion (if I can call myself an expert), not a scientific fact.

In the same way, your assertion about COBIT fails reasoned scrutiny.  First, there is the element of “luck.”  In what data we do have, we see a pretty even spread in the frequency of data breaches between determined and random attackers.  That latter aspect means it’s entirely likely that we could dump COBIT and NOT see a worsening of outcomes (whether that would be an increase in risk is another philosophical argument for another day).

Second, maybe it’s my “lack of experience,” but I will admit that I am very confused these days as to a proper definition of IT Security Governance.  Here’s why: there are many definitions (formal and informal) of what ITSec G is.  If you argue that it is simply the assignment of responsibility, that’s fine.  If you want to call it a means to mature an organization in order to reduce risk (as you do above), then we have to apply proper scrutiny to maturity models, and to how the outcomes of those models influence risk assessment outcomes (the wonderful part of your comment is its absolute recognition of this).  And if ITSec G is an enumeration of the actual processes that “must” be done, then we get to ask “why.”  Once that happens – well, I’m from Missouri; you have to show me.  And then we’re back into risk modeling, at which, of course, we’re simply very immature.

Any way I look at it, Sid, I can’t see how we’re ready for a certification around Information Risk Management.

Side Note: My problem with IT Security Governance is this: if at any point there needs to be measuring and modeling done to justify codified IT Security Governance, then the governance documentation is really just a model that says “here’s how the world should work” – and, as a model, it requires measurement, falsification, and comparative analytics.  In other words, it’s just management.  In this case, the management of IT risk, which sounds like a nice way of saying “risk management.”

Don't fight the zeitgeist, CRISC Edition

Some guy recently posted a strangely self-defeating link/troll/flame in an attempt to (I think) argue with Alex and/or myself regarding the relevance, or lack thereof, of ISACA’s CRISC certification.  Given that I think he might have been doing it to drive traffic to his CRISC training site, I won’t show him any link love (although I’m guessing he’ll show up in comments and save me the effort).  Still, he called my Dad (“Mr. Howell”) out by name, which is a bit cheeky seeing as how my Dad left the mortal coil some time ago, so I’ll respond on Dear ol’ Dad’s behalf.

Now the funny thing about that is that I had pretty much forgotten all about CRISC, even though we’ve had a lot of fun with it here at the New School and made what I thought were some very good points about the current lack of maturity in Risk Management and why the last thing we need is another process-centric certification passing itself off as expertise.

I went back and re-read the original articles, and I think that they are still spot-on, so I decided that I would instead take another look at CRISC-The-Popularity-Contest and see who has turned out to be right in terms of CRISC’s relevance now that it’s been nine months almost to the day since ISACA announced it.

Quick, dear readers, to the GoogleCave!

Hmm…CRISC isn’t doing so well in the Long Run.  That’s a zero (0) for the big yellow crisc.

Of course, in the Long Run, we’re all dead, so maybe I should focus on a shorter time frame. Also, I see that either Crisco’s marketing team only works in the fall or most people only bake around the holidays.  If you had asked me, I would not have predicted that Crisco had a strong seasonality to it, so take whatever I say here with a bit of shortening.

Let’s try again, this time limiting ourselves to the past 12 months.

Nope…still nothing, although the decline of the CISSP seems to have flattened out a bit.  Also, we can now definitely see the spikes in Crisco searches correlating to Thanksgiving and Christmas.  Looks like people don’t bake for Halloween (too bad – pumpkin bread is yummy), and probably don’t bake well for Thanksgiving and Christmas if they have to google about Crisco.

Oh, well.  Sorry, CRISC.

Now, if you’ll excuse me, I have a cake to bake.

P.S.  Yes, I’m aware my screenshots overflow the right margin.  No, I’m not going to fix it.