Building an Application Security Team

The Application Security Engineer role is in high demand these days. Plenty of job offers are available, but actual candidates are scarce. Why is that? It's not an easy role to fill, because the skills required are both broad and specialized at the same time. Most of the offers describe one person, one unicorn, who does all those wonderful things to ensure that the organization is making secure software.


Looking at where we are coming from

For the sake of simplicity, let's say that the application security industry is traditionally split between two main types of offering: those who sell products and those who sell pen test services. On many occasions both come from the same vendor, but from different teams internally. Let's break down how each is judged on its success, to get an idea of how the industry has evolved.


It’s Not The Crime, It’s The Coverup or the Chaos

Well, Richard Smith has “resigned” from Equifax.

The CEO being fired is a rare outcome of a breach, and so I want to discuss what's going on and put it into context, which includes the failures at DHS and the Deloitte breach. I also usually aim to follow the advice to praise specifically and criticize in general, but I'm breaking that pattern here, because we can learn so much from the specifics of these cases, and in so learning, do better.

Smith was not fired because of the breach. Breaches happen. Executives know this. Boards know this. The breach is outside of their control. Smith was fired because of the post-breach chaos. Systems that didn't work. Tweeting links to a scam site for two weeks. PINs that were recoverable. Weeks of systems saying "you may have been a victim." Headlines like "Why the Equifax Breach Stings So Bad" in the NYTimes. Smith was fired in part because of the post-breach chaos, which was something he was supposed to control.

But it wasn't just the chaos. It was that Equifax displayed so much self-centeredness after the breach. They had the chutzpah to offer up their own product as a remedy. And that self-dealing comes from Equifax seeing itself as a victim. From failing to understand how the breach would be seen by the rest of the world. And that's a very similar motive to the one that leads to coverups.

In The New School Andrew and I discussed how fear of firing was one reason that companies don’t disclose breaches. We also discussed how, once you agree that “security issues” are things which should remain secret or shared with a small group, you can spend all your energy on rules for information sharing, and have no energy left for actual information sharing.

And I think that’s the root cause of “U.S. Tells 21 States That Hackers Targeted Their Voting Systems” a full year after finding out:

The notification came roughly a year after officials with the United States Department of Homeland Security first said states were targeted by hacking efforts possibly connected to Russia.

A year.

A year.

A year after states were first targeted. A year in which “Obama personally warned Mark Zuckerberg to take the threats of fake news ‘seriously.’” (Of course, the two issues may not have been provably linkable at the time.) But. A year.

I do not know what the people responsible for getting that message to the states were doing during that time, but we have every reason to believe that it probably had to do with (and here, I am using not my sarcastic font, but my scornful one) “rules of engagement,” “traffic light protocols,” “sources and methods” and other things which are at odds with addressing the issue. (End scornful font.) I understand the need for these things. I understand protecting sources is a key role of an intelligence service which wants to recruit more sources. And I also believe that there’s a time to risk those things. Or we might end up with a President who has more harsh words for Australia than the Philippines. More time for Russia than Germany.

In part, we have such a President because we value secrecy over disclosure. We accept these delays and view them as reasonable. Of course, the election didn’t turn entirely on these issues, but on our electoral college system, which I discussed at some length, including ways to fix it.

All of which brings me to the Deloitte breach, "Deloitte hit by cyber-attack revealing clients' secret emails." Deloitte, along with the others who make up the big four audit firms, gets access to its clients' deepest secrets, and so you might expect that the response to the breach would involve similar levels of outrage. And I suspect a lot of partners are making a lot of hat-in-hand visits to boardrooms, and contritely trying to answer questions like "what the flock were you people doing?" and "why the flock weren't we told?" I expect that there are going to be some very small bonuses this year. But, unlike our relationship with Equifax, boards do not feel powerless in relation to their auditors. They can pick and swap. Boards do not feel that the system is opaque and unfair. (They sometimes feel that the rules are unfair, but that's a different failing.) The extended reporting time will likely be attributed to the deep analysis that Deloitte did so it could bring facts to its customers, and that might even be reasonable. After all, a breach is tolerable; chaos afterwards may not be.

The two biggest predictors of public outrage are chaos and coverups. No, that’s not quite right. The biggest causes are chaos and coverups. (Those intersect poorly with data brokerages, but are not limited to them.) And both are avoidable.

So what should you do to avoid them? There’s important work in preparing for a breach, and in preventing one.

  • First, run tabletop response exercises to understand what you’d do in various breach scenarios. Then re-run those scenarios with the principals (CEO, General Counsel) so they can practice, too.
  • To reduce the odds of a breach, realize that you need continuous and integrated security as part of your operational cycles. Move from focusing on pen tests, red teams and bug bounties to a focus on threat modeling, so you can find problems systematically and early.

I’d love to hear what other steps you think organizations often miss out on.

What CSOs can Learn from Pete Carroll

Pete Carroll

If you listen to the security echo chamber, after an embarrassing failure like a data breach, you lose your job, right?

Let's look at Seahawks Coach Pete Carroll, who made what the hometown paper called the "Worst Play Call Ever." With less than a minute to go in the Super Bowl, and the game hanging in the balance, the Seahawks passed. It was intercepted, and…game over.

                | Breach                 | Lose the Super Bowl
Publicity       | News stories, letters  | Half of America watches the game
Headline        | Another data breach    | Worst Play Call Ever
Cost            | $187 per record!       | Tens of millions in sponsorship
Public response | Guessing, not analysis | Monday morning quarterbacking*
Outcome         | CSO loses job          | Pete Carroll remains employed

So what can the CSO learn from Pete Carroll?

First and foremost, have back-to-back winning seasons. Since you don't have seasons, you'll need to do something else that builds executive confidence in your decision making. (Nothing builds confidence like success.)

Second, you don’t need perfect success, you need successful prediction and follow-through. Gunnar Peterson has a great post about the security VP winning battles. As you start winning battles, you also need to predict what will happen. “My team will find 5-10 really important issues, and fixing them pre-ship will save us a mountain of technical debt and emergency events.” Pete Carroll had that—a system that worked.

Executives know that stuff happens. The best laid plans…no plan ever survives contact with the enemy. But if you routinely say things like “one vuln, and it’s over, we’re pwned!” or “a breach here could sink the company!” you lose any credibility you might have. Real execs expect problems to materialize.

Lastly, after what had to be an excruciating call, he took the conversation to next year, to building the team’s confidence, and not dwelling on the past.

What Pete Carroll has is a record of delivering what executives wanted, rather than delivering excuses, hyperbole, or jargon. Do you have that record?

* Admittedly, it started about 5 seconds after the play, and come on, how could I pass that up? (Ahem)
† I’m aware of the gotcha risk here. I wrote this the day after Sony Pictures Chairman Amy Pascal was shuffled off to a new studio.

Your career is over after a breach? Another Myth, Busted!

I’m a big fan of learning from our experiences around breaches. Claims like “your stock will fall”, or “your customers will flee” are shown to be false by statistical analysis, and I expect we’d see the same if we looked at people losing their jobs over breaches. (We could do this, for example, via LinkedIn and DatalossDB.)

There’s another myth that’s out there about what happens after a breach, and that is that the breach destroys the career of the CISO and the entire security department. And so I’m pleased today to be able to talk about that myth. Frequently, when I bring up breaches and lessons we can learn, people bring up ChoicePoint as the ultimate counterexample. Now, ChoicePoint is interesting for all sorts of reasons, but from a stock price perspective, they’re a statistical outlier. And so I’m extra pleased to be able to discuss today’s lesson with ChoicePoint as our data point.

Last week, former ChoicePoint CISO Rich Baich was named Wells Fargo's first chief information security officer. Congratulations, Rich!

Now, you might accuse me of substituting anecdote for data and analysis, and you’d be sort of right. One data point doesn’t plot a line. But not all science requires plotting a line. Oftentimes, a good experiment shows us things by being impossible under the standard model. Dropping things from the tower of Pisa shows that objects fall at the same speed, regardless of weight.

So Wells Fargo’s announcement is interesting because it provides a data point that invalidates the hypothesis “If you have a breach, your career is over.” Now, some people, less clever than you, dear reader, might try to retreat to a weaker claim “If you have a breach, your career may be over.” Of course, that “may” destroys any predictive value that the claim may have, and in fact, the claim “If [X], your career may be over,” is equally true, and equally useless, and that’s why you’re not going there.

In other words, if a breach always destroys a career, shouldn’t Rich be flipping burgers?

There are three more variant hypotheses we can talk about:

  • “If you have a breach, your career will require a long period of rehabilitation.” But Rich was leading the “Global Cyber Threat and Vulnerability Management practice” for Deloitte and Touche, which is not exactly a backwater.
  • “If you have a breach, you will be fired for it.” That one is a bit trickier. I’m certainly not going to assert that no one has ever been fired for a breach happening. But it’s also clearly untrue. The weaker version is “if you have a breach, you may be fired for it”, and again, that’s not useful or interesting.
  • "If you have a breach, it will improve your career." That's also obviously false, and the weaker version isn't falsifiable. But perhaps the lessons learned, focus, and publicity around a breach can make it helpful to your career. It's not obviously dumber than the opposite claims.

So overall, what is useful and interesting is that yet another myth around breaches turns out to be false. So let’s start saying a bit more about what went wrong, and learning more about what’s going wrong.

Finally, again, congratulations and good luck to Rich in his new role!

Top 5 Security Influencers of 2011

I really like Gunnar Peterson’s post on “Top 5 Security Influencers:”

It's December and so it's the season for lists. Here is my list of Top 5 Security Influencers; this is the list of the people who have the biggest (good and/or bad) influence on your company and users' security:

My list is slightly different:

  1. The Person Coding Your App
  2. Your DBA
  3. Your Testers
  4. Your Ops team
  5. The person with the data
  6. Uma Thurman
  7. You

That’s right, without data to argue an effective case for investing in security, you have less influence than Uma Thurman. And even if you have more influence than her, if you want to be in the top 5, you better be the person bringing the data.

As long as we’re hiding everything that might allow us to judge comparative effectiveness, we’re going to continue making no progress.

Ahh, but which Uma?
Update: Chris Hoff asks "But WHICH Uma? Kill Bill Uma or Pulp Fiction Uma?" and sadly, I have to answer: The Truth About Cats and Dogs Uma. You remember. Silly romantic comedy where the guy falls in love with radio veterinarian Janeane Garofalo, who's embarrassed about her looks? And Uma plays her gorgeous but vapid neighbor? That's the Uma with more influence than you. The one who spends time trying not to be bubbly when her audition for a newscaster job leads off with "hundreds of people feared dead in a nuclear accident?" Yeah. That Uma. Because at least she's nice to look at while going on about stuff no one cares about. But you know? If you show up with some chops and some useful data to back your claims, you can do better than that.

On the downside, you’re unlikely to ever be as influential as Kill Bill Uma. Because, you know, she has a sword, and a demonstrated willingness to slice the heads off of people who argue with her, and a don’t-care attitude about jail. It’s hard to top that for short term influence. Just ask the 3rd guy trying to code your app, and hoping it doesn’t crash. He’s got eyes for no one not carrying that sword.

The Diginotar Tautology Club

I often say that breaches don't drive companies out of business. Some people are asking me to eat crow because Vasco is closing its subsidiary Diginotar after the subsidiary was severely breached, failed to notify its reliant parties, misled people when it did, and then allowed perhaps hundreds of thousands of people to fall victim to a man-in-the-middle attack. I think Diginotar was an exception that proves the rule.

Statements about Diginotar going out of business are tautological. They take a single incident, selected because the company is going out of business, and then generalize from that. Unfortunately, Diginotar is the only CA that has gone out of business, and so anyone generalizing from it is starting from a set of businesses that have gone out of business. If you’d like to make a case for taking lessons from Diginotar, you must address why Comodo hasn’t gone belly up after *4* breaches (as counted by Moxie in his BlackHat talk).

It would probably also be helpful to comment on how Diginotar's revenue rate of 200,000 euros in 2011 might contribute to its corporate parent deciding that damage control is the most economical choice, and what lessons other businesses can take.

To be entirely fair, I don’t know that Diginotar’s costs were above 200,000 euros per year, but a quick LinkedIn search shows 31 results, most of whom have not yet updated their profiles.

So take what lessons you want from Diginotar, but please don’t generalize from a set of one.

I’ll suggest that “be profitable” is an excellent generalization from businesses that have been breached and survived.

[Update: @cryptoki pointed me to the acquisition press release, which indicates 45 employees, and “DigiNotar reported revenues of approximately Euro 4.5 million for the year 2009. We do not expect 2010 to be materially different from 2009. DigiNotar’s audited statutory report for 2009 showed an operating loss of approximately Euro 280.000, but an after-tax profit of approximately Euro 380.000.” I don’t know how to reconcile the press statements of January 11th and August 30th.]

15 Years of Software Security: Looking Back and Looking Forward

Fifteen years ago, I posted a copy of "Source Code Review Guidelines" to the web. I'd created them for a large bank, because at the time, there was no single document on writing or reviewing code for security that was broadly available. (This was about four years before Michael Howard and Dave LeBlanc published Writing Secure Code or Gary McGraw and John Viega published Building Secure Software.)

So I assembled what we knew, and shared it to get feedback and help others. In looking back, the document describes what we can now recognize as an early approach to security development lifecycles, covering design, development, testing and deployment. It even contains a link to the first paper on fuzzing!

Over the past fifteen years, I’ve been involved in software security as a consultant, as the CTO of Reflective, a startup that delivered software security as a service, and as a member of Microsoft’s Security Development Lifecycle team where I focused on improving the way people threat model. I’m now working on usable security and how we integrate it into large-scale software development.

So after 15 years, I wanted to look forward a little at what we’ve learned and deployed, and what the next 15 years might bring. I should be clear that (as always) these are my personal opinions, not those of my employer.

Looking Back

Filling the Buffer for Fun and Profit
I released my guidelines 4 days before Phrack 49 came out with a short article called "Smashing The Stack For Fun And Profit." Stack smashing wasn't new. It had been described clearly in 1972 by James P. Anderson in the "Computer Security Technology Planning Study," and publicly and dramatically demonstrated by the 1988 Morris Worm's exploitation of fingerd. But Aleph1's article made the technique accessible and understandable. The last 15 years have been dominated by important bugs which share the characteristics of being easily demonstrated as "undesired functionality" and being relatively easy to fix, as nothing should really depend on them.
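For readers who haven't seen the pattern, here is a minimal sketch (my illustration, not anything from the original guidelines or the Phrack article) of the kind of stack-smashing bug Aleph1 made accessible: a fixed-size buffer on the stack, filled from attacker-controlled input with no length check.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical example of a classic stack-smashing bug.
 * buf lives on the stack; strcpy() copies until it hits a NUL byte,
 * so input longer than 63 bytes overruns buf and can overwrite the
 * saved return address, which is the core of the technique. */
void greet(const char *name) {
    char buf[64];
    strcpy(buf, name);                /* no bounds check: the bug */
    printf("Hello, %s\n", buf);
}

/* A safer variant bounds the copy to the size of the destination. */
void greet_safely(const char *name) {
    char buf[64];
    snprintf(buf, sizeof(buf), "Hello, %s", name);
    puts(buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);               /* attacker-controlled input */
    return 0;
}
```

The point of the era described here is exactly that shape: the bug is easy to demonstrate as undesired functionality, and easy to fix, because nothing legitimate depends on the overflow.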

The vuln and patch cycle
As a side effect of easily demonstrated memory corruption, we became accustomed to a cycle of proof-of-concept, sometimes a media cycle and usually a vendor response that fixed the issue. Early on, vendors ignored the bug reports or threatened vulnerability finders (who sometimes sounded like they were trying to blackmail vendors) and so we developed a culture of full disclosure, where researchers just made their discoveries available to the public. Some vendors set up processes for accepting security bug reports, with a few now offering money for such vulnerabilities, and we have a range of ways to describe various approaches to disclosure. Along the way, we created the CVE to help us talk about these vulnerabilities.

In some recent work, we discovered that the phrase “CVE-style vulnerability” was a clear descriptor that cut through a lot of discussion about what was meant by “vulnerability.” The need for terms to describe types of disclosure and vulnerabilities is an interesting window into how often we talk about it.

The industrialization of security
One effect of memory corruption vulnerabilities was that it was easy to see that the unspecified functionality was a bug. Those bugs were things that developers would fix. There's a longstanding, formalist perspective that "A program that has not been specified cannot be incorrect; it can only be surprising." ("Proving a Computer System Secure") That formalist perspective held us back from fixing a great many security issues. Sometimes the right behavior was hard to specify in advance. Good specifications are always tremendously expensive (although that's sometimes still cheaper than not having them). When we started calling those things bugs, we started to fix them. And when we started to fix bugs, we got people interested in practical ways to reduce the number of those bugs. We had to organize our approaches, and discover which ones worked. Microsoft started sharing lots of its experience before I joined up, and that's helped a great many organizations get started doing software security "at scale."

Another aspect of the industrialization of security is the massive growth of security conferences. There are, again, many types. There are hacker cons, there are vulnerability researcher cons, and there are industry events like RSA. There's also a rise in academic conferences. All of these (except BlackHat-style conferences) existed in 1996, but their growth has been spectacular.

Looking forward in software security

Memory corruption
The first thing that I expect will change is our focus on memory corruption vulnerabilities. We're getting better at finding these early in development, even in weakly typed languages, and better at building platforms with randomization built in to make the remainder harder to exploit. We'll see a resurgence of command injection, design flaws, and a set of things that I'm starting to think of as feature abuse. That includes things like Autorun, Javascript in PDFs (and heck, maybe Javascript in web pages), and also things like spam.
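As a concrete illustration of the command injection class (my example, not the post's), the difference usually comes down to whether attacker-supplied input is handed to a shell to parse, or passed as a plain argument:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical command injection bug: if filename is "x; rm -rf ~",
 * the shell invoked by system() happily runs both commands. */
void list_file_unsafely(const char *filename) {
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "ls -l %s", filename);
    system(cmd);                      /* attacker input reaches the shell */
}

/* Safer: pass the filename as a single argv entry, with no shell at all. */
void list_file_safely(const char *filename) {
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", filename, (char *)NULL);
        _exit(127);                   /* only reached if exec fails */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);        /* wait for the listing to finish */
    }
}

int main(int argc, char **argv) {
    if (argc > 1) {
        list_file_unsafely(argv[1]);  /* demonstrates the injection risk */
        list_file_safely(argv[1]);    /* same task, no shell parsing */
    }
    return 0;
}
```

SQL injection, and much of what I'm calling feature abuse, has the same shape: the attacker's data ends up interpreted by a more powerful engine than the developer intended.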

Human factors
Human factors in security will become even more obviously important, as more and more decisions will be required of the person because the computer just doesn't know. Making good decisions is hard, and most of the people we'll ask to make decisions are not experts and reasonably prefer to just get their job done. We're starting to see patterns like the "gold bars" and advice like "NEAT." I expect we'll learn a lot about how attacks work, how to build defenses, and coalesce around a set of reasonable expectations of someone using a computer. Those expectations will be slimmer than security experts would prefer, but good science and data will help make reasonable expectations clear.

Embedded systems
As software gets embedded in everything, so will flaws. Embedded systems will come with embedded flaws. The problems will hit not just Apple or Android, but cars, centrifuges, medical devices, and everything with code in it. Which will be a good approximation of everything. One thing we've seen is that applying modern vulnerability finding techniques to software released without any security testing is like kicking puppies. They're just not ready for it. Actually, that's a little unfair. It's more like throwing puppies into a tank full of laser-equipped sharks. Most things will not have update mechanisms for a while, and when they do, updates will increasingly be a battleground.

Patch Trouble
Apple already forces you to upgrade to the "latest and greatest," and agree to the new EULA, before you get a security update. DRM schemes will block access to content if you haven't updated. The pressure to accept updates will be intense. Consumer protection issues will start to come up, and things like the Etisalat update for Blackberry will become more common. These combined updates will affect people's willingness to accept updates and to close windows of software vulnerability.

EULAs
EULA wars will heat up as bad guys get users to click through contracts forbidding them from removing the software. Those bad guys will include actual malware distributors, Middle Eastern telecoms companies, and a lot of organizations that fall into a grey area.

Privacy
The interplay between privacy and security will get a lot more complex and nuanced as our public discourse gets less so. Our software will increasingly be able to extract all sorts of data, but also to act on our behalf in all sorts of ways. Compromised software will scribble racist comments on your Facebook wall, and companies like Social Intelligence will store those comments for you to justify ever after.

Careers
Careers in software security will become increasingly diverse. It’s already possible to specialize in fuzzing, in static or dynamic analysis, in threat modeling, in security testing, training, etc. We’ll see lots more emerge over the next fifteen years.

Things we won’t see

We won’t see substantially better languages make the problem go away. We may move it around, and we may eliminate some of it, but PHP is the new C because it’s easy to write quick and dirty code. We’ll have cool new languages with nifty features, running on top of resilient platforms. Clever attackers will continue to find ways to make things behave unexpectedly.

A lack of interesting controversies.

(Not) looking into the abyss

There are a couple of issues I'm not touching at all. They include cloud, because I don't know what I want to say, and cyberwar, because I know what I don't want to say. I expect both to be interesting.

Your thoughts?

So, that’s what I think I’ve seen, and that’s what I think I see coming. Did I miss important stories along the way? Are there important trends that will matter that I missed?

[Editor’s note: Updated to clarify the kicking puppy analogy.]

Communicating with Executives for more than Lulz

On Friday, I ranted a bit about “Are Lulz our best practice?” The biggest pushback I heard was that management doesn’t listen, or doesn’t make decisions in the best interests of the company. I think there’s a lot going on there, and want to unpack it.

First, a quick model of getting executives to do what you want. I’ll grossly oversimplify to 3 ordered parts.

  1. You need a goal. Some decision you think is in the best interests of the organization, and reasons you think that’s the case.
  2. You need a way to communicate about the goal and the supporting facts and arguments.
  3. You need management who will make decisions in the best interests of the organization.

The essence of my argument on Friday is that 1 & 2 are often missing or under-supported. Either the decisions are too expensive given the normal (not outlying) costs of a breach, or the communication is not convincing.

I don't dispute that there are executives who make decisions in a way that's intended to enrich themselves at the expense of shareholders, that many organizations do a poor job setting incentives for their executives, or that there are foolish executives who make bad decisions. But that's a flaw in step 3, and to worry about it, we have to first succeed in 1 & 2. If you work for an organization with bad executives, you can (essentially) either get used to it or quit. For everyone in information security, bad executives are the folks above you, and you're unlikely to change them. (This is an intentional over-simplification to let me get to the real point. Don't get all tied in knots, 'k thanks.)

Let me expand on insufficient facts, the best interests of the organization and insufficient communication.

Having sufficient facts means that you have the data you need to convince an impartial, or even a somewhat partial, world that there's a risk tradeoff worth making. That if you invest in A over B, the expected cost to the organization will fall. And if B is an investment in raising revenues, then the odds of the loss that A prevents are high enough that it's worth forgoing the revenue from B in order to avoid that loss. Insufficient facts is a description of what happens because we keep most security problems secret. It happens in several ways, prominent amongst them that we can't really do a good job of calculating probabilities or losses, and that we have distorted views of those probabilities and losses.
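To make that tradeoff concrete, here is a toy version of the comparison. Every number below is invented purely for illustration; the point is the shape of the argument, not the values.

```c
#include <stdio.h>

/* Toy expected-loss comparison. All numbers are assumptions made up
 * for illustration; the real work is being able to defend them. */
int main(void) {
    double p_breach_now    = 0.30;   /* assumed annual odds of the loss today        */
    double p_breach_with_a = 0.10;   /* assumed odds after funding security work A   */
    double breach_cost     = 2.0e6;  /* assumed cost of the loss, in dollars         */
    double revenue_from_b  = 4.0e5;  /* assumed revenue if we fund project B instead */

    double expected_savings_a = (p_breach_now - p_breach_with_a) * breach_cost;

    printf("Expected annual savings from A: $%.0f\n", expected_savings_a);
    printf("Expected annual gain from B:    $%.0f\n", revenue_from_b);
    printf("On these assumptions, fund %s.\n",
           expected_savings_a > revenue_from_b ? "A" : "B");
    return 0;
}
```

The trouble, as argued above, is that we rarely have defensible values for those probabilities and losses, precisely because we keep most security problems secret.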

Now, one commenter, "hrbrmstr," said: "I can't tell you how much a certain security executive may have tried to communicate the real threat actor profile (including likelihood & frequency of threat action)…" And I'll say, I'm really curious how anyone is calculating frequency of threat action. What's the numerator and denominator in the calculation? I ask not because it's impossible (although it may be quite hard in a blog comment) but because the "right" values to use for those are subject to discussion and interpretation. Is it all companies in a given country? All companies in a sector? All attacks? Including port scans? Do you have solid reasons to believe something is really in the best interests of the organization? Do they stand up to cross-examination? (Incidentally, this is a short form of an argument that we make in chapter 4 of The New School of Information Security, the book which inspired this blog.)

I’m not saying that hrbrmstr has the right facts or not. I’m saying that it’s important to have them, and to be able to communicate about why they’re the right facts. That communication must include listening to objections that they’re not the right ones, and addressing those. (Again, assuming a certain level of competence in management. See above about accept or quit.)

Shifting to insufficient communication, this is what I meant by the lulzy statement “We’re being out-communicated by people who can’t spell.” Communication is a two-way street. It involves (amongst many other things) formulating arguments that are designed to be understood, and actively listening to objections and questions raised.

Another commenter, “Hmmm” said, “I’ve seen instances where a breach occurred, the cause was identified, a workable solution proposed and OK’d… and months or years later a simple configuration change to fix the issue is still not on the implementation schedule.”

There are two ways I can interpret this. The first is that “Hmmm’s” idea of simple isn’t really simple (insofar as it breaks something else). Perhaps fixing the breach is as cheap and easy as fixing the configurations, but there are other, higher impact things on the configuration management todo list. I don’t know how long that implementation schedule is, nor how long he’s been waiting. And perhaps his management went to clown school, not MBA school. I have no way to tell.

What I do know is that often the security professionals I’ve worked with don’t engage in active listening. They believe their path is the right one, and when issues like competing activities in configuration management are brought up, they dismiss the issue and the person who raised it. And you might be right to do so. But does it help you achieve your goal?

Feel free to call me a management apologist, if that’s easier than learning how to get stuff done in your organization.

Would a CISO benefit from an MBA education?

If a CISO is expected to be an executive officer (especially for a large, complex technology- or information-centered organization), then he or she will need MBA-level knowledge and skills. An MBA is one path to getting those skills, at least if you are thoughtful and selective about the school you choose. Other paths are available, so it's not just about the MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then an MBA education wouldn't be of much value.

This question was raised recently in an article by Upasana Gupta: Should a CISO Have an MBA? She asked four CISOs their opinion, and three essentially said "no", while one said "yes". Eric, at the Security, Cigars, and FUD blog, posted his opinion here and here. Basically, he said "no, it's not necessary as a credential, but some business knowledge might be helpful". The opinions offered on Twitter were almost universally "no".

As a business guy, I was somewhat surprised that much of the discussion and opinion centered on the MBA as a credential rather than on what knowledge or skills someone would learn in an MBA program. None of us at the New School are fans of credentials as such, so my interest in this question is in the educational value compared to alternative investments in education.

Also following the New School philosophy, I thought I would look for data and evidence rather than just offering my opinion.

To my delight, I found a fairly comprehensive study: THE CHIEF INFORMATION SECURITY OFFICER: AN ANALYSIS OF THE SKILLS REQUIRED FOR SUCCESS, by Dwayne Whitten of Texas A&M University. The paper is worth reading because it gives a good overview of the conflicting values and forces that are affecting CISO hiring, evaluation, and effectiveness.

Specifically, he finds a gap between how CISOs define success and the job duty descriptions. Quoting from his conclusion:

Based on a thorough review of the literature, interviews with security executives, and an analysis of job listings, a comprehensive list of duties and background/experience requirements were found related to the CISO position (see Table 3). The most interesting issue that arose from this research is that business strategy did not make the list of most included job duties. Given the high level of importance given to this by the literature and the executives, it is surprising that it was not listed on the job listings surveyed. Thus, it appears that many of the organizations searching for new CISOs during the research period did not fully understand the importance of including the CISO in the business strategy formulation.  [emphasis added]

This dichotomy seems to relate to how CISOs are viewed. From one point of view, the CISO is equivalent to the "Most Senior Information Security Manager." That is, they contribute to the organization in exactly the same way as other information security managers do, only at a larger scope. It is this perspective that is most closely aligned with the opinion that an MBA education would not be helpful. Instead, it would be more valuable to get deeper education in InfoSec technical aspects — engineering, forensics, incident response — plus regulations, compliance, etc.

Another point of view is that a CISO is an executive officer of the organization, and thus has fiduciary duties to stakeholders regarding the organization’s overall performance, and also has teamwork responsibilities with the other executive officers regarding crucial strategic decisions.

Maybe this is rare in practice, and maybe the "Chief Information Security Officer" title is just another example of rampant job title inflation. But if CISOs in some organizations are expected to perform in this role, then they are not "just another information security manager, only bigger." Their job is qualitatively different, and the knowledge gained at a good quality B-school might be just what they need.

To respond to Eric, who said "And I've yet to see a course on security risk management in traditional MBA programs", I offer two examples: 1) James Madison University offers an MBA in Information Security. 2) Worcester Polytechnic Institute offers an MBA concentration in Information Security Management. The WPI MBA course catalog lists quite a few courses that would be directly valuable to a CISO (e.g. "INFORMATION SECURITY MANAGEMENT", "OPERATIONS RISK MANAGEMENT", and "E-BUSINESS APPLICATIONS"), plus many that would be indirectly valuable (statistics, change management, negotiations). (Disclosure: I got my undergraduate degree from WPI. Their MBA program is very good, esp. for technical managers.)

I'll close with a comprehension test for CISOs. Read this workshop report: Embedding Information Security Risk Management into the Extended Enterprise. It's the output of 18 CISOs discussing the most challenging issues facing them regarding information security across their enterprise and across their supply chain.

I think you'll see that most of the problems involve analysis and methods that go well beyond the typical education and experience of information security managers. Instead, they require knowledge and skills that are more typically covered in MBA programs — business strategy, economics, finance, organizational behavior and change management, organizational performance management and incentives, plus business law and public policy.

Conclusion: if a CISO is expected to be an executive officer (especially for a large, complex technology- or information-centered organization), then he or she will need the knowledge and skills exemplified by the comprehension exercise above. An MBA is one path to getting those skills, at least if you are thoughtful and selective about the school you choose. Other paths are available, so it's not just about the MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then an MBA education wouldn't be of much value.