Category: careers

Communicating with Executives for more than Lulz

On Friday, I ranted a bit about “Are Lulz our best practice?” The biggest pushback I heard was that management doesn’t listen, or doesn’t make decisions in the best interests of the company. I think there’s a lot going on there, and want to unpack it.

First, a quick model of getting executives to do what you want. I’ll grossly oversimplify to 3 ordered parts.

  1. You need a goal. Some decision you think is in the best interests of the organization, and reasons you think that’s the case.
  2. You need a way to communicate about the goal and the supporting facts and arguments.
  3. You need management who will make decisions in the best interests of the organization.

The essence of my argument on Friday is that 1 & 2 are often missing or under-supported. Either the decisions are too expensive given the normal (not outlying) costs of a breach, or the communication is not convincing.

I don’t dispute that there are executives who make decisions in a way that’s intended to enrich themselves at the expense of shareholders, that many organizations do a poor job setting incentives for their executives, or that there are foolish executives who make bad decisions. But that’s a flaw in step 3, and to worry about it, we have to first succeed in 1 & 2. If you work for an organization with bad executives, you can (essentially) either get used to it or quit. For everyone in information security, bad executives are the folks above you, and you’re unlikely to change them. (This is an intentional over-simplification to let me get to the real point. Don’t get all tied in knots, k’thanks.)

Let me expand on insufficient facts, the best interests of the organization and insufficient communication.

Sufficient facts means that you have the data you need to convince an impartial, or even a somewhat partial, world that there’s a risk tradeoff worth making: that if you invest in A over B, the expected cost to the organization will fall. And if B is an investment in raising revenues, then the odds of the loss that A prevents must be sufficiently high that it’s worth taking the risk of not raising revenue and accepting the cost of A. “Insufficient facts” describes what happens because we keep most security problems secret. It happens in several ways, prominent amongst them that we can’t really do a good job of calculating probabilities or losses, and that we have distorted views of those probabilities and losses.
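The A-versus-B tradeoff above is, at bottom, an expected-loss comparison, and it’s worth seeing how small the arithmetic really is. A minimal sketch — every probability and dollar figure below is invented purely for illustration:

```python
# Toy expected-loss comparison: security investment A vs. revenue project B.
# All probabilities and dollar figures are invented for illustration.

breach_probability = 0.05          # annual odds of the breach that A would prevent
breach_loss = 2_000_000            # loss if that breach occurs
cost_of_a = 150_000                # cost of security investment A

expected_loss_without_a = breach_probability * breach_loss   # expected annual loss: 100,000
net_cost_of_a = cost_of_a - expected_loss_without_a          # A costs 50,000 more than it saves

expected_revenue_from_b = 300_000  # what revenue project B is expected to raise

# On these (made-up) numbers, A costs more than the expected loss it prevents,
# and forgoes B's revenue on top of that. This is the kind of arithmetic an
# executive will run whether or not security hands them the inputs.
print(expected_loss_without_a, net_cost_of_a, expected_revenue_from_b)
```

The point isn’t the specific numbers — it’s that without defensible inputs for the probability and loss terms, security can’t win this comparison, which is exactly the “insufficient facts” problem.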

Now, one commenter, “hrbrmstr” said: “I can’t tell you how much a certain security executive may have tried to communicate the real threat actor profile (including likelihood & frequency of threat action)…” And I’ll say, I’m really curious how anyone is calculating frequency of threat action. What’s the numerator and denominator in the calculation? I ask not because it’s impossible (although it may be quite hard in a blog comment) but because the “right” values to use for those are subject to discussion and interpretation. Is it all companies in a given country? All companies in a sector? All attacks? Including port-scans? Do you have solid reasons to believe something is really in the best interests of the organization? Do they stand up to cross-examination? (Incidentally, this is a short form of an argument that we make in chapter 4 of the New School of Information Security, the book that inspired this blog.)
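To see why the denominator question matters so much, here’s a toy sketch of how the very same incident count yields “frequencies” that differ by orders of magnitude depending on which population you divide by. All counts are invented for illustration:

```python
# Same numerator, different denominators: the computed "frequency of threat
# action" swings by orders of magnitude. All counts below are invented.

incidents_observed = 40  # breaches reported this year (the numerator)

# Three defensible-sounding choices of denominator:
denominators = {
    "all companies in the country": 6_000_000,
    "all companies in the sector": 50_000,
    "companies of similar size in the sector": 800,
}

for population, count in denominators.items():
    annual_frequency = incidents_observed / count
    print(f"{population}: {annual_frequency:.4%} per company per year")
```

Each choice is arguable, and the answer changes by a factor of thousands — which is why the “right” values have to survive cross-examination before the number means anything.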

I’m not saying that hrbrmstr has the right facts or not. I’m saying that it’s important to have them, and to be able to communicate about why they’re the right facts. That communication must include listening to objections that they’re not the right ones, and addressing those. (Again, assuming a certain level of competence in management. See above about accept or quit.)

Shifting to insufficient communication, this is what I meant by the lulzy statement “We’re being out-communicated by people who can’t spell.” Communication is a two-way street. It involves (amongst many other things) formulating arguments that are designed to be understood, and actively listening to objections and questions raised.

Another commenter, “Hmmm” said, “I’ve seen instances where a breach occurred, the cause was identified, a workable solution proposed and OK’d… and months or years later a simple configuration change to fix the issue is still not on the implementation schedule.”

There are two ways I can interpret this. The first is that “Hmmm’s” idea of simple isn’t really simple (insofar as it breaks something else). Perhaps fixing the breach is as cheap and easy as fixing the configurations, but there are other, higher impact things on the configuration management todo list. I don’t know how long that implementation schedule is, nor how long he’s been waiting. And perhaps his management went to clown school, not MBA school. I have no way to tell.

What I do know is that often the security professionals I’ve worked with don’t engage in active listening. They believe their path is the right one, and when issues like competing activities in configuration management are brought up, they dismiss the issue and the person who raised it. And you might be right to do so. But does it help you achieve your goal?

Feel free to call me a management apologist, if that’s easier than learning how to get stuff done in your organization.

Would a CISO benefit from an MBA education?

If a CISO is expected to be an executive officer (esp. for a large, complex technology- or information-centered organization), then he or she will need MBA-level knowledge and skills. An MBA is one path to getting those skills, at least if you are thoughtful and selective about the school you choose. Other paths are available, so it’s not just about the MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then an MBA education wouldn’t be of much value.


A Letter from Sid CRISC – ious

In the comments to “Why I Don’t Like CRISC,” where I challenge ISACA to show us, using valid scales and publicly available models, the risk reduction from COBIT adoption, reader Sid starts to get it, but then kinda devolves into a defense of COBIT or something.  But it’s a great comment, and I wanted to address his comments and clarify my position a bit.  Sid writes:


Just imagine (or try at your own risk) this –

Step 1. Carry out risk assessment
Step 2. In your organisation, boycott all COBiT recommendations / requirements for 3-6 months
Step 3. Carry out risk assessment again

Do you see increase in risk? If Yes, then you will agree that adoption of Cobit has reduced the risk for you so far.

You might argue that its ‘a Control’ that ultimately reduces risk & not Cobit.. however I sincerely feel that ‘effectiveness’ of the control can be greatly improved by adopting cobit governance framework & Improvement of controls can be translated into reduced risk.

I can go on writting about how cobit also governs your risk universe, but I am sure you are experienced enough to understand these overlapping concepts without getting much confused.

Nice try, Sid!  However, remember my beef is that Information Risk Management isn’t mature enough.  Thus I’ve asked for “valid scales” (i.e. not multiplication or division using ordinal values) and publicly available models (because the state of our public models best mirrors the maturity of the overall population of risk analysts).
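That aside about “valid scales” deserves a concrete illustration, since it comes up again below. Here’s a minimal sketch (all ratings invented) of why multiplying ordinal scores — the classic “likelihood × impact” heat-map move — isn’t valid arithmetic: an order-preserving relabeling of the same scale can flip which risk ranks higher.

```python
# Why "risk = likelihood x impact" on ordinal scales is not valid arithmetic.
# Ordinal ratings only promise order, not equal spacing: "impact 4" is not
# twice "impact 2". Stretching the labels while keeping their order can
# reverse the ranking. All ratings below are invented.

risk_a = {"likelihood": 5, "impact": 2}
risk_b = {"likelihood": 3, "impact": 4}

def score(risk):
    """Naive ordinal multiplication, as seen in many risk matrices."""
    return risk["likelihood"] * risk["impact"]

print(score(risk_a), score(risk_b))    # 10 vs 12: B ranks higher

# Relabel the impact scale, preserving order (1<2<3<4<5 becomes 1<4<5<6<7).
relabel = {1: 1, 2: 4, 3: 5, 4: 6, 5: 7}

def score_relabeled(risk):
    return risk["likelihood"] * relabel[risk["impact"]]

print(score_relabeled(risk_a), score_relabeled(risk_b))  # 20 vs 18: now A ranks higher
```

Both labelings encode exactly the same ordinal information, yet the “math” disagrees about which risk is bigger — which is the whole problem with doing multiplication on ordinal values.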

And that’s my point: even if I *give* you the fact that we can make proper point predictions for a complex adaptive system (which I would argue we can’t, thus nullifying every IT risk approach I’ve ever seen), there isn’t a publicly available model that can do Steps One and Three in a defensible manner.  Yet ISACA seems hell-bent on pushing forth some sort of certification (money talks?).  This despite our industry’s inability to even use the correct numerical scales in risk assessment, much less actually perform risk assessment in a manner that can be used to govern at a strategic level, or even show an ability to identify key determinants in a population.

Seriously, if you can’t put two analysts with the same information in two separate rooms and have them arrive at the same conclusions given the same data – how can you possibly “certify” anything other than “this person is smart enough to know there isn’t an answer”?


I want to make one thing clear.  My beef isn’t with ISACA, it’s not with COBIT, it’s not with audit.  I think all three of these things are awesome to some degree for some reasons.  And especially, Sid, my beef isn’t with COBIT – I’m a big process weenie these days because the data we do have (see Visible Ops for Security) suggest that maturity is a risk-reducing determinant.  However, this is like a doctor telling a fat person that they should exercise, based on a vague association in a published study of some bias.  How much, what kind, and the absolute effectiveness compared to their existing lifestyle (and especially how to change that lifestyle, if it is a cause) are still very much guesses.  It’s an expert opinion (if I can call myself an expert), not a scientific fact.

In the same way, your assertion about COBIT fails reasoned scrutiny.  First, there is the element of “luck.”  In what data we do have, we see a pretty even spread in data breach frequency between determined and random attackers.  That latter aspect means it’s entirely likely that we could dump COBIT and NOT see worse outcomes (whether that would constitute an increase in risk is another philosophical argument for another day).

Second, maybe it’s my “lack of experience,” but I will admit that I am very confused these days as to a proper definition of IT Security Governance.  Here’s why: there are many definitions (formal and informal) I’ve read of what ITSec G is.  If you argue that it is simply the assignment of responsibility, that’s fine.  If you want to call it a means to mature an organization in order to reduce risk (as you do above), then we have to apply proper scrutiny to maturity models, and to how the outcomes of those models influence risk assessment outcomes (the wonderful part of your comment is the absolute recognition of this).  If you want to call it a means to maturity, or if ITSec G is an enumeration of the actual processes that “must” be done, then we get to ask “why.”  And once that happens, well, I’m from Missouri – you have to show me.  And then we’re back into risk modeling, which, of course, we’re simply very immature at.

Any way I look at it, Sid, I can’t see how we’re ready for a certification around Information Risk Management.

Side Note: My problem with IT Security Governance is this: If at any point there needs to be some measuring and modeling done to create justification of codified IT Security Governance, then the Governance documentation is really just a model that says “here’s how the world should work” and as a model requires measuring, falsification, and comparative analytics. In other words, it’s just management.  In this case, the management of IT risk, which sounds like a nice way of saying “risk management”.

Don't fight the zeitgeist, CRISC Edition

Some guy recently posted a strangely self-defeating link/troll/flame in an attempt to (I think) argue with Alex and/or myself regarding the relevance or lack thereof of ISACA’s CRISC certification.  Now given that I think he might have been doing it to drive traffic to his CRISC training site, I won’t show him any link love (although I’m guessing he’ll show up in comments and save me the effort).  Still, he called my Dad (“Mr. Howell”) out by name, which is a bit cheeky seeing as how my Dad left the mortal coil some time ago, so I’ll respond on Dear ol’ Dad’s behalf.

Now the funny thing about that is that I had pretty much forgotten all about CRISC, even though we’ve had a lot of fun with it here at the New School and made what I thought were some very good points about the current lack of maturity in Risk Management and why the last thing we need is another process-centric certification passing itself off as expertise.

I went back and re-read the original articles, and I think that they are still spot-on, so I decided that I would instead take another look at CRISC-The-Popularity-Contest and see who has turned out to be right in terms of CRISC’s relevance now that it’s been nine months almost to the day since ISACA announced it.

Quick, dear readers, to the GoogleCave!

Hmm…CRISC isn’t doing so well in the Long Run.  That’s a zero (0) for the big yellow crisc.

Of course, in the Long Run, we’re all dead, so maybe I should focus on a shorter time frame. Also, I see that either Crisco’s marketing team only works in the fall or most people only bake around the holidays.  If you had asked me, I would not have predicted that Crisco had a strong seasonality to it, so take whatever I say here with a bit of shortening.

Let’s try again, this time limiting ourselves to the past 12 months.

Nope…still nothing, although the decline of the CISSP seems to have flattened out a bit.  Also, we can definitely now see the spike in Crisco searches correlating to Thanksgiving and Christmas.  Looks like people don’t bake for Halloween (Too bad.  Pumpkin bread is yummy) and probably don’t bake well for Thanksgiving and Christmas if they have to google about Crisco.

Oh, well.  Sorry, CRISC.

Now, if you’ll excuse me, I have a cake to bake.

P.S.  Yes, I’m aware my screenshots overflow the right margin.  No, I’m not going to fix it.


PREFACE:  You might interpret this blog post as being negative about risk management, dear readers.  Don’t.  This isn’t a diatribe against IRM, only about why “certification” around information risk is a really, really silly idea.
Apparently, my blog about why I don’t like the idea of CRISC has long-term stickiness.  Just today, Philip writes in the comments:
Lets be PROACTIVE instead of critical. I would love to hear about what CAN be a better job practice and skill set that is needed. I am working on both the commercial and Department of Defense and develop programs for training and coaching the skills from MBA to IT Audit and all of technical security for our Certification of Information Assurance Workforce and conduct all the CISM/CISA training and review courses for ISACA in both commercial and military environments. I have worked on Risk Management for years at ERM as well as IT Security/Risk, and A common theme in all of this is RISK MANAGEMENT. When I discuss the Value of IT with MBA students or discuss CMMI with MIS students or development houses, or discuss why ITIL/Cobit or other discuss with business managers what will keep them from reaching their goals and objectives, it is ALL risk management put into a different taxonomy that that particular audience can understand.
I have not been impressed with the current Risk Management certifications that are available. I did participate in the job task analysis of ISACA (which is a VERY positive thing about how ISACA keeps their certifications) more aligned to practice. It is also not perfect, but I think it is a start. If we contribute instead of just complain, it can get better, or we can create something better. What can be better?
So Alex I welcome a personal dialog with you or others on what and how we can do it better. I can host a web conference and invite all who want to participate (upto 100 attendee capacity).
I’ll take you up on that offer, Philip.  Unfortunately, it’s going to be a very short Webex, because the answer is simple, “you can’t do risk certification better because you shouldn’t be doing it in the first place.”
That was kind of the point of my blog posts.
Just to be clear:
In IT I’m sort of seeing 2 types of certifications:
  1. Process based certifications (I can admin a checkpoint firewall, or active directory or what not)
  2. Domain knowledge based certifications (CISA, CISM)
The problems with a risk management certification are legion.  But to highlight a few in the context of Certifying individuals:
A).  Information Risk Management is not an “applied” practice of two domains.  CISM, CISA, and similar certs are mainly “you know how to do X – now apply it to InfoSec.”  IRM, done with more than a casual hand wave towards following a process because you have to, is much more complex than these, requiring more than just mashing up, say, “management” and “security,” or “auditing” and “security.”
(In fact, I’d argue that IRM shouldn’t be part of an MIS course load; rather, it should be its own track, with heavier influences from probability theory, history of science, complexity theory, economics, and epidemiology than from, say, Engineering, Computer Science, or MIS.)
B).  IRM is not a “process”. Now obviously certain risk management standards are a process. In my opinion, most risk management standards are nothing BUT a re-iteration of a Plan/Do/Check/Act process. And just to be clear, I have no problems if you want to go get certified in FAIR or OCTAVE or Blahdity-Blah – I’m all for that.  That shows that you’ve studied a document and can regurgitate the contents of that document, presumably on demand, and within the specific subjective perspective of those who taught you.
And similarly, if ISACA wants to “certify” that someone can take their RiskIT document and be a domain expert at it, groovy.  Just don’t call that person “Certified in Risk and Information Systems Control™” because they’re not.  They’re “Certified in our expanded P/D/C/A cycle that is yet another myopic way to list a bajillion risk scenarios in a manner you can’t possibly address before the Sun exhausts its supply of helium”™.
Look, as my challenge to quantify the impact of risk reduction of a COBIT program suggests, IRM is more than these standards.
And I gotta be clear here: you’ve hit a pet peeve of mine, the whole “Let’s be PROACTIVE” thing.  First, criticism and disproof are part of the natural evolution of ideas.  To act like they aren’t is kinda bogus.  And like I said above, you’re assuming that there is something we should be doing about individual certification instead of CRISC – but THERE ISN’T AN ALTERNATIVE, AND THERE SHOULDN’T BE.  You’re saying, “let’s verify people can ride their Unicorns properly into Chernobyl” and assuming I’m saying, you know, “maybe we shouldn’t ride Unicorns.”  I’m not.  I’m saying “we shouldn’t go to Chernobyl, regardless of the means of transportation.”
And in terms of what we CAN do, well in my eyes – that’s SOIRA.  Now don’t get me wrong, as best as I understood Jay’s vision, it’s not a specific destination, it’s just a destination that isn’t Chernobyl.  I don’t know where it is going yet Phil, but I’m optimistic that Kevin, Jay, John, and Chris are pretty capable of figuring it out, and doing so because of passion, not because they want to sell more memberships, course materials, or certifications.  Either way, I’m just along for the ride, interested in driving when others get tired and playing a few mix tapes along the way.

How to Get Started In Information Security, the New School Way

There have been a spate of articles lately with titles like “The First Steps to a Career in Information Security” and “How young upstarts can get their big security break in 6 steps.”

Now, neither Bill Brenner nor Marisa Fagan are dumb, but both of their articles miss the very first step. And it’s important to talk about that first step when talking about first steps in a career:

Do something useful.

Some ideas:

  • Write a new tool
  • Add an awesome UI to an existing tool
  • Break something interesting and responsibly disclose it*
  • Get more data out there
  • Analyze existing data in a new and thought-provoking way

We have enough people in infosec who are famous for being famous, or famous for being controversial. If you want to stand out from the pack, do something to move the field forward. Share useful work.

You’ll stand out a lot better than people adding to the chorus.

* You want to disclose it responsibly because it avoids a whole silly debate which detracts from attention to your work.

On the Assimilation Process

Three years and three days ago I announced that “I’m Joining Microsoft.” While I was interviewing, my final interviewer asked me “how long do you plan to stay?” I told him that I’d make a three year commitment, but I really didn’t know. We both knew that a lot of senior industry people have trouble finding a way to be effective in Microsoft’s culture.

So I wanted to pipe up and say I’m having a heck of a lot of fun, and have found places and ways to be effective. I’m getting to develop and share things like our SDL Threat Modeling Tool, and I get to be very transparent about the drivers and decisions that shape it. I’ve got some even cooler stuff in the pipeline, which I’m hoping will be public in the next year or so. My management (which has shifted a little) is supportive of me having two external blogs.

It’s been a heck of a ride so far. Dennis Fisher asked a great question to close this Hearsay Podcast: what surprised me the most? I was a little surprised by the question, but I’m going to stand by my answer, which is the intensity and openness of internal debate, and how it helps shape the perception that we’re all reading from the same script. It’s because we’ve seen the debate play out, with really well-informed participants, and remember which points were effective.

I can’t wait to see what happens in the next three years.