What Boards Want in Security Reporting

[Image: a sub-optimal security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

“More than three in five board members say they are both significantly or very ‘satisfied’ (64%) and ‘inspired’ (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.”

Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed, who report that they understand everything they’re being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it’s hard to survey someone to see if they don’t understand, and the questions aren’t listed in the survey.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)

PCI & the 166816 password

This was a story back around RSA, but I missed it until RSnake brought it up on Twitter: “[A default password] can hack nearly every credit card machine in the country.” The simple version is that Charles Henderson of Trustwave found that “90% of the terminals of this brand we test for the first time still have this code.” (Slide 30 of RSA deck.) Wow.

Now, I’m not a fan of the “ha-ha in hindsight” or “that’s security 101!” responses to issues. In fact, I railed against it in a blog post in January, “Security 101: Show Your List!”

But here’s the thing. Credit card processors have a list. It’s the Payment Card Industry Data Security Standard. That standard is imposed, contractually, on everyone who processes payment cards through the big card networks. In version 3, requirement 2 is “Do not use vendor-supplied defaults for system passwords.” This is not an obscure sub-bullet. As far as I can tell, it is not a nuanced interpretation, or even an interpretation at all. In fact, testing procedure 2.1.a of v3 of the standard says:

2.1.a Choose a sample of system components, and attempt to log on (with system administrator help) to the devices and applications using default vendor-supplied accounts and passwords, to verify that ALL default passwords (including those on … POS terminals…) have been changed. [I’ve elided a few elements of the list for clarity.]

Now, the small merchant may not be aware that their terminal has a passcode. They may have paid someone else to set it up. But shouldn’t that person have set it up properly? The issue is not simply that the passcodes go unreset; what worries me is that the system appears broken. We appear to have evidence that getting security right requires activity by busy, possibly undertrained people. Why is that still required ten years into PCI?

This isn’t a matter of “checklists replacing security.” I have in the past railed against checklists (including in the book this blog is named after). But after I read “The Checklist Manifesto”, I’ve moderated my views a bit, such as in “Checklists and Information Security.” Here we have an example of exactly what checklists are good for: avoiding common, easy-to-make and easy-to-check mistakes.

When I raised some of these questions on Twitter, someone said that the usual interpretation is that the site selects the sample (where they lack confidence). And to an extent, that’s understandable, and I’m working very hard to avoid hindsight bias here. But I think it’s not hindsight bias to say that a sample should be a random sample unless there’s a very strong reason to choose otherwise. I think it’s not hindsight bias to note that your financial auditors don’t let you select which transactions are audited.
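To make the sampling point concrete, here is a minimal sketch in Python, with a made-up terminal inventory and made-up passcodes standing in for a real estate. The only point it illustrates is that the assessor draws the sample at random rather than letting the site choose it; the real testing procedure 2.1.a is a manual login attempt done with administrator help, not a script.

```python
import random

# Hypothetical inventory of POS terminals and their current passcodes.
# These values are illustrative, not from any real assessment.
terminals = {
    "pos-001": "166816",   # vendor default, never changed
    "pos-002": "493817",
    "pos-003": "166816",
    "pos-004": "811203",
    "pos-005": "166816",
}
VENDOR_DEFAULTS = {"166816"}

def draw_sample(inventory, k, seed=None):
    """The assessor, not the site, draws the sample at random."""
    rng = random.Random(seed)
    return rng.sample(sorted(inventory), k=min(k, len(inventory)))

def audit_default_passwords(inventory, k=3, seed=None):
    sample = draw_sample(inventory, k, seed)
    # True means the vendor default passcode is still in place.
    return {t: inventory[t] in VENDOR_DEFAULTS for t in sample}

if __name__ == "__main__":
    for terminal, still_default in audit_default_passwords(terminals, seed=1).items():
        print(terminal, "DEFAULT PASSWORD STILL SET" if still_default else "ok")
```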

Someone else pointed out that it’s “first time audits,” which is fair, but why, a decade after PCI 1.0, is this still an issue? Shouldn’t the vendor have addressed this by now? Admittedly, it may be hard to manage device PINs at scale — if you’re Target with tens of thousands of PIN pads, and your techs need access to the PINs, what do you do to ensure that the tech has access while at the register, but not after they’ve quit? But even given such challenges, shouldn’t the overall payment card security system be forcing a fix to such issues?
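PCI doesn’t prescribe how a large merchant should solve that, but as a sketch of one possible approach (every class and name here is hypothetical): keep device passcodes in a central vault, give techs short-lived leases on them, and rotate anything a departing tech has seen, so correctness doesn’t depend on someone remembering to revoke access.

```python
import secrets
import time

class DevicePinVault:
    """Hypothetical central store of per-device admin passcodes.
    Techs never hold long-lived PINs; they check one out for a short window."""

    def __init__(self, lease_seconds=900):
        self._pins = {}        # device_id -> current passcode
        self._leases = {}      # (tech_id, device_id) -> lease expiry time
        self.lease_seconds = lease_seconds

    def rotate(self, device_id):
        """Set a fresh random passcode for a device."""
        self._pins[device_id] = f"{secrets.randbelow(10**6):06d}"
        return self._pins[device_id]

    def checkout(self, tech_id, device_id):
        """Grant a tech the current passcode for a limited time."""
        if device_id not in self._pins:
            raise KeyError("unknown device")
        self._leases[(tech_id, device_id)] = time.time() + self.lease_seconds
        return self._pins[device_id]

    def is_active(self, tech_id, device_id):
        return time.time() < self._leases.get((tech_id, device_id), 0)

    def offboard(self, tech_id):
        """When a tech leaves, drop their leases and rotate anything they saw."""
        touched = [d for (t, d) in self._leases if t == tech_id]
        self._leases = {k: v for k, v in self._leases.items() if k[0] != tech_id}
        for device_id in touched:
            self.rotate(device_id)
```

The design choice worth noticing is that access expires on its own; the risky step (remembering to clean up after a departure) becomes a rotation, not a revocation.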

All bellyaching and sarcastic commentary aside, if the PCI process isn’t catching this, what does that tell us? Serious question. I have ideas, but I’m really curious as to what readers think. Please keep it professional.

Analyzing The Army's Accidental Test

According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large scale natural experiment in “does it matter?”

Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs.)

With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.
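As a sketch of what that analysis could look like (the incident and device counts below are placeholders, not anything from the IG), comparing per-device incident rates across branches is only a few lines of Python:

```python
import math

# Placeholder numbers, purely illustrative: incidents observed over the same
# period, and the number of mobile devices fielded by each branch.
branches = {
    "Army":      {"incidents": 24, "devices": 14000},
    "Navy":      {"incidents": 18, "devices": 11000},
    "Air Force": {"incidents": 15, "devices": 9000},
}

def rate_per_1000(b):
    return 1000.0 * b["incidents"] / b["devices"]

def rate_ratio_z(a, b):
    """Approximate z-statistic for comparing two Poisson rates
    (log rate-ratio with a normal approximation)."""
    ra, rb = a["incidents"] / a["devices"], b["incidents"] / b["devices"]
    log_rr = math.log(ra / rb)
    se = math.sqrt(1.0 / a["incidents"] + 1.0 / b["incidents"])
    return log_rr / se

for name, data in branches.items():
    print(f"{name:10s} {rate_per_1000(data):.2f} incidents per 1,000 devices")

z = rate_ratio_z(branches["Army"], branches["Navy"])
print(f"Army vs Navy rate-ratio z = {z:.2f}")  # |z| > ~2 suggests a real difference
```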

So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.

Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)

However, based on the Wired article, this is not a report about breaches or bad outcomes, it’s a story about controls and control objectives.

Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.

Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.

It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.

[/update]

Compliance Lessons from Lance, Redux

Not too long ago, I blogged about “Compliance Lessons from Lance.” And now, there seems to be dramatic evidence of a massive program to fool the compliance system. For example:

Team doctors would “provide false declarations of medical need” to use cortisone, a steroid. When Armstrong had a positive corticosteroid test during the 1999 Tour de France, he and team officials had a doctor back-date a prescription for cortisone cream for treating a saddle sore. (CNN)

and

The agency didn’t say that Armstrong ever failed one of those tests, only that his former teammates testified as to how they beat tests or avoided the test administrators altogether. Several riders also said team officials seemed to know when random drug tests were coming, the report said. (CNN)

Apparently, this Lance and doping thing is a richer vein than I was expecting.

Reading about how Lance and his team managed the compliance process reminds me of what I hear from some CSOs about how they manage compliance processes.

In both cases, there’s an aggressive effort to manage the information made available, and to ensure that the picture that the compliance folks can paint is at worst “corrections are already underway.”

Serious violations become not something to be addressed, but part of an us-vs-them team formation. Management supports or drives a frame that puts compliance in conflict with the business goals.

But we have compliance processes to ensure that sport is fair, or that the business is operating with some set of meaningful controls. The folks who impose the business compliance regime are (generally) not looking to drive make-work. (The folks doing the audit may well be motivated to make work, especially if that additional work is billable.)

When it comes out that the compliance framework is being managed this aggressively, people look at it askance.

In information security, we can learn an important lesson from Lance. We need to design compliance systems that align with business goals, whether those are winning a race or winning customers. We need compliance systems that are reasonable, efficient, and administered well. The best way to do that is to understand which controls really impact outcomes.

For example, Gene Kim has shown that three controls out of the 63 in COBIT are key, predicting nearly 60% of IT security, compliance, operational and project performance. That research, which benchmarked over 1,300 organizations, is now more than five years old, but the findings (and the standard) remain unchanged.

If we can’t get to reality-checking our standards, perhaps drug testing them would make sense.

Compliance Lessons from Lance

Recently, Lance Armstrong decided to forgo arbitration in his fight against the USADA over allegations of his use of certain performance enhancing drugs. His statement is “Full text of Armstrong statement regarding USADA arbitration.” What I found interesting about the story is the contrast between what might be termed a “compliance” mindset and a “you’re never done” mindset.

The compliance mindset:

I have never doped, and, unlike many of my accusers, I have competed as an endurance athlete for 25 years with no spike in performance, passed more than 500 drug tests and never failed one. — “Lance Armstrong Responds to USADA Allegation”

Lance’s fundamental argument is that what matters is the tests that were performed, in accordance with the rules as laid out by the USADA, and that he passed them all.

Now, there are some pretty specific allegations of cheating, and we can and should think critically about what his former teammates, now authors, have to gain by bringing up these allegations.

But there’s a level at which those motivations have nothing to do with the facts. Did they accept delivery of certain banned performance enhancers? (I’m using that phrase because there are lots of accepted performance enhancers, like coffee and Gatorade, and I think some of the distinctions are a little silly. However, that’s not the focus of this post.)

What I’d like to talk about is the damage that can come from both the compliance mindset and the “you’re never done” mindset, and what we can take from them.

The compliance mindset is that you perform some set of actions, you pass or fail, and you’re done. (Well, if you fail, you put in place a program to rectify it.) The USADA is illustrating a pursuit of perfection of which I’ve sometimes been guilty. “You’re never fully secure!” “You have to keep looking for problems!”

Neither is the only way to be. In Lance’s case, I think there’s a simple argument: the USADA did its best at the time to ensure a fair race. Lance won a lot of those races. The Orwellian re-write of the official histories by the Ministry of Drugs doesn’t change history.

What matters is the outcome, and in racing, there’s a defined finish line. You make it across the line first, you win. In systems, there’s less of a defined line, but if you make it through another day, week, year without being pwned, you’re winning. All the compliance failures not exploited by the bad guys are risks taken and won. You made it across the finish line.

What’s ugly about the Lance vs USADA case is that it really can’t be resolved.

There are probably more interesting compliance lessons in this case. I’d love to hear what you think they are.

Checklists and Information Security

I’ve never been a fan of checklists. Too often, checklists replace thinking and consideration. In the book, Andrew and I wrote:

CardSystems had the required security certification, but its security was compromised, so where did things go wrong? Frameworks such as PCI are built around checklists. Checklists compress complex issues into a list of simple questions. Someone using a checklist might therefore think he had done the right thing, when in fact he had not addressed the problems in depth… Conventional wisdom presented in short checklists makes security look easy.

So it took a while and a lot of recommendations for me to get around to reading “The Checklist Manifesto” by Atul Gawande. And I’ll admit, I enjoyed it. It’s a very well-written, fast-paced little book that’s garnered a lot of fans for very good reasons.

What’s more, much as it pains me to say it, I think that security can learn a lot from the Checklist Manifesto. One objection that I’ve had is that security is simply too complex. But so is the human body. From the Manifesto:

[It] is far from obvious that something as simple as a checklist could be of substantial help. We may admit that errors and oversights occur–even devastating ones. But we believe our jobs are too complicated to reduce to a checklist. Sick people, for instance, are phenomenally more various than airplanes. A study of forty-one thousand trauma patients in the state of Pennsylvania–just trauma patients–found that they had 1,224 different injury-related diagnoses in 32,261 unique combinations. That’s like having 32,261 kinds of airplane to land. Mapping out the proper steps for every case is not possible, and physicians have been skeptical that a piece of paper with a bunch of little boxes would improve matters.

The Manifesto also addresses the point we wrote above, that “someone using a checklist might think he’d done the right thing”:

Plus, people are individual in ways that rockets are not–they are complex. No two pneumonia patients are identical. Even with the same bacteria, the same cough and shortness of breath, the same low oxygen levels, the same antibiotic, one patient might get better and the other might not. A doctor must be prepared for unpredictable turns that checklists seem completely unsuited to address. Medicine contains the entire range of problems–the simple, the complicated, and the complex–and there are often times when a clinician has to just do what needs to be done. Forget the paperwork. Take care of the patient.

So it’s important to understand that checklists don’t replace professional judgement; they supplement it and help people remember complex steps under stress.

So while I think security can learn a lot from The Checklist Manifesto, the lessons may not be what you expect. Quoting the book that inspired this blog again:

A checklist implies that there is an authoritative list of the “right” things to do, even if no evidence of that simplicity exists. This in turn contributes to the notion that information security is a more mature discipline than it really is.

For example, turning back to the Manifesto:

Surgery has, essentially, four big killers wherever it is done in the world: infection, bleeding, unsafe anesthesia, and what can only be called the unexpected. For the first three, science and experience have given us some straightforward and valuable preventive measures we think we consistently follow but don’t.

I think what we need, before we get to checklists, is more data to understand what the equivalents of infection, bleeding and unsafe anesthesia are. Note that those categories didn’t spring out of someone’s mind, thinking things through from first principles. They came from data. And those data show that some risks are bigger than others:

But compared with the big global killers in surgery, such as infection, bleeding, and unsafe anesthesia, fire is exceedingly rare. Of the tens of millions of operations per year in the United States, it appears only about a hundred involve a surgical fire and vanishingly few of those a fatality. By comparison, some 300,000 operations result in a surgical site infection, and more than eight thousand deaths are associated with these infections. We have done far better at preventing fires than infections. [So fire risks are generally excluded from surgical checklists.]
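The figures quoted there carry the argument on their own; a quick back-of-the-envelope ratio, using only the numbers above:

```python
# Figures quoted above: roughly 100 surgical fires a year in the US,
# versus some 300,000 surgical site infections and 8,000+ associated deaths.
fires = 100
infections = 300_000
print(infections / fires)   # infections are ~3,000x more common than fires
```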

Security has no equivalent way to set aside its rare, fire-like risks. We throw everything into lists like PCI. The group that updates PCI is not provided with in-depth incident reports about the failures that occurred over the last year, or over the life of the standard. When security fails, rather than asking “did the checklist work?”, the PCI council declares that the victim has violated the 11th commandment, and is thus not compliant. And so we can’t improve the checklists. (Compare and contrast: don’t miss the long section of the Manifesto on how Boeing tests and re-tests their checklists.)

One last quote before I close. Gawande surveys many fields, including how large buildings are built and delivered. He talks to a project manager putting up a huge new hospital building:

Joe Salvia had earlier told me that the major advance in the science of construction over the last few decades has been the perfection of tracking and communication.

Nothing for us security thought leaders to learn. But before I tell you to move along, I’d like to offer up an alpha-quality DO-CHECK checklist for improving security after an incident:

  1. Have you addressed the breach and gotten the attackers out?
  2. Have you notified your customers, shareholders, regulators and other stakeholders?
  3. Did you prepare an after-incident report?
  4. Did you use VERIS, the taxonomy in Microsoft’s SIR v11, or some other way to clarify ambiguous terms?
  5. Have you released the report so others can learn?

I believe that if we all start using such a checklist, we’ll set up a feedback loop, and empower our future selves to make better, and more useful checklists to help us make things more secure.
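As a minimal sketch of that feedback loop (the names and structure are mine, not any standard): if the checklist above lives as data, every incident run through it leaves a record of which steps were skipped, and those records are exactly the input the next revision of the checklist needs.

```python
from dataclasses import dataclass, field

POST_INCIDENT_CHECKLIST = [
    "Addressed the breach and gotten the attackers out",
    "Notified customers, shareholders, regulators and other stakeholders",
    "Prepared an after-incident report",
    "Used a taxonomy (VERIS, Microsoft SIR, ...) to clarify ambiguous terms",
    "Released the report so others can learn",
]

@dataclass
class IncidentReview:
    incident_id: str
    answers: dict = field(default_factory=dict)  # checklist item -> True/False

    def complete(self, item, done=True):
        self.answers[item] = done

    def gaps(self):
        """Items not yet done; these are what the next checklist revision
        and the next tabletop exercise should focus on."""
        return [i for i in POST_INCIDENT_CHECKLIST if not self.answers.get(i)]

# Usage sketch with a hypothetical incident:
review = IncidentReview("2013-04-pos-breach")
review.complete(POST_INCIDENT_CHECKLIST[0])
review.complete(POST_INCIDENT_CHECKLIST[2])
print(review.gaps())
```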

Kudos to Ponemon

In the past, we have had some decidedly critical words for Ponemon Institute reports, such as “A critique of Ponemon Institute methodology for ‘churn’” or “Another critique of Ponemon’s method for estimating ‘cost of data breach’.” And to be honest, I’d become sufficiently frustrated that I’d focused my time on other things.

So I’d like to now draw attention to a post by Patrick Florer, “Some Thoughts about PERT and other distributions“, in which he says:

What follows are the results of an attempt to answer this question using a small data set extracted from a Ponemon Institute report called “Compliance Cost Associated with the Storage of Unstructured Information”, sponsored by Novell and published in May, 2011. I selected this report because, starting on page 14, all of the raw data are presented in tabular format. As an aside, this is the first report I have come across that publishes the raw data – please take note, Verizon, if you are reading this!

So I simply wanted to offer kudos to the Ponemon Institute for doing this.

I haven’t yet had a chance to dig into the report, but felt that given our past critiques I should take note of a very positive step.

Block Social Media, Get Pwned

At least, that’s the conclusion of a study from Telus and Rotman. (You might need this link instead)

A report in IT security issued jointly by Telus and the Rotman School of Management surveyed 649 firms and found companies that ban employees from using social media suffer 30 percent more computer security breaches than ones that allow free use of sites like Facebook and Twitter.

Counterintuitive? Maybe, but it makes perfect sense when you consider how hooked most of us are on social media, say the study’s authors.

Rotman professor Dr. Walid Hejazi says employees banned from social networks often download software onto company computers allowing them to circumvent firewalls and access forbidden sites. Those programs let employees tweet on the job but also create security gaps hackers are happy to exploit. (“Being hacked? Your social media policy might be to blame”, Morgan Campbell, The Star)

A quick skim indicates that this study is based on a survey of Canadian companies which received 649 responses. Parts of the study are worrisome. (For example, their classification of breach types shows 46% had “Virus/Worms/Spyware” but only 9% had “bots,” and 20% had “phishing/pharming” while only 5% had “social engineering attacks.”) However, it seems plausible that organizations know when they’re hacked, and that organizations know if they have a social media policy, so the conclusion of a correlation or even causation may be reasonable. At the same time, it may be that there’s a causative effect of security-conscious organizations having both better intrusion detection and social media policies, or of organizations that are more likely to be hacked having more social media policies. I’m going to tentatively discount those hypotheses because the Verizon DBIR tells us that most organizations don’t detect their own hacks.
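If the underlying cross-tabulation were published, checking the claimed association would be straightforward; here is a minimal sketch with placeholder counts (chosen only to sum to 649 responses, not taken from the report):

```python
from scipy.stats import chi2_contingency

# Placeholder 2x2 table, NOT the study's data: rows are "bans social media"
# vs "allows it", columns are "reported a breach" vs "did not".
table = [
    [120, 180],   # ban:   120 breached, 180 not
    [100, 249],   # allow: 100 breached, 249 not
]

chi2, p, dof, expected = chi2_contingency(table)
ban_rate = table[0][0] / sum(table[0])
allow_rate = table[1][0] / sum(table[1])
print(f"breach rate with ban:    {ban_rate:.0%}")
print(f"breach rate without ban: {allow_rate:.0%}")
print(f"chi-square p-value:      {p:.3f}")
# Even a small p-value only establishes association; it says nothing about
# which of the causal stories in the paragraph above is the right one.
```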

I also wanted to comment that a great many companies publicise their social media policies, and it’s probably possible to re-do this study with DatalossDB data.

I haven’t read the study in any detail (really!) but since it confirms my biases I decided to blog it early. Those biases include thinking that Angela Sasse’s “personal compliance budget” idea has a lot of explanatory power. Thanks to Bob Blakely for the pointer.

Microsoft Backs Laws Forbidding Windows Use By Foreigners

According to Groklaw, Microsoft is backing laws that forbid the use of Windows outside of the US. Groklaw doesn’t say that directly. Actually, they pose charmingly with the back of the hand to the forehead, bending backwards dramatically and asking, “Why Is Microsoft Seeking New State Laws That Allow it to Sue Competitors For Piracy by Overseas Suppliers?” Why, why, why, o why, they ask.

The headline of this article is the obvious reason. Microsoft might not know they’re doing it for that reason. Usually, people with the need to do something, dammit, because they fear they might be headed to irrelevancy, think of something and follow the old Aristotelian syllogism:

Something must be done.
This is something.
Therefore, it must be done.

It’s pure logic, you know. This is exactly how Britney Spears ended up with Laurie Anderson’s haircut and the US got into policing China’s borders. It’s logical, and as an old colleague used to say with a sigh, “There’s no arguing with logic like that.”

Come on, let’s look at what happens. Suppose I run a business, and there’s a law that says that if my overseas partners aren’t paying for their Microsoft software, then Microsoft can sue me. What do I do?

Exactly right. I put a clause in the contract that says that they agree not to use any Microsoft software. Duh. That way, if they haven’t paid their Microsoft licenses, I can say, “O, you bad, naughty business partner. You are in breach of our contract! I demand that you immediately stop using Microsoft stuff, or I shall move you from being paid net 30 to net 45 at contract renegotiation time!” End of problem.

And hey, some of my partners will actually use something other than Windows. At least for a few days, until they realize how badly Open Office sucks.

Gunnar's Flat Tax: An Alternative to Prescriptive Compliance?

Hey everybody!

I was just reading Gunnar Peterson’s fun little back-of-the-napkin security spending exercise, in which he references his post on a security budget “flat tax” (Three Steps To A Rational Security Budget). This got me thinking a bit:

What if, instead of the current world of compliance, where we demand and audit against a de facto ISMS, we just demanded an audit of security spend?

Bear with me here… If/when we demand compliance with a group of controls, we are insisting that these controls *and their operation* have efficacy. The emphasis there is to identify compliance “shelfware” or “zombieware.”^1 But we really don’t know, other than through anecdote and deduction, that the controls are effective against, or alternatively more than needed for, a given organization’s threat landscape. In addition, the effective operation of security controls requires skills and resources beyond their rote existence. We might buy all these shiny new security controls, but if our department consists of Moe Howard, Larry Fine, and Shemp Howard, well…

Furthermore, there are plenty of controls that we can deduce or even prove are incident-reducing that are *not* required when compliance demands an ISMS. These controls never get implemented because business management now sees security as a diligence function, not a protection function.

So as I was reading Gunnar’s flat tax proposal, I started to really, really like the idea. Perhaps a stronger alternative would be to simply require that the security budget be a “flat tax” on a company’s IT spend. Instead of auditing against a list of controls and their existence, your compliance audit would simply be an exercise in reviewing the budget and the sanity of security spend. By sanity, I mean that the security spend isn’t really trips to Bermuda, or somehow commandeered by IT for non-security projects.
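As a toy sketch of what that audit might reduce to (the 7% rate and the category names are made up for illustration): check that the spend hit the flat-tax target, and that the line items look like security rather than Bermuda.

```python
FLAT_TAX_RATE = 0.07   # hypothetical: security budget = 7% of IT spend

SECURITY_CATEGORIES = {"staff", "tooling", "training", "threat_intel",
                       "incident_response", "metrics_program"}

def audit_security_spend(it_spend, security_line_items):
    """security_line_items: list of (category, amount) tuples."""
    required = FLAT_TAX_RATE * it_spend
    sane = [amt for cat, amt in security_line_items if cat in SECURITY_CATEGORIES]
    suspect = [(cat, amt) for cat, amt in security_line_items
               if cat not in SECURITY_CATEGORIES]
    return {
        "required_spend": required,
        "actual_sane_spend": sum(sane),
        "meets_flat_tax": sum(sane) >= required,
        "suspect_items": suspect,   # e.g. trips to Bermuda, repurposed IT projects
    }

print(audit_security_spend(
    it_spend=10_000_000,
    security_line_items=[("staff", 450_000), ("tooling", 200_000),
                         ("training", 30_000), ("offsite_bermuda", 40_000)],
))
```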

Now we can argue about how much that tax would be and other details of how this might all work, but at least when I think about this at a high level it’s starting to occur to me that this approach may have several benefits.

  1. It would certainly be simpler to draw an inference as to whether more security spend increases or decreases the number and impact of incidents. Not that this inference still wouldn’t be fraught with uncertainty, just “simpler,” and I would question whether it would be less informative than insisting that a prescriptive ISMS has never been breached.
  2. If the “spend” audit consisted of “were the dollars actually spent” and “how sane was the spending,” it would still be up to the CISO to have a defensive strategy (instead of just a compliance strategy). The “spend” could still be risk-based.
  3. Similarly, this would help enable budget for effective security department investments (like, say, a metrics program, training, conference attendance, threat intelligence, etc.) that would otherwise be spending above and beyond what is currently “required.”
  4. This spend would allow security departments to be more agile. If our ISMS compliance standards don’t change as frequently as the threats they’re supposed to defend against, it’s pretty obvious we’re screwed spending money to defend against last year’s threats. But a flat tax of spend would allow security departments to reallocate funds in the event of new, dynamic threats to the environment.
  5. This might help restart the innovation in security that draconian security standards and compliance requirements have killed. Josh Corman (among others, I’m sure) is famous for pointing out that compliance spend stifles innovation because budgets are allocated toward “must-haves.” If you’re a startup with an innovative new security tool, but it isn’t on the radar of the standards bodies (or won’t be until the new requirements three years from now), only the very well funded organizations will buy your product. If I’m a CISO with a weaker budget and want the innovative product that my compliance masters don’t require, I’ll never buy it – all my budget is spent trying to prove I can defeat threats from two years ago.

^1 Compliance shelfware is security spend that is purchased but never implemented. Compliance zombieware is a control or security spend actually implemented, but never really utilized.

“Of course we have log management.  We have to in order to be compliant.  But it’s just zombieware, nobody ever actually reads those logs or does analysis on them…”