A Moment of Silence

[Image: what-id-say.jpg]
Ahmet Ertegun has passed away at 83. Ertegun founded Atlantic Records because he loved music. The BBC reports:

He suffered a head injury when he fell at a Rolling Stones concert at New York’s Beacon Theatre in October, and died after slipping into a coma. (Emphasis added.)

His book “What I’d Say: The Atlantic Records Story” is both a beautiful coffee table book and a record of the rise of jazz, blues, and rock. By the 1970s, the world seemed to be veering off in directions that Ertegun didn’t fully understand, but that’s ok: He’d been a taste-maker and passionate advocate for his artists for a full half-century.

If you still own records or CDs, go take a look at how many of the artists you love released on Atlantic. Play some. It’s what Ahmet would have wanted: to be remembered for the music he brought us.

Infosec Incentives for People

So there’s been discussion here recently of how to motivate security professionals to do better on security. I think it’s also worthwhile to look at normal people. And conveniently, Bruce Schneier does so in his Wired column this month, “MySpace Passwords Aren’t So Dumb.” He looks at how MySpace users’ passwords compare with those of corporate users, and finds that MySpace users have better passwords:

On the other hand, the MySpace demographic is pretty young. Another password study (.pdf) in November looked at 200 corporate employee passwords: 20 percent letters only, 78 percent alphanumeric, 2.1 percent with non-alphanumeric characters, and a 7.8-character average length. Better than 15 years ago, but not as good as MySpace users. Kids really are the future.

I’d like to offer up a different reason: MySpace users have a reason to care about the security of the information they offer up to MySpace that’s more compelling than policies and cajoling from the security folks, and it shows. How can we learn from that?

(After I wrote this, I noticed some similar comments on the version on Bruce’s blog.)

When Security Collides With Engineering (Responsible Disclosure Redux)

[Image: suhosin-frame.jpg]
Stefan Esser announced earlier this week that he was retiring from security@php.net, citing irreconcilable differences with the PHP group over how to respond to security issues within PHP.
Of particular interest is that he will be making changes to how he handles security advisories for PHP (emphasis mine):

For the ordinary PHP user this means that I will no longer hide the slow response time to security holes in my advisories. It will also mean that some of my advisories will come without patches available, because the PHP Security Response Team refused to fix them for months. It will also mean that there will be a lot more advisories about security holes in PHP.

Since Stefan has locked out commenting on his post, I’ll ask here:
Stefan, are you planning on providing workarounds for the advisories that don’t yet have patches? How are you planning on balancing the need for users to know against the broader exposure of a weakness? While I fully support your desire for full disclosure, I’m curious: what is too long? Where do you draw the line now that you’ve stepped further away from the project?
[Update: And more questions after talking to Adam. Why is PHP unable or unwilling to do security to Stefan’s standards? I’d love to hear both sides of this story…
Also, what does this mean for PHP users? How bad off are they anyway?]
[Image is from Stefan’s Suhosin Hardened PHP Project.]

Cost-Benefits, Incentives, and Knowing What to Do

Adam quoted some interesting thinking about infosec incentives. However, I’m not sure it’s that simple. Gordon and Loeb say that you shouldn’t spend more than 37% of an expected loss.
However, at last summer’s WEIS (Workshop on the Economics of Information Security), Jan Willemson published a paper, “On the Gordon & Loeb Model for Information Security Investment.” In it, Willemson directly challenges the 37% number.
Here’s Willemson’s abstract:

In this paper we discuss a simple and general model for evaluating optimal investment level in information security proposed by Gordon and Loeb. The authors leave an open question, whether there exists
some universal upper limit for the level of optimal security investments compared to the total cost of the protected information set. They also conjecture that if such a level exists, it could be 1/e ~= 36.8%. In this paper, we disprove this conjecture by constructing an example where the required investment level of up to 50% can be necessary. By relaxing the original requirements of Gordon and Loeb just a little bit, we are also able to show that within their general framework examples achieving levels arbitrarily close to 100% exist.

So here’s the first problem — that it may behoove one to spend more than 37%.
The next problem that I see is the whole nature of an expected loss. How do I know what to expect? I’m a cynic, so I can see using some math. If there is a 2% chance that any of my employees will lose a laptop, there’s a 40% chance that a laptop has personal data on it, and I have 10,000 employees, then I expect to have 200 employees lose laptops, and 80 of them are going to cause me a problem. That’s bad. Then it’s just a matter of taking the Ponemon $182/name number, multiplying it by the number of names, and I have a dollar figure.
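To make that arithmetic concrete, here is a minimal sketch in Python. The number of names per laptop is a hypothetical figure I’ve added, since the paragraph above leaves it open; the last line just applies the Gordon & Loeb-style 37% ceiling discussed earlier.

```python
# Back-of-the-envelope expected-loss arithmetic from the paragraph above.
employees = 10_000
p_lost_laptop = 0.02        # chance a given employee loses a laptop
p_personal_data = 0.40      # chance a lost laptop carries personal data
names_per_laptop = 5_000    # hypothetical; the post doesn't give a figure
cost_per_name = 182         # Ponemon estimate, dollars per exposed record

lost_laptops = employees * p_lost_laptop           # 200 lost laptops
problem_laptops = lost_laptops * p_personal_data   # 80 that cause a problem
expected_loss = problem_laptops * names_per_laptop * cost_per_name
spending_cap = 0.37 * expected_loss                # Gordon & Loeb-style ceiling

print(f"lost laptops: {lost_laptops:.0f}, problem laptops: {problem_laptops:.0f}")
print(f"expected loss: ${expected_loss:,.0f}, 37% cap: ${spending_cap:,.0f}")
```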
To me, the right way to solve this problem is to put some sort of disk encryption on those laptops. Just (heh, just) deploy that and Alice is your auntie. No incentive plan needed.
As a last problem, do I really want to deal with an incentive plan? Incentive plans have evil senses of humor. The people affected by them will inevitably do things based not on what is good for the company, but on what affects their incentive plan. If we also assume 100 people in the security department, and they come to my conclusion — encrypt those laptops — then every $100 they save on the software puts $1 in each of their own pockets. If they buy software that is cheaper but less reliable, it can cost the company far more than it saves them.
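A rough sketch of how that incentive can point the wrong way; the savings figure and per-laptop record count are hypothetical, chosen only for illustration.

```python
# How a bonus pool funded by unspent budget can reward the wrong choice.
team_size = 100
savings_on_cheaper_product = 100_000   # hypothetical: pick a shakier encryption product
per_person_bonus = savings_on_cheaper_product / team_size   # $1,000 each

# If the cheaper product fails on just one breached laptop of 5,000 names:
names_per_laptop = 5_000               # hypothetical, as above
cost_per_name = 182                    # Ponemon estimate
cost_of_one_failure = names_per_laptop * cost_per_name      # $910,000 to the company

print(f"each team member pockets ${per_person_bonus:,.0f}")
print(f"one failure costs the company ${cost_of_one_failure:,.0f}")
```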
Even better for them would be to ban all dangerous data on laptops. We’ve all worked in places where there were asinine, dictatorial decrees on security. Decrees are cheap. They are, however, not good for the company, because the company wants people to be able to work flexibly.
It gets worse, though. Here’s another suggestion:

Here’s the kicker: If there is a breach, the costs come out of the bonus pool first. This would be a bummer, but it would also give you first hand data for budgeting ;).

It also creates an incentive to ignore breaches. If you’re an admin looking over logs at a major university, and you think you see a breach but aren’t sure — what do you do? Very likely, you hope it isn’t a breach and don’t investigate further. And how are you going to feel when the bonus you were counting on sublimates because Bob over there finds a breach two weeks before the end of the year? Thanks, Bob. Couldn’t you have at least waited until January?
Creating a system where the security team is looking not at security but at how little they spend is not good for security, nor is it good for the company. It has been an encouraging trend in security that we’re starting to think about how good security can be liberating. Security that liberates people is a cost on the security end, but a benefit somewhere else. It might even have indirect benefits, like lowering turnover and making it easier to hire good people.
It is also not good when the incentives reward bureaucratic stiffness and See-No-Evil behavior, and punish people for the conscientious behavior of their co-workers.
Always, always beware when you set up incentives. People will act according to the incentive. (If they didn’t, it wouldn’t be an incentive.) The incentive distracts from the goal. If the incentive points in the direction of the goal, it might be a reasonable approximation of the goal, but it is not the goal. From here we get unintended consequences.

Introducing Mordaxus

Mordaxus is a longtime former cypherpunk with interests in anonymity, security and usability. He’s been involved in some of the biggest brands in security, and has entertaining stories about some of the most interesting events in information security history. He can’t tell those without giving away his secret identity, and so will focus on adding a cynical note to the ensemble.

Mordaxus will be with us for a four week stint. Please give him a warm welcome.

Wikid cool thinking on Infosec incentives

First, assume that you believe, as discussed in Gordon & Loeb’s book Managing Cybersecurity Resources: A Cost-Benefit Analysis and discussed here, that an organization should spend no more than 37% of their expected loss on information security. Second, assume that you agree with the Ponemon Institute on the cost of business data breaches: $182 per record. Then, as I have pointed out, you have enough info to figure out what your info sec budget should be, or at least its cap.

A few thoughts:

  1. Crap that’s a nice observation. I wish I’d made it.
  2. If you read the full version at “Incentive Plan for an Information Security Team,” a lot of that paragraph comes with references.
  3. You can’t dump the entirety of the funds into the pool; some of it needs to be spent on defensive technologies, processes, and people, but you’re certainly aligning the interests of the infosec team and the business. (A rough sketch of this split follows the list.)
  4. Finally, we have to wonder if a manager could fire their entire team, and live comfortably in Anguilla on bonuses paid until it backfires. (Nick does address this with smoothing.)
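Here is a minimal sketch of the arithmetic the quoted paragraph implies, including the split mentioned in point 3. The record count, breach probability, and split are hypothetical placeholders, not figures from Nick’s post.

```python
# Budget cap from the quoted rule of thumb, plus a hypothetical pool/defense split.
records_at_risk = 1_000_000     # hypothetical: records the organization holds
p_breach = 0.05                 # hypothetical: chance of a breach this year
cost_per_record = 182           # Ponemon estimate

expected_loss = records_at_risk * p_breach * cost_per_record
budget_cap = 0.37 * expected_loss      # Gordon & Loeb-style ceiling

defensive_share = 0.70                 # hypothetical split, per point 3 above
defensive_spend = defensive_share * budget_cap
bonus_pool = budget_cap - defensive_spend

print(f"budget cap: ${budget_cap:,.0f}")
print(f"defensive spend: ${defensive_spend:,.0f}, bonus pool: ${bonus_pool:,.0f}")
```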

PS: The title? Not a typo: Nick runs WikID Systems.
