Compliance Lessons from Lance, Redux

Not too long ago, I blogged about “Compliance Lessons from Lance.” And now, there seems to be dramatic evidence of a massive program to fool the compliance system. For example:

Team doctors would “provide false declarations of medical need” to use cortisone, a steroid. When Armstrong had a positive corticosteroid test during the 1999 Tour de France, he and team officials had a doctor back-date a prescription for cortisone cream for treating a saddle sore. (CNN)

and

The agency didn’t say that Armstrong ever failed one of those tests, only that his former teammates testified as to how they beat tests or avoided the test administrators altogether. Several riders also said team officials seemed to know when random drug tests were coming, the report said. (CNN)

Apparently, this Lance and doping thing is a richer vein than I was expecting.

Reading about how Lance and his team managed the compliance process reminds me of what I hear from some CSOs about how they manage compliance processes.

In both cases, there’s an aggressive effort to manage the information made available, and to ensure that the picture that the compliance folks can paint is at worst “corrections are already underway.”

Serious violations become not something to be addressed, but material for us-vs-them team formation. Management supports or drives a frame that puts compliance in conflict with the business goals.

But we have compliance processes to ensure that sport is fair, or that the business is operating with some set of meaningful controls. The folks who impose the business compliance regime are (generally) not looking to drive make-work. (The folks doing the audit may well be motivated to create make-work, especially if that additional work is billable.)

When it comes out that the compliance framework is being managed this aggressively, people look at it askance.

In information security, we can learn an important lesson from Lance. We need to design compliance systems that align with business goals, whether those are winning a race or winning customers. We need compliance systems that are reasonable, efficient, and administered well. The best way to do that is to understand which controls really impact outcomes.

For example, Gene Kim has shown that three of the 63 controls in COBIT are key, predicting nearly 60% of IT security, compliance, operational and project performance. That research, which benchmarked over 1,300 organizations, is now more than five years old, but the findings (and the standard) remain unchanged.

If we can’t get to reality-checking our standards, perhaps drug testing them would make sense.

TSA Approach to Threat Modeling, Part 3

It’s often said that the TSA’s approach to threat modeling is to just prevent yesterday’s threats. Well, on Friday it came out that:

So, here you see my flight information for my United flight from PHX to EWR. It is my understanding that this is similar to digital boarding passes issued by all U.S. Airlines; so the same information is on a Delta, US Airways, American and all other boarding passes. I am just using United as an example. I have X’d out any information that you could use to change my reservation. But it’s all there, PNR, seat assignment, flight number, name, etc. But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number means the number of beeps. 1 beep no Pre-Check, 3 beeps yes Pre-Check. On this trip as you can see I am eligible for Pre-Check. Also this information is not encrypted in any way.

Security Flaws in the TSA Pre-Check System and the Boarding Pass Check System.
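How little work is involved? Boarding pass barcodes follow IATA’s BCBP layout: fixed-width ASCII fields, no encryption, no signature. Here is a minimal sketch in Python of pulling fields out of a BCBP-style string; the sample pass is made up and the offsets follow my reading of the layout, so treat both as illustrative rather than authoritative.

    # Decode a few mandatory fields of an IATA BCBP (Bar-Coded
    # Boarding Pass) string. Fixed-width ASCII; nothing is encrypted.
    def parse_bcbp(data: str) -> dict:
        return {
            "name":    data[2:22].strip(),               # passenger name
            "pnr":     data[23:30].strip(),              # record locator
            "route":   data[30:33] + "-" + data[33:36],  # origin-destination
            "carrier": data[36:39].strip(),
            "flight":  data[39:44].strip(),
            "seat":    data[48:52].strip(),
        }

    # A made-up pass in the shape described above; the Pre-Check
    # indicator the post describes sits further along, in the
    # conditional data, as another bare digit.
    sample = "M1DOE/JOHN            EABC123 PHXEWRUA 0123 289Y012A0042 100"
    print(parse_bcbp(sample))
    # {'name': 'DOE/JOHN', 'pnr': 'ABC123', 'route': 'PHX-EWR', ...}

Any smartphone barcode app gets you the string; the rest is string slicing.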

So, apparently, they’re not even preventing yesterday’s threats, ones they knew about before the recent silliness or the older silliness. (See my 2005 post, “What Did TSA Know, and When Did They Know It?”)

What are they doing? Comments welcome.

Proof of Age in UK Pilot

There’s a really interesting article by Toby Stevens at Computer Weekly, “Proof of age comes of age:”

It’s therefore been fascinating to be part of a new initiative that seeks to address proof of age using a Privacy by Design approach to biometric technologies. Touch2id is an anonymous proof of age system that uses fingerprint biometrics and NFC to allow young people to prove that they are 18 years or over at licensed premises (e.g. bars, clubs).

The principle is simple: a young person brings their proof of age document (Home Office rules stipulate this must be a passport or driving licence) to a participating Post Office branch. The Post Office staff member checks the document using a scanner, and confirms that the young person is the bearer. They then capture a fingerprint from the customer, which is converted into a hash and used to encrypt the customer’s date of birth on a small NFC sticker, which can be affixed to the back of a phone or wallet. No personal record of the customer’s details, document or fingerprint is retained either on the touch2id enrolment system or in the NFC sticker – the service is completely anonymous.
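To make that flow concrete, here is a minimal sketch of enrolment and verification as the article describes them. Touch2id’s actual template format, key derivation, and cipher are not public, so every detail below (SHA-256 as the key derivation, AES-GCM as the cipher, the stand-in template bytes) is my assumption, not their design:

    # Sketch of the described enrolment flow; all crypto choices here
    # are assumptions for illustration, not Touch2id's documented design.
    import hashlib
    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def enrol(fingerprint_template: bytes, date_of_birth: str) -> bytes:
        """Derive a key from the fingerprint and encrypt only the DOB.

        Note what is *not* stored: no name, no document number, no image.
        """
        key = hashlib.sha256(fingerprint_template).digest()  # 32-byte AES key
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, date_of_birth.encode(), None)
        return nonce + ciphertext  # the blob written to the NFC sticker

    def verify(fingerprint_template: bytes, blob: bytes) -> str:
        """At the bar: re-derive the key from a fresh scan, decrypt the DOB."""
        key = hashlib.sha256(fingerprint_template).digest()
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

    template = b"stable-template-bytes"  # stand-in for a biometric template
    sticker = enrol(template, "1994-06-15")
    print(verify(template, sticker))  # prints the date of birth

One caveat the description glosses over: two scans of the same finger rarely yield identical bytes, so a real system needs a canonicalized template or a fuzzy extractor to derive a stable key, and whatever stable value the key comes from is itself a persistent identifier. That bears directly on the anonymity claim.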

So first, I’m excited to see this. I think single-purpose credentials are important.

Second, I have a couple of technical questions.

  • Why a fingerprint versus a photo? People are good at recognizing photos, and a photo is a less intrusive mechanism than a fingerprint. Is the security gain sufficient to justify that? What’s the quantified improvement in accuracy?
  • Is NFC actually anonymous? It seems to me that NFC likely has a chip ID or something similar, meaning that the system is pseudonymous rather than anonymous.

I don’t mean to let the best be the enemy of the good. Not requiring ID for drinking is an excellent way to secure the ID system; see, for example, my BlackHat 2003 talk. But I think that support can be both rah-rah and a careful critique of what we’re building.

Running a Game at Work

Friday, I had the pleasure of seeing Sebastian Deterding speak on ‘9.5 Theses About Gamification.’ I don’t want to blog his entire talk, but one of his theses relates to “playful reframing,” and I think it says a lot about how to run a game at work, or a game tournament at a conference.

In many ways, play is the opposite of work. Play is voluntary, with meaningful choices for players. In work, we are often told, to one extent or another, what to do. You can’t order people to play. You can order them to engage in a game, and even make them go through the motions. But you can’t order them to play. At best, you can get them to joylessly minimax their way through, optimizing for points to the best of their ability. And that’s a challenge for someone who wants to use a game like Elevation of Privilege or Control-Alt-Hack at work.

One of the really interesting parts of the talk was “how to design to allow play,” and I want to share his points and riff off them a little. The bolded phrases are his points, to the best of my scribbling ability.

  • Support autonomy. Autonomy, choice, self-control. As Carse says, “if you must play, you cannot play.” So if you want to have people play Elevation of Privilege in a tournament, you could have that as one track, and a talk at the same time. Then everyone in the tournament has a higher chance of wanting to be there.
  • Create a safe space. When everyone is playing, we agree that the game is for its own sake, and take fairness and sportsmanship into account. If the game has massive external consequences, players are less likely to be playful.
  • Meta-communicate: This is play. Let people know that this is fun by telling them that it’s ok to have fun and be silly, that you’re going to go do that.
  • Model attitudes and behavior. Do what you just told them: have fun and show that you’re having fun.
  • Use cues and associations. Do things to ensure that people see that what you’re doing is a game. Elevation of Privilege does this with its use of physical cards with silly pictures on them, with each card having a suit and number, and in a slew of other ways.
  • Disrupt standing frames. A standing frame is all about the way people currently see the world. Sometimes, to get people into a game frame, you need to disrupt the frame they bring with them.
  • Offer generative tools/toys. A generative tool is one that allows people to do varied and unpredictable things with it. So a Rubik’s Cube is less generative than Legos. Of course, pretty much everything is less generative than Legos.
  • Underspecify. So speaking of Legos, you know how the Legos they made 30 years ago were just some basic shapes, and now and then a special curvy piece, while today it seems like every set has a stack of limited-use, specialized pieces? That’s the shift from under-specification to over-specification. The more you specify, the less room you have for playful exploration.
  • Provide invitations. Invite people to come play, both literally and metaphorically.

The other element of his talk that I thought was really interesting with regard to Elevation of Privilege was how he discussed Caillois’ ludus/paidia continuum. Ludus is all about the structure of games: these rules, these activities, these scoring mechanisms, while paidia is about play. Consider kids playing with dolls: there are no rules, just unstructured interaction, exploration and tumultuousness.

In hindsight, Elevation of Privilege uses cues to bring people into a game space, but elements of the game (connecting threats to a system being threat modeled, rules for riffing on one another’s threats) are really more about playfulness than gamefulness.

The Boy Who Cried Cyber Pearl Harbor

There is, yet again, someone in the news talking about a cyber Pearl Harbor.

I wanted to offer a few points of perspective.

First, on December 6th, 1941, the United States was at peace. There were worries about the future, but no belief that a major attack was imminent, and certainly not a sneak attack. Today, it’s very clear that successful attacks are a regular and effectively accepted part of the landscape. (I’ll come back to the accepted part.)

Second, insanity. One excellent definition of insanity is doing the same thing over and over again and expecting different results. Enough said? Maybe not. People have been using this same metaphor since at least 1991 (thanks to @mattdevost for the link). So those Zeros have got to be the slowest cyber-planes in the history of the cybers. It’s insanity to keep using the metaphor, but it’s also insanity to expect different results from the same approaches that have brought us to where we are.

Prime amongst the ideas that we need to jettison is that we can get better in an ivory tower, or a secret fusion center where top men are thinking hard about the problem.

We need to learn from each other’s experiences and mistakes. If we want to avoid having the Tacoma Narrows bridge fall again, we need to understand what went wrong. When our understanding is based on secret analysis by top men, we’re forced to hope that they got the analysis right. When they show their work, we can assess it. When they talk about a specific bridge, we can bring additional knowledge and perspective to bear.

For twenty years, we’ve been hearing about these problems in these old school ways, and we’re still hearing about the same problems and the same risks.

We’ve been seeing systems compromised, and accepting it.

We need to stop talking about Pearl Harbor, and start talking about Aurora and its other victims. We need to stop talking about Pearl Harbor, and start talking about RSA, not in terms of cyber-ninjas, but about social engineering, the difficulties of upgrading or the value of defense in depth. We need to stop talking about Pearl Harbor and start talking about Buckshot Yankee, and perhaps why all those SIPRnet systems go unpatched. We need to stop talking about the Gawker breach passwords, and start talking about how they got out.

We need to stop talking in metaphors and start talking specifics.

PS: If you want me to go first, ok. Here ya go.

Reporting Mistakes

In “New System for Patients to Report Medical Mistakes” the New York Times reports:

The Obama administration wants consumers to report medical mistakes and unsafe practices by doctors, hospitals, pharmacists and others who provide treatment.

Hospitals say they are receptive to the idea, despite concerns about malpractice liability and possible financial penalties for poor performance.

So let’s think about that for just a minute, and think about what we in information security could learn from people who deal with life and death issues. They’re willing to learn from their mistakes, despite some of the downsides. They’re willing to learn because they know what they do is important, and they know that understanding their mistakes will help them help people more.

This is despite real concerns about malpractice, the difficulty of interpreting statistics, etc. For example, intuitively, ER doctors are going to make more mistakes per patient than general practitioners, because of the inherent chaos in their situation.

Now there are two issues that security needs to worry about that don’t concern doctors. First, talking about the exact way to exploit an 0day makes it easier for more people to exploit it. (That’s not to say we shouldn’t talk about 0day, only that doctors talking about SARS and sending around SARS samples doesn’t lead to more infections, except in movies, and that the cost/benefit ratios are more clear there.) Second, there may be active investigations.

Both of these can be addressed, and are addressed in most current breach notification rules. (Although I do wish that the ‘advice of law enforcement’ provisions required the notification to be renewed every 30 days or so.)

So I’m just gonna be blunt to my colleagues in information security. Let’s get over it. Let’s talk about our mistakes and get off the treadmill.

Choice Point Screening

Stamford Police said Jevene Wright, 29, created a fictitious company called “Choice Point Screening” and submitted false invoices for background checks that were submitted to Noble Americas Corporation, an energy retailer firm located in Stamford. (Patrick Barnard, “The Stamford (CT) Patch“)

I don’t want to minimize the issue here. Assuming the allegations are correct, the company’s assurance that its employees were actually screened is gone, it may face compliance or contractual issues, and it’s out at least $1.4 million, most of which has likely been spent. A good number of folks are having bad days, and I don’t want to add to that.

At the same time, I do have a number of comments.

First, those background check services sure are expensive! I wonder how many people that covered.

Hmmm, according to their website, “In the past six years Noble has grown from 1,500 employees to over 14,000.” I do wonder how many of the “background checks” came back with false allegations of past misconduct. If there were 14,000 people with no red flags, isn’t that something of a red flag in and of itself? I also wonder (in a law school hypothetical sort of way, and assuming with no evidence that Wright or an accomplice fabricated false reports on some people so that his fraud went undetected) what sorts of claims might be available to those denied employment based on those untrue statements?

Second, there’s something of a natural experiment here that lets us assess the value of background checking. Assuming Noble Americas Corporation runs a second set of background checks, I’m very curious to know how well spent that $2m* will have been: how many employees do they fire, having learned of something so heinous that the employee can’t be kept, and how many do they fire, having been handed a reason to get rid of a poor performer? (Naturally, those two numbers will be rolled into one.)

Lastly, there’s an interesting social engineering angle here. There’s a real company, ChoicePoint, now part of LexisNexis. (ChoicePoint was made famous by their awesome handling of a 2005 data breach, which this blog diligently covered.) So “Choice Point Screening” reads plausibly as a new brand from the real company. An auditor, seeing all those background checks, is unlikely to focus in on the extra space. It’s a nice touch.
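As a coda, here is a toy sketch of the control that would have caught it: normalize vendor names before comparing invoices against the approved-vendor list, so that whitespace games stop working. The vendor list and matching rule are invented for the example:

    # Toy check: squeeze case, punctuation and whitespace out of vendor
    # names, then flag invoice payees that resemble (but don't exactly
    # match) an approved vendor. Illustrative only.
    def normalize(name: str) -> str:
        return "".join(ch for ch in name.lower() if ch.isalnum())

    approved = {"ChoicePoint"}

    def flag_lookalike(invoice_payee: str) -> bool:
        squeezed = normalize(invoice_payee)
        return any(
            normalize(v) in squeezed and v != invoice_payee
            for v in approved
        )

    print(flag_lookalike("Choice Point Screening"))  # True: worth a closer look
    print(flag_lookalike("ChoicePoint"))             # False: exact match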