Test post

Over the summer, Adam and I were talking and I said that I’d like a place to do some personal blogging as opposed to things I normally do, which are targeted at one place or another.

I’d like to be able to blither about security, but also about whatever. Photography, cooking, you know, things that most people who blog blog about.

We set this up and I have finally gotten around to making a test post.

So thank you, Adam and the rest of the jazz combo. I’m Jon Callas, and I’m on bari sax and English horn.

Published Data Empowers

There’s a story over at Bloomberg, “Experian Customers Unsafe as Hackers Steal Credit Report Data.” And much as I enjoy picking on the credit reporting agencies, what I really want to talk about is how the story came to light.

The cyberthieves broke into an employee’s computer in September 2011 and stole the password for the bank’s online account with Experian Plc, the credit reporting agency with data on more than 740 million consumers. The intruders then downloaded credit reports on 847 people, said Dana Pardee, a branch manager at the bank. They took Social Security numbers, birthdates and detailed financial data on people across the country who had never done business with Abilene Telco, which has two locations and serves a city of 117,000.

The incident is one of 86 data breaches since 2006 that expose flaws in the way credit-reporting agencies protect their databases. Instead of directly targeting Experian, Equifax Inc. and TransUnion Corp., hackers are attacking affiliated businesses, such as banks, auto dealers and even a police department that rely on reporting agencies for background credit checks.

This approach has netted more than 17,000 credit reports taken from the agencies since 2006, according to Bloomberg.com’s examination of hundreds of pages of breach notification letters sent to victims. The incidents were outlined in correspondence from the credit bureaus to victims in six states — Maine, Maryland, New Hampshire, New Jersey, North Carolina and Vermont. The letters were discovered mostly through public-records requests by a privacy advocate who goes by the online pseudonym Dissent Doe…

There are three key lessons here. The first is for those who still say breach data should be “anonymized, of course.” The second is for those who are fine with naming the victims, but think we’ve mined this ore and should move on to other things.

So, the first lesson: what enabled us to learn this? Obviously, it’s work by Dissent, but it’s more than that. It’s breach disclosure laws. We don’t anonymize the breaches; we report them.

These sorts of serendipitous discoveries are only possible when breaches and their details are reported. We don’t know in advance which details will matter, so ensuring that we get descriptions of what happened is essential. From that, we discover new things.

The second lesson is that this hard work is being done by volunteers, working with an emergent resource. (Dissent’s post on her work is here.) There are lots of good questions about what a breach law should be. Some proposals for 24-hour notice appear to have been drafted by people who’ve never talked to anyone who’s investigated a breach. There are interesting questions about active investigations, or those few cases where revealing information about the breach could enable attackers to hurt others. But it seems reasonably obvious that the effort put into gathering data from many sources is highly inefficient. That data ought to be available in one place, so that researchers like Dissent can spend their time learning new things.

The final lesson is one that we at the New School have been talking about for a while. Public data transforms our profession and our ability to protect people. If I may borrow a line, we’re not at the beginning of the end of that process, we’re at the end of the beginning, and what comes next is going to be awesome.

Compliance Lessons from Lance, Redux

Not too long ago, I blogged about “Compliance Lessons from Lance.” And now, there seems to be dramatic evidence of a massive program to fool the compliance system. For example:

Team doctors would “provide false declarations of medical need” to use cortisone, a steroid. When Armstrong had a positive corticosteroid test during the 1999 Tour de France, he and team officials had a doctor back-date a prescription for cortisone cream for treating a saddle sore. (CNN)

and

The agency didn’t say that Armstrong ever failed one of those tests, only that his former teammates testified as to how they beat tests or avoided the test administrators altogether. Several riders also said team officials seemed to know when random drug tests were coming, the report said. (CNN)

Apparently, this Lance and doping thing is a richer vein than I was expecting.

Reading about how Lance and his team managed the compliance process reminds me of what I hear from some CSOs about how they manage compliance processes.

In both cases, there’s an aggressive effort to manage the information made available, and to ensure that the worst picture the compliance folks can paint is “corrections are already underway.”

Serious violations become not something to address, but fuel for us-vs-them team formation. Management supports or drives a frame that puts compliance in conflict with business goals.

But we have compliance processes to ensure that sport is fair, or that the business is operating with some set of meaningful controls. The folks who impose the business compliance regime are (generally) not looking to drive make-work. (The folks doing the audit may well be motivated to make work, especially if that additional work is billable.)

When it comes out that the compliance framework is being managed this aggressively, people look at it askance.

In information security, we can learn an important lesson from Lance. We need to design compliance systems that align with business goals, whether those are winning a race or winning customers. We need compliance systems that are reasonable, efficient, and administered well. The best way to do that is to understand which controls really impact outcomes.

For example, Gene Kim has shown that three controls out of the 63 in COBIT are key, predicting nearly 60% of IT security, compliance, operational and project performance. That research, which benchmarked over 1,300 organizations, is now more than five years old, but the findings (and the standard) remain unchanged.

If we can’t get to reality-checking our standards, perhaps drug testing them would make sense.

TSA Approach to Threat Modeling, Part 3

It’s often said that the TSA’s approach to threat modeling is to just prevent yesterday’s threats. Well, on Friday it came out that:

So, here you see my flight information for my United flight from PHX to EWR. It is my understanding that this is similar to digital boarding passes issued by all U.S. Airlines; so the same information is on a Delta, US Airways, American and all other boarding passes. I am just using United as an example. I have X’d out any information that you could use to change my reservation. But it’s all there, PNR, seat assignment, flight number, name, etc. But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number means the number of beeps. 1 beep no Pre-Check, 3 beeps yes Pre-Check. On this trip as you can see I am eligible for Pre-Check. Also this information is not encrypted in any way.

Security Flaws in the TSA Pre-Check System and the Boarding Pass Check System.
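
To make that concrete, here’s a minimal sketch in Python of what “not encrypted in any way” means in practice: anyone with a phone camera and a barcode decoder can read the screening indicator straight out of the text. The payload and field position below are invented for illustration; this is not the exact IATA bar-coded boarding pass layout.

    # Hypothetical sketch: pulling the screening indicator out of a
    # boarding-pass barcode payload. The payload and field position are
    # invented for illustration, not the real IATA BCBP layout.
    def screening_indicator(barcode_text: str) -> str:
        digit = barcode_text.strip()[-1]  # the trailing "beeps" digit
        return {"1": "no Pre-Check (1 beep)",
                "3": "Pre-Check (3 beeps)"}.get(digit, "unknown")

    payload = "M1DOE/JANE  EABC123 PHXEWRUA 1234 289Y012A0001 3"  # made up
    print(screening_indicator(payload))  # -> Pre-Check (3 beeps)

The point of the sketch is simply that there’s no secret here: the selection status is plaintext, so anyone who can decode the barcode can read it before they ever reach a checkpoint.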

So, apparently, they’re not even preventing yesterday’s threats, ones they knew about before the recent silliness or the older silliness. (See my 2005 post, “What Did TSA Know, and When Did They Know It?”)

What are they doing? Comments welcome.

Proof of Age in UK Pilot

There’s a really interesting article by Toby Stevens at Computer Weekly, “Proof of age comes of age:”

It’s therefore been fascinating to be part of a new initiative that seeks to address proof of age using a Privacy by Design approach to biometric technologies. Touch2id is an anonymous proof of age system that uses fingerprint biometrics and NFC to allow young people to prove that they are 18 years or over at licensed premises (e.g. bars, clubs).

The principle is simple: a young person brings their proof of age document (Home Office rules stipulate this must be a passport or driving licence) to a participating Post Office branch. The Post Office staff member checks the document using a scanner, and confirms that the young person is the bearer. They then capture a fingerprint from the customer, which is converted into a hash and used to encrypt the customer’s date of birth on a small NFC sticker, which can be affixed to the back of a phone or wallet. No personal record of the customer’s details, document or fingerprint is retained either on the touch2id enrolment system or in the NFC sticker – the service is completely anonymous.
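
As a thought experiment, here’s a minimal sketch of that flow as described, assuming the fingerprint template hashes to a stable value. The article names no algorithms, so SHA-256 and Fernet (from the Python cryptography package) are my assumptions; and real fingerprint scans are noisy, so a production system would need a fuzzy extractor or matching step rather than a raw hash.

    # Hypothetical sketch of the enrolment flow as described above:
    # hash the fingerprint, use the hash as a key to encrypt the date
    # of birth, and store only the ciphertext on the NFC sticker.
    # SHA-256 and Fernet are assumptions; the article names no algorithms.
    import base64
    import hashlib
    from cryptography.fernet import Fernet  # pip install cryptography

    def key_from(template: bytes) -> bytes:
        return base64.urlsafe_b64encode(hashlib.sha256(template).digest())

    def enrol(template: bytes, date_of_birth: str) -> bytes:
        # Ciphertext goes on the sticker; no key or fingerprint is retained.
        return Fernet(key_from(template)).encrypt(date_of_birth.encode())

    def check_at_venue(template: bytes, sticker_blob: bytes) -> str:
        # Only the live finger can reconstruct the key.
        return Fernet(key_from(template)).decrypt(sticker_blob).decode()

The attraction of the design is that the date of birth is useless without the finger that unlocks it; the open question, which I come back to below, is whether the sticker’s own chip ID quietly reintroduces a persistent identifier.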

So first, I’m excited to see this. I think single-purpose credentials are important.

Second, I have a couple of technical questions.

  • Why a fingerprint versus a photo? People are good at recognizing photos, and a photo is a less intrusive mechanism than a fingerprint. Is the security gain sufficient to justify that? What’s the quantified improvement in accuracy?
  • Is NFC actually anonymous? It seems to me that NFC likely has a chip ID or something similar, meaning that the system is pseudonymous rather than anonymous.

I don’t mean to let the best be the enemy of the good. Not requiring ID for drinking is an excellent way to secure the ID system. See, for example, my BlackHat 2003 talk. But I think that support can be both rah-rah and a careful critique of what we’re building.

Running a Game at Work

Friday, I had the pleasure of seeing Sebastian Deterding speak on ‘9.5 Theses About Gamification.’ I don’t want to blog his entire talk, but one of his theses relates to “playful reframing,” and I think it says a lot about how to run a game at work, or a game tournament at a conference.

In many ways, play is the opposite of work. Play is voluntary, with meaningful choices for players. In work, we are often told, to some extent or other, what to do. You can’t order people to play. You can order them to engage in a game, and even make them go through the motions. But you can’t order them to play. At best, you can get them to joylessly minimax their way through, optimizing for points to the best of their ability. And that’s a challenge for someone who wants to use a game, like Elevation of Privilege or Control-Alt-Hack at work.

One of the really interesting parts of the talk was “how to design to allow play,” and I want to share his points and riff off them a little. His points are in bold, to the best of my scribbling ability.

  • Support autonomy. Autonomy, choice, self-control. As Carse says, “if you must play, you cannot play.” So if you want to have people play Elevation of Privilege in a tournament, you could have that as one track, and a talk at the same time. Then everyone in the tournament has a higher chance of wanting to be there.
  • Create a safe space. When everyone is playing, we agree that the game is for its own sake, and take fairness and sportsmanship into account. If the game has massive external consequences, players are less likely to be playful.
  • Meta-communicate: This is play. Let people know that this is fun by telling them that it’s ok to have fun and be silly, and that you’re going to do so yourself.
  • Model attitudes and behavior. Do what you just told them: have fun and show that you’re having fun.
  • Use cues and associations. Do things to ensure that people see that what you’re doing is a game. Elevation of Privilege does this with its use of physical cards with silly pictures on them, with each card having a suit and number, and in a slew of other ways.
  • Disrupt standing frames. A standing frame is the way people currently see the world. Sometimes, to get people into a game frame, you need to disrupt the frame they’re already in.
  • Offer generative tools/toys. A generative tool is one that allows people to do varied and unpredictable things with it. So a Rubik’s Cube is less generative than Legos. Of course, pretty much everything is less generative than Legos.
  • Underspecify. So speaking of Legos, you know how the Legos they made 30 years ago were just some basic shapes, and now and then a special curvy piece, while today it seems like every set has a stack of limited-use, specialized pieces? That’s the move from under-specification to over-specification. The more you specify, the less room you have for playful exploration.
  • Provide invitations. Invite people to come play, both literally and metaphorically.

The other element of his talk that I thought was really interesting with regard to Elevation of Privilege was how he discussed Caillois’ ludus/paidia continuum. Ludus is all about the structure of games: these rules, these activities, these scoring mechanisms, while paidia is about play. Consider kids playing with dolls: there are no rules, just unstructured interaction, exploration and tumultuousness.

In hindsight, Elevation of Privilege uses cues to bring people into a game space, but elements of the game (connecting threats to a system being threat modeled, rules for riffing on one another’s threats) are really more about playfulness than gamefulness.

The Boy Who Cried Cyber Pearl Harbor

There is, yet again, someone in the news talking about a cyber Pearl Harbor.

I wanted to offer a few points of perspective.

First, on December 6th, 1941, the United States was at peace. There were worries about the future, but no belief that a major attack was imminent, and certainly not a sneak attack. Today, it’s very clear that successful attacks are a regular and effectively accepted part of the landscape. (I’ll come back to the accepted part.)

Second, insanity. One excellent definition of insanity is doing the same thing over and over again and expecting different results. Enough said? Maybe not. People have been using this same metaphor since at least 1991 (thanks to @mattdevost for the link). So those Zeros have got to be the slowest cyber-planes in the history of the cybers. It’s insanity to keep using the metaphor, but it’s also insanity to expect that the same approaches that have brought us where we are will take us anywhere different.

Prime amongst the ideas that we need to jettison is that we can get better in an ivory tower, or a secret fusion center where top men are thinking hard about the problem.

We need to learn from each other’s experiences and mistakes. If we want to avoid having the Tacoma Narrows bridge fall again we need to understand what went wrong. When our understanding is based on secret analysis by top men, we’re forced to hope that they got the analysis right. When they show their work, we can assess it. When they talk about a specific bridge, we can bring additional knowledge and perspective to bear.

For twenty years, we’ve been hearing about these problems in these old school ways, and we’re still hearing about the same problems and the same risks.

We’ve been seeing systems compromised, and accepting it.

We need to stop talking about Pearl Harbor, and start talking about Aurora and its other victims. We need to stop talking about Pearl Harbor, and start talking about RSA, not in terms of cyber-ninjas, but about social engineering, the difficulties of upgrading or the value of defense in depth. We need to stop talking about Pearl Harbor and start talking about Buckshot Yankee, and perhaps why all those SIPRnet systems go unpatched. We need to stop talking about the Gawker breach passwords, and start talking about how they got out.

We need to stop talking in metaphors and start talking specifics.

PS: If you want me to go first, ok. Here ya go.
