Systems Not Sith: Organizational Lessons From Star Wars

In Star Wars, the Empire is presented as a monolith. Stormtroopers, TIE fighters and even Star Destroyers are supposedly just indistinguishable cogs in a massive military machine, single-mindedly pursuing a common goal. This is, of course, a façade – like all humans, the soldiers and officers of the Imperial military will each have their own interests and loyalties. The Army is going to compete with the Navy, the fighter jocks are going to compete with the Star Destroyer captains, and the AT-AT crews are going to compete with the stormtroopers.

Read the whole thing at “Overthinking It”: “Systems, Not Sith: How Inter-service Rivalries Doomed the Galactic Empire.” And if you missed it, my take on security lessons from Star Wars.

Thanks to Bruce for the pointer.

Base Rate & Infosec

At SOURCE Seattle, I had the pleasure of seeing Jeff Lowder and Patrick Florer present on “The Base Rate Fallacy.” The talk was excellent, laying out the idea of the base rate fallacy and how and why it matters to infosec. What really struck me about this talk was that about a week before, I had read a presentation of the fallacy with exactly the same example in Kahneman’s “Thinking, Fast and Slow.” The problem: you have a witness who’s 80% accurate describing a taxi as orange; what are the odds she’s right, given certain facts about the distribution of taxis in the city?

I had just read the discussion. I recognized the problem. I recognized that the numbers were the same. I recalled the answer. I couldn’t remember how to derive it, and got the damn thing wrong.
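For the curious, the derivation is a single application of Bayes’ theorem. Here’s a minimal sketch, assuming the numbers from Kahneman’s version of the problem (a witness who’s 80% accurate, in a city where only 15% of the taxis are the color she names):

```python
def posterior(base_rate, accuracy):
    """P(taxi really is that color | witness says it is), via Bayes' theorem."""
    true_positive = accuracy * base_rate              # right color, witness correct
    false_positive = (1 - accuracy) * (1 - base_rate) # wrong color, witness mistaken
    return true_positive / (true_positive + false_positive)

print(round(posterior(0.15, 0.80), 2))  # 0.41
```

Despite the witness being 80% accurate, the low base rate drags the answer down to about 41%, which is exactly the trap the fallacy describes: it’s easy to anchor on the 80% and ignore the 15%.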

Well played, sirs! Game to Jeff and Patrick.

Beyond that, there’s an important general lesson in the talk. It’s easy to make mistakes. Even experts, primed for the problems, fall into traps and make mistakes. If we publish only our analysis (or worse, merely engage in private “information sharing”), then others can’t see what mistakes we might have made along the way.

This problem is exacerbated in a great deal of work by a lack of a methodology section, or a lack of clear definitions.

The more we publish, the more people can catch one another’s errors, and the more the field can advance.

Lessons from Facebook's Stock Slide

So as Facebook continues to trade at a little over half of its market capitalization of three months ago, I think we can learn a few very interesting things. My goal here is not to pick on Facebook, but rather to see what we can take away and perhaps apply elsewhere. There are three key lessons:

  • The Privacy Invasion Gnomes are Wrong
  • Intent Beats Identity
  • Maximizing your IPO returns may be a short term strategy

Let me start with the “Privacy Invasion Gnomes.” The short form of their strategy is:

  1. Gather lots of data on people
  2. ???
  3. Profit

This is, of course, a refinement of the original Gnome Strategy. But what Facebook shows us is:

The Privacy Invasion Gnomes are Wrong

Gathering lots of data on people is a popular business strategy. It underlies a lot of the advertising that powers breathless reporting on the latest philosophical treatise by Kim Kardashian or Paris Hilton.

But what Facebook shows us is that just gathering data on people is actually insufficient as a business strategy, because knowing that someone is a Democrat or Republican just isn’t that valuable. It’s hard to capitalize on knowing that a user is Catholic or Mormon or Sikh. There’s a limit to how much money you make being able to identify gays who are still in the closet.

All of which means that the security industry’s love affair with “identity” is overblown. In fact, I’m going to argue that intent beats identity every time you can get it, and you can get it if you…keep your eye on the ball.

Intent beats Identity

The idea that if you know someone, you can sell them what they need is a powerful and intuitive one. We all love the place where everyone knows your name. The hope that you can translate it into an algorithm to make it scale is an easy hope to develop.

But many of the businesses that are raking in money hand over fist on the internet aren’t doing that. Rather, they’re focused on what you want right now. Google is all about that search box, and they turn your intent, as revealed by your search, into ads that are relevant.

Sure, there’s some history now, but fundamentally, there’s a set of searches (like “asbestos” and “car insurance”) that are like kittens thrown to rabid wolves. And each of those wolves will pay to put an ad in front of you. Similarly, Amazon may or may not care who you are when they get you to buy things. Your search is about as direct a statement of intent as it gets.

Let me put it another way:
Internet company revenue per user

The graph is from Seeking Alpha’s post, “Facebook: This Is The Bet You Are Making.”

So let me point out that two of these companies, Facebook and LinkedIn, have great, self-reinforcing identity models. Both use social pressure to drive self-representation on the site to match self-representation in various social situations. That’s pretty close to the definition of identity. (In any event, it’s a lot closer than anyone who talks about “identity issuance” can get.) And both make about 1/9th of what Google does on intent.

Generally in security, we use identification because it’s easier than intent, but what counts is intent. If a fraudster is logging into Alice’s account, and not moving money, security doesn’t notice or care (leaving privacy aside). If Alice’s husband Bob logs in as Alice, that’s a failure of identity. Security may or may not care. If things are all lovey-dovey, it may be fine, but if Bob is planning a divorce, or paying off his mistress, then it’s a problem. Intent beats identity.
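To make the contrast concrete, here’s a toy sketch (the field names and rules are hypothetical, not any real fraud system’s logic): an intent-centered monitor flags a session by what it does, not by who appears to be logged in.

```python
def should_alert(session):
    """Flag a session based on intent (risky actions), not identity anomalies.

    A new device or odd location alone doesn't fire; trying to move
    money out of the account does, no matter who is logged in.
    """
    risky_actions = {"wire_transfer", "add_payee", "change_payout_account"}
    return any(action in risky_actions for action in session["actions"])

print(should_alert({"user": "alice", "actions": ["view_balance"]}))   # False
print(should_alert({"user": "alice", "actions": ["wire_transfer"]}))  # True
```

Under rules like these, the fraudster who logs in and does nothing stays invisible, and Bob paying off his mistress from Alice’s account gets flagged, which is the point: the alert tracks intent, not identity.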

Maximizing your IPO returns may be a short term strategy

The final lesson is from Don Dodge, “How Facebook maximized the IPO proceeds, but botched the process.” His argument is a lot stronger than the finger-pointing in “The Man Behind Facebook’s I.P.O. Debacle“. I don’t have a lot to add to Don’s point, which he makes in detail, so you should go read his piece. The very short form is that by pricing as high as they did, they made money (oodles of it) on the IPO, and that was a pretty short-term strategy.

Now, if Facebook found a good way to get intent-centered, and started making money on that, botching the IPO process would matter a lot less. But that’s not what they’re doing. The latest silliness is using your mobile number and email to help merchants stalk, er, find you on the site. That program represents a triumph of identity thinking over intent thinking. People give their mobile numbers to Facebook to help secure their accounts. Facebook then violates that intent to use the data for marketing.

So, I think that’s what we can learn from the Facebook stock slide. There may well be other lessons in an event this big, and I’d love to hear your thoughts on what they might be.

What can we learn from the social engineering contest?

I was struck by the lead of Kelly Jackson Higgins’ article on the Defcon Social Engineering Contest:

Walmart was the toughest nut to crack in last year’s social engineering competition at the DefCon hacker conference in Las Vegas, but what a difference a year makes: this year, the mega retailer scored the worst among the 10 major U.S. corporations unknowingly targeted in the contest.

So “time” is a fascinating place to put the credit.

“Last year, the retailers just shut us down big-time, but this year, retail was the most forthcoming,” says Chris “Logan” Hadnagy, a professional social engineer with social-engineer.org who heads up the contest. Walmart and Target ended up with the highest scores, which means they did the worst, he says, with Walmart gaining the dubious distinction of performing the worst by exposing the most information both online and when its employees were cold-called by the social engineering contestants.

It’s almost enough to make me question if a social engineering contest is a replicable test.

Don’t get me wrong. I don’t mean the lack of rigor as a condemnation: real-world social engineers will fluidly shift tactics to whatever they think will work, and if you don’t have good technology and processes in place, you’re likely to lose to an APT (amateur persistent talker). But it does raise questions of what we can learn from a contest.

At the same time, I don’t think that a contest structured like this is intended to compare year-on-year performance of an organization.

So what can we learn from the contest?

  • Social engineering works. This may appear to be a “duh”, but we need to start from there because:
  • Our defenses often don’t work. As I discussed at Black Hat, and blogged recently, we need to go beyond the computer and think about the set of attacks that work, not the set of attacks we’re interested in addressing.
  • We should fix that. Even though it’s hard. It’s part of the job your management expects of the security team. Exposing offensive and defensive techniques might create a feedback loop and let us learn to do better.

So let’s look at one of the elements exposed in the contest, and think about how to address it. Let’s start with the “who’s your cleaning company” questions. These are classic. You find out who the cleaning company is, ebay yourself a uniform, and boom, walk through the door to collect circular filing data. Frank Abagnale did it in the 60s, except he used uniform supply companies. And we still, 40 years later, fall for the same sorts of tricks, because we focus too much on the computers and don’t think well about these attacks and the defenses.

Obviously, the uniform attacks matter more if you don’t have badges and strong issuance processes. But even with those, you also need to motivate your cleaning company employees to question someone who shows up without a badge and tries to work. That’s tricky, because it’s unnatural and feels confrontational and suspicious. What’s more, sending a new team member home means each of the remaining cleaners will work that much harder. But they’re part of your security perimeter, so what do you do?

You motivate them. Give them a carrot for asking good questions. How? Let’s say someone shows up without a badge. Have a checklist for your receptionist and cleaning crew. Have them call security or a manager. When they do, reward them: a $25 gift card or some other pat on the back. That way, when the social engineer shows up, they get questioned. Maybe it’s apologetic, but you’re aligning their interests with yours, and giving them social reasons to overcome the awkwardness that a question can entail.

In my day job, I’ve spent some time thinking about how to make effective training, and some of that thinking has gone into, and come out of, this work. For more, including properties of good advice, see “Zeroing in on Malware Propagation Methods,” starting on page 29.

Compliance Lessons from Lance

Recently, Lance Armstrong decided to forgo arbitration in his fight against the USADA over allegations of his use of certain performance enhancing drugs. His statement is “Full text of Armstrong statement regarding USADA arbitration.” What I found interesting about the story is the contrast between what might be termed a “compliance” mindset and a “you’re never done” mindset.

The compliance mindset:

I have never doped, and, unlike many of my accusers, I have competed as an endurance athlete for 25 years with no spike in performance, passed more than 500 drug tests and never failed one. — “Lance Armstrong Responds to USADA Allegation”

Lance’s fundamental argument is that what matters is the tests that were performed, in accordance with the rules as laid out by the USADA, and that he passed them all.

Now, there are some pretty specific allegations of cheating, and we can and should think critically about what his former teammates and now authors have to gain by bringing up these allegations.

But there’s a level at which those motivations have nothing to do with the facts. Did they accept delivery of certain banned performance enhancers? (I’m using that phrase because there are lots of accepted performance enhancers, like coffee and Gatorade, and I think some of the distinctions are a little silly. However, that’s not the focus of this post.)

What I’d like to talk about is the damage that can come from both the compliance mindset and the “you’re never done” mindset, and what we can take from them.

The compliance mindset is that you perform some set of actions, you pass or fail, and you’re done. (Well, if you fail, you put in place a program to rectify it.) The “you’re never done” mindset, which the USADA is illustrating, is a pursuit of perfection of which I’ve sometimes been guilty: “You’re never fully secure!” “You have to keep looking for problems!”

Neither is the only way to be. In Lance’s case, I think there’s a simple argument: the USADA did its best at the time to ensure a fair race. Lance won a lot of those races. The Orwellian re-write of the official histories by the Ministry of Drugs doesn’t change history.

What matters is the outcome, and in racing, there’s a defined finish line. You make it across the line first, you win. In systems, there’s less of a defined line, but if you make it through another day, week, year without being pwned, you’re winning. All the compliance failures not exploited by the bad guys are risks taken and won. You made it across the finish line.

What’s ugly about the Lance vs USADA case is that it really can’t be resolved.

There are probably more interesting compliance lessons in this case. I’d love to hear what you think they are.

Smashing the Future for Fun and Profit

I’d meant to post this at Black Hat. I think it’s worth sharing, even a bit later on:

I’m excited to have been a part of a discussion with others who spoke at the first Black Hat: Bruce Schneier, Marcus Ranum, Jeff Moss, and Jennifer Granick. We’ve been asked to think about what the future holds, and to take lessons from the last 15 years.

I have three themes I want to touch on during the panel:

Beyond the vuln: 15 years ago Aleph One published “Smashing the Stack for Fun and Profit.” In it, he took a set of bugs and made them into a class, and the co-evolution of that class and defenses against it have in many ways defined Black Hat. Many of the most exciting and cited talks put forth new ways to reliably gain execution by corrupting memory, and others bypassed defenses put in place to make such exploitation harder or less useful. That memory corruption class of bugs isn’t over, but the era ruled by the vulnerability is coming to an end. That’s going to be challenging in all sorts of ways, some of which we can predict. First, researcher/organization conversations are going to become even harder, because some things are going to be less clearly bugs, and some will be harder to fix without breaking functionality. Second, secure development activity is going to need to drive threat modeling the way we’ve driven static analysis, and that’s hard because it involves people and their thought patterns. More on that in a second. Third, attackers are going to move more and more to social engineering attacks, and that brings me to my second main point.

Beyond the Computer: We’re going to see more and more attacks that target people, and many people are going to hate those talks. The Review Board shares the idea that talks about buying UPS uniforms on eBay suck, and we don’t want them at Black Hat. At the same time, we’re going to need to go where attackers go, and if that’s people, we need to start to learn about people in deeper, and less condescending, ways. (This is the end of people claiming UI doesn’t matter by saying “you can’t patch stupid.”) We’re going to need to understand psychology, sociology, cognitive science and more. At the same time as we’re learning to understand people, we’re going to need to learn to influence them. We’re going to need to stop relying on sticks, and start learning to use carrots. This is why I’ve been engaged in games for the last few years, from Elevation of Privilege to Control-Alt-Hack and others. Getting people to want what we want, rather than grudgingly acquiesce, is going to be a key factor in our success as individuals and as a profession.

Beyond Tittering at Victims: We’ve been hearing for a year or so that we should all just assume breach. That breaches are common and should be expected. Over the next few years, we’re going to go from giggling about breaches to learning from them. We’re going to see more and more details come out about what happened, and we’re going to learn from one another’s mistakes. We’re going to start creating feedback loops that allow us to get better faster, and move away from flaming one another over opinions to arguing over statistical methodologies. My article “The Evolution of Information Security” shows how close we are to getting to a data-driven science of security, and that transformation will involve many sacred cows being roasted, and many of today’s practices abandoned because we’ll show that they don’t work, and more importantly, we’ll be able to see how to replace them.

The Very Model of An Amateur Grammarian

I am the very model of an amateur grammarian
I have a little knowledge and I am authoritarian
But I make no apology for being doctrinarian
We must not plummet to the verbal depths of the barbarian

I’d sooner break my heart in two than sunder an infinitive
And I’d disown my closest family within a minute if
They dared to place a preposition at a sentence terminus
Or sully the Queen’s English with neologisms verminous

For the full sing-along, please see Tom Freeman’s “The very model of an amateur grammarian.”

One more request for help

If someone could suggest a specific way to make the blog title image work to bring you to the home page, that’d be most appreciated.

Update: I think I’ve fixed most of it.

Thanks in particular to commenter “M”, who got me on the path to the fix, removing the inline CSS that the theme put in place via a php function (huh?). Thanks also to @optiqal, @pogowasright, @bwittorf and @37point2 for looking at the issue and offering helpful suggestions.

Theme breakage, help?

The blog header image is repeating because of something in the stylesheets. I can’t see where the bug is. If someone can help out, I’d be much obliged.

Expanded to add: It appears that there’s a computed “repeat” on the background image which is the header, but why that repeat is being computed is unclear to me, and attempts to insert explicit no-repeats in various ways have not overridden the behavior.