Shostack + Friends Blog Archive


Laboratories of Security?

There’s a story in USA Today, “Most fake bombs missed by screeners.” It describes how screeners at LAX find only 25% of fake bombs; at ORD, they find 40%; and at SFO, 80%:

At Chicago O’Hare International Airport, screeners missed about 60% of hidden bomb materials that were packed in everyday carry-ons — including toiletry kits, briefcases and CD players. San Francisco International Airport screeners, who work for a private company instead of the TSA, missed about 20% of the bombs, the report shows. The TSA ran about 70 tests at Los Angeles, 75 at Chicago and 145 at San Francisco.

I could go on at length about how bad air travel has gotten, and how security theatre is crushing the travel and tourism industries in the US. Rather, I’d like to focus on the emergent chaos aspects of this story: the reality that even the TSA bureaucracy can’t impose uniform standards on airports, and why that would be a good thing, if they could accept it.

Before I do, I want to comment that missing 75% of the bombs is probably ok. Very few airliners have been bombed in the US; I think fewer than 10 in history. So the issue is not really false negatives, where the screener misses a (fake) bomb, but false positives, where the screener shuts down either someone’s day or the airport. Given that every single bomb smuggled past security at US airports last year was fake, fake bombs are far more likely than real ones.
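The base-rate reasoning above can be made concrete with a small sketch. All three numbers below are illustrative assumptions, not figures from the article (the 80% detection rate echoes the SFO result, but the false-alarm rate and bomb prevalence are simply placeholders chosen to show the shape of the problem): when real bombs are vanishingly rare, essentially every alarm is a false alarm.

```python
# Hypothetical base-rate sketch: how often does an alarm mean a real bomb?
detection_rate = 0.80    # P(alarm | real bomb) -- SFO-like, for illustration
false_alarm_rate = 0.01  # P(alarm | no bomb)   -- assumed
bomb_prevalence = 1e-9   # P(real bomb per bag) -- assumed, "fewer than 10 in history"

# Total probability of an alarm on a random bag
p_alarm = (detection_rate * bomb_prevalence
           + false_alarm_rate * (1 - bomb_prevalence))

# Bayes' rule: probability an alarm corresponds to a real bomb
p_bomb_given_alarm = detection_rate * bomb_prevalence / p_alarm

print(f"P(alarm) = {p_alarm:.4f}")
print(f"P(real bomb | alarm) = {p_bomb_given_alarm:.2e}")
```

With these made-up inputs, roughly one bag in a hundred triggers an alarm, yet the chance that any given alarm is a real bomb is on the order of one in ten million, which is why the cost of the system is dominated by false positives rather than missed bombs.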

Now, there’s an opportunity for dramatic improvement in the way we run airport security. “Just run them all like they run SFO!” Orin Kerr makes this point, “I would think the real story is the dramatic gap between the performance of TSA employees and private sector employees.”

More importantly, what comes out of this study for me is the emergent chaos of running a large mission like airport security, and the value of that variation for learning.

If all airports were run exactly the same, we’d have missed this opportunity for learning.

So ask yourself, what do I standardize on too much? Where is there too much structure, inhibiting learning? How can we harness chaos, and what emerges? (I talk in more detail about a very similar point in the latest post in my threat modeling series on the SDL blog, “Making Threat Modeling Work Better.”)

Photo: Frisk, by Tim Whyers. (Machine by Tim Hunkin, we’ve mentioned it previously.)

4 comments on "Laboratories of Security?"

  • Yes, let’s privatize airport security, just like the good old days of September 11, 2001.

  • Ken says:

    I truly need to disagree with you. I am also surprised by your lax attitude on this one 🙂
    Since 9/11, the level of attention and focus on the US by groups who play with bombs has increased significantly.
    I would not downplay the potential for a bomb on a US aircraft. Clearly, there is a higher potential for a hijacking; this was proved by the hijacking of 4 planes in 1 day on 9/11.
    I believe that it is only a matter of time, and a short time at that, before the US starts experiencing more serious attempts on its aircraft. It is clear that the crazies have been testing for soft spots in security. Soon they will succeed. I can only hope that you, the US, do not experience what we have seen in the Middle East.

  • David Brodbeck says:

    The job of airport security isn’t actually to make it impossible to smuggle a bomb onto an aircraft — that can’t be done at any reasonable cost. Rather, their job is to make it difficult enough, and the risk of failure high enough, that trying to smuggle one on looks like a poor return on investment for a terrorist group.
    I think they’re probably doing a good enough job that the next attack will take a different form. Britain, where we saw not an attempt to smuggle a bomb onto an airplane, but rather an attempt to bomb the terminal, may be a hint of the sort of thing we should be worrying about.

  • Chris says:

    You seem to think that private screening would be worse than what we have now. I’d be curious what actual evidence you have for this position. Near as I can tell, the rate at which actual bad guys have been stopped due to gate screening is statistically indistinguishable today compared to pre-9/11. The reason, of course, is that very very very few bad guys get to the gate to begin with.
    Adam’s general point, I think, is that the large range in efficacy across airports provides an opportunity to study what factors may have led SFO to be more effective and to apply those lessons elsewhere. This approach is worthwhile regardless of whether private, public, or some hybrid eventually is demonstrated empirically to be better.
    The point, if I may belabor it, is that policy decisions with a wide impact and costs in the billions are by definition important. Important decisions should, wherever feasible, be supported by evidence. Here, we have that evidence. Not using it leaves us one step closer to superstition or folklore than we need to be.
    Finally, let me say that the analysis I describe only yields useful information if:
    1. The fake bombs/bombers are similar to the real ones we face.
    2. There are enough of those real ones to justify the cost of interdiction.
    Neither of these is obviously true. An intelligent policymaker would try to assess their truth before embarking upon a redesign of our gate-screening processes.

Comments are closed.