In Educated Guesswork, Eric Rescorla writes about one-way tickets and the search criteria.
The CAPPS program was created by Northwest Airlines, who set the criteria for inclusion. They included one-way tickets to enforce their bizarre pricing schemes. This is the same reason they started asking for ID: to cut down on the resale of the other half of a round-trip ticket, which cost the same as a one-way.
CAPPS, incidentally, has been renamed the “free wheelchairs for paraplegic children” program, to make it harder to argue against, and to get around a congressional mandate that the program not be deployed until someone actually thinks it through.
In his comment, Kevin Dick gets it mostly right: there are other items that you want to keep off the planes (pepper spray, for example), but the right technique in a free society involves enabling passengers to fight for their lives, and fortifying the flight deck. There’s a lot that could be done that hasn’t been. For example, consider an “airlock” system, with two doors at the front of the plane and a restroom inside. The doors open one at a time. There may be an air marshal inside. (A curtain prevents anyone from seeing.) Now hijackers need to get through two doors, and they can’t storm the cockpit while the pilots are being fed or using the restroom. This is very expensive: it would require a new bathroom for the high-revenue business travelers up front, and it makes a section of the plane unusable for revenue generation. But it is entirely free of civil liberties implications for fliers.
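The heart of the airlock is a simple interlock invariant: at most one door may be open at any moment. Here’s a minimal sketch in Python; the class and method names are my own, purely illustrative:

```python
class Airlock:
    """Toy interlock: at most one of the two doors may be open at once."""

    def __init__(self):
        self.open_door = None  # None, "outer", or "inner"

    def try_open(self, door: str) -> bool:
        """Open a door only if the other one is closed."""
        if self.open_door is None:
            self.open_door = door
            return True
        return False  # interlock refuses: the other door is still open

    def close(self, door: str) -> None:
        if self.open_door == door:
            self.open_door = None
```

With that invariant enforced, reaching the cockpit means defeating two doors in sequence rather than rushing one, which is the whole point of the design.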
Over at Freedom To Tinker, Ed Felten writes about the Wikipedia quality debate.
He takes a sampling of six entries where he’s competent to judge their quality, and assesses them. Two were excellent, one was slightly inaccurate, two were more in-depth but perhaps less accessible than a standard encyclopedia, and one (on the US Microsoft anti-trust case) was full of errors.
Ed writes: “Until I read the Microsoft-case page, I was ready to declare Wikipedia a clear success.” However, I think his experiment is only one-third to one-half done. I think that Ed ought to look up the same six entries in another encyclopedia or two, and report back. I’d suggest the Britannica, which is usually considered the gold standard, and perhaps Microsoft’s Encarta, which may be the most widely used.
I can’t do this experiment the way Ed can: first, I don’t have an EB account, and second, I don’t know all the topics to the depth he does. I could pretend, and perhaps miss errors that he’d catch, or sample six other articles, and perhaps I will over the weekend.
Over at TaoSecurity, Richard writes:
Remember that one of the best ways to prevent intrusions is to help put criminals behind bars by collecting evidence and supporting the prosecution of offenders. The only way to ensure a specific Internet-based threat never bothers your organization is to separate him from his keyboard!
First, I’m very glad that the second, qualifying sentence is there; it provides some context. However, I’m not sure that I care that a specific threat stops; what I care about is that the class of threats goes away.
If the odds that a specific criminal hacker goes to jail are low, then the penalties need to be exceptionally severe and well publicized to create a deterrent effect. (This is roughly a criminal attack loss expectancy, which someone smart has done work on.)
We can see that the odds that an attacker goes to jail are relatively small because there is clearly a large attacker population, and very few criminal sentencings. I’m curious: how many attacker convictions would we need each year to change the economics of this and deter 15-year-olds from bringing down CNN?
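The deterrence argument is just expected value: an attacker is deterred when the probability of conviction times the penalty outweighs the expected gain. A toy calculation in Python, with all numbers made up for illustration:

```python
def expected_penalty(p_conviction: float, penalty_years: float) -> float:
    """Expected cost to the attacker, in sentence-years."""
    return p_conviction * penalty_years

# Made-up numbers: at a 1-in-1000 conviction rate, even a ten-year
# sentence yields a tiny expected cost to the attacker.
low_odds = expected_penalty(0.001, 10)   # 0.01 expected sentence-years

# To hold deterrence constant as the conviction rate falls, the
# penalty must rise in inverse proportion.
high_odds = expected_penalty(0.01, 1)    # same 0.01 expected sentence-years
```

Which is exactly why low conviction odds push toward the “exceptionally severe and well publicized” penalties mentioned above.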
I’ve recently finished The Man Who Shocked the World, a biography of Stanley Milgram. The book’s title refers to the “Authority Experiments,” wherein a researcher pressured a subject to deliver shocks to a victim. The subjects of the experiments, despite expressing feelings that what they were doing was wrong, were generally willing to continue.
Other work Milgram did led to the “six degrees of separation” meme, insight into mental maps of cities, the “lost letter” technique of assessing public opinion, and the concept of the “familiar stranger.” He was outstanding at creating illuminating experiments in social science.
I learned in reading this book that Milgram had enormous difficulty getting grants. The review committees that act as gatekeepers for government grants wanted him to work from a theory. (It’s not clear from the book whether they thought research should support a theory, or correctly understood that great research involves undercutting a theory.)
There’s an interesting tie to computer security here, in that there is a group of researchers who do nothing but interesting experiments, whose results and replicability are shared through what is variously called demonstration code, “POC” (proof-of-concept), or “sploit” (short for exploit) code. Many of these researchers use pseudonyms in their publication, and are considered annoying by the computer security establishment (both commercial and academic), whose work they poke holes in.
In contrast, I think these researchers do an important service by demonstrating how security can be broken. If you consider the hypothesis “This software is resistant to attack,” a few bytes of exploit code is an elegant refutation.
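To make the refutation idea concrete, here’s a toy in Python. The `check_pin` function and its bug are entirely hypothetical, standing in for “this software”; the point is that a single counterexample input demolishes the resistant-to-attack hypothesis:

```python
# Hypothesis under test: "this PIN check is resistant to attack."
def check_pin(stored: str, entered: str) -> bool:
    # Bug: startswith means an empty entry matches any stored PIN.
    return stored.startswith(entered)

# The "exploit": zero bytes of input refute the hypothesis.
print(check_pin("8421", ""))  # True: access granted with no PIN at all
```

No amount of argument that the software is secure survives one input like that, which is why a working proof-of-concept is such an efficient form of evidence.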
I’m reading through NIST SP-800-70 (pdf), the NIST guide to producing security configuration guides. Let me get more coffee before I continue. Thanks for waiting.
“If home users and other users without deep security expertise attempt to apply High Security checklists to their systems, they would typically experience unwanted limitations on system functionality and possibly unrecoverable system damage.”
Can someone explain to me how you can break a system that badly? I mean, sure, it can be hard to get a new boot block, or a new kernel in place, but once you do, you can recover things.
I’m very down on a system message that implies that modifying your computer can cause unrecoverable damage. It inherently inhibits tinkering, perhaps even more than laws do. After all, we see how effective laws against sharing music or drugs are. But scaring someone into not touching that config file with the threatened loss of all their data? There’s a security measure for you!
Or, if you prefer, the original can be found elsewhere. It’s always nice when things I want to abuse like that are in the public domain. (Obligatory Lessig link.)
But beyond that, think how much poorer literature in the computer science field would be if we didn’t have Alice In Wonderland to freely quote from, adapt, and re-imagine.
On the other hand, I think we might have ended up with Adam and Bob talking instead of Alice and Bob (pdf). (For both non-cryptographers in the audience: very early in the public academic study of cryptography, the paper that introduced the RSA system used “Alice” and “Bob” to represent the two people trying to communicate in secret. Alice and Bob have been with us ever since, in their ongoing attempts to have a conversation, plot a rebellion, communicate while in jail, and play poker long distance.)
“The time has come,” the Walrus said,
“To talk of many things:
Of shoes–and ships–and sealing-wax–
Of cabbages–and kings–
And why the sea is boiling hot–
And whether pigs have wings.”
“But wait a bit,” the Oysters cried,
“Before we have our chat;
For some of us are out of breath,
And all of us are fat!”
“No hurry!” said the Carpenter.
They thanked him much for that.
“A full text RSS,” the Walrus said,
“Is what we chiefly need:
Excerpts and quotes besides
Are very good indeed–
Now if you’re ready, Oysters dear,
We can begin to feed.”
Bruce Schneier has written insightfully about Olympic security. They’ve spent $1.5 billion, and today’s marathon race was marred by some idiot leaping into the path of the front-runner, and dragging him into the crowd. It’s always tempting, and usually wrong, to say that any failure of security could have been prevented.
However, this Olympics has seen a large investment in protecting the sponsors’ brands. (See here or here.) I’d be very curious to see how much was spent on this “brand protection” in comparison to, say, protecting the Olympics’ own brand as a business endeavor. It seems that the money might have been mis-allocated if scorn for the Olympics grows because of this sort of thing.
Frank Sanache was one of eight Meskwaki code talkers. He served in North Africa, and was captured by the Germans. I’m fairly interested in the history of code talkers, and had missed the Army’s use of them.
It turns out that there were code talkers in the First World War, but German civilians had traveled to the US to learn native languages, and so the system was considered suspect. The Navy claims to have perfected the system with the use of the (more) famous Navajo.
I find the code talker story fascinating because of the confluence of factors that made it important, and the factors that cause it to no longer be relevant. Code talkers mattered greatly because of the rise of radio, and the broadcasting of plans. Anyone familiar with radio reception wanted private communication for their plans. But all the cryptosystems of the day were either slow and cumbersome or useless for more than a few minutes’ security. The realization that native languages could address these issues was a very clever hack. Today, we have clever cryptosystems in the radio chips that make all of this less interesting. The military also has automatic transcription and translation tools. You can see some of them in action via the TIDES world press reports. They’re not perfect, but it seems that they could perhaps defeat code-talking.
None of which is to detract from the outstanding service that the code talkers gave to the United States.
From Wampum via Weblogsky.