I just finished an interesting paper: K. Koscher, A. Juels, T. Kohno, and V. Brajkovic, “EPC RFID Tags in Security Applications: Passport Cards, Enhanced Drivers Licenses, and Beyond.”
In the paper, they analyze issues of cloning (easy), read ranges (longer than the government would have you believe), and “design drift” (a nice way of saying that the Washington State EDL can be read in its protective sleeve). But that’s not what I wanted to talk about. What I want to talk about is the strikingly experimental nature of the paper, and how unfortunately rare that seems to be. Throughout the paper, the authors describe what they did and what they observed. (“…we used an Impinj Speedway R1000 reader with a Cushcraft S9028PCL circularly polarized antenna…” “The TID reported by our Passport Card is E2 00 34 11 FF B8 00 00 00 02…”)
In far too many papers which purport to be about computer security, there’s a lack of hard detail. Take, for example, my own “Experiences Threat Modeling at Microsoft.” While I’m happy with the paper, and it explains a great deal about what we’ve learned, it doesn’t contain nearly as much measurement of threat models as I would have liked. (Of course, figuring out what to measure about threat models was one of the goals of the paper.)
For another example, take the widely reported upon “Overwriting Hard Drive Data: The Great Wiping Controversy,” which doesn’t so much as report what equipment was used. I would not rely on that paper, not only because of its lack of detail or its authors wearing their bias on their sleeves, but because demonstrating that Wright, Kleiman, and Sundhar can’t figure out how to read an overwritten disk is not the same as showing that no one could figure it out. Had they explained how to figure it out, that would have been far more conclusive and interesting.
It shouldn’t be striking when a paper describes its experiments and reports the facts observed. That ought to be the norm.