Secret Stash: responses to DoC/NIST 'Cybersecurity and Innovation in the Internet Economy' Notice of Inquiry

For those of you who keep up with the latest public-private dialog on cyber security research and policy, you might be interested in reading the submitted responses to the Notice of Inquiry, which are now available on the NIST web site. Unfortunately, there seems to be no notification that these files are publicly available and no web page listing all the submissions. Therefore, unless you know they are there, you won’t find them.

But you can find them all through Google using this search string because they put “NOI” into every file name:

NOI site:http://www.nist.gov/itl/upload/

You’ll see official submissions from Microsoft, IBM, Google, VeriSign, Cisco, TechAmerica and the US Chamber of Commerce, plus a few submissions from crazy individuals like me.

ID theft, its Aftermath and Debix AfterCare

In the past, I’ve been opposed to calling impersonation frauds “identity theft,” and I’ve wondered why the term impersonation isn’t good enough. But as anyone who’s read the ID Theft Resource Center’s ‘ID Theft Aftermath’ reports (2009 report) knows, a lot of the problem with long-term impersonation is the psychological impact of disassociation from your good name. It’s not just the financial costs of dealing with mistakes (although those are important); it’s the sense of dread in connecting to today’s society and the reputation infrastructures that have been overlaid onto our lives. It’s victims’ fear that they’ll be perceived as irrationally fearful, as whingers, or as a burden.

And so I want to quote from a blog post from Debix:

It’s Bo here, CEO of Debix. Today, I’m excited to announce another industry first for Debix – a new feature of our OnCall Credit Monitoring™ product called AfterCare™.

The idea came directly from thousands of conversations with our concerned data breach consumers. The number one complaint we receive is about the gap between the “lifetime risk” the consumer perceives when told their identity is breached, and the 1-2 years of credit monitoring normally offered as a remedy.

We always do our best to explain why it is not feasible to provide 5, 10, 20 year or “lifetime” credit monitoring subscriptions, but none of the reasons are very satisfying. It is hard for the consumer to feel good about a remedy where the protection expires quickly but the perceived risk lives on. (Original in Debix blog post.)

That’s why I find Debix’s offer of a lifetime of repair to be so exciting. It’s someone on your side through all of that.

In other news about identity theft, there’s an interesting story about the head of Interpol having his ID stolen via Facebook. In the past, I’d have been very skeptical of such a claim, but a great many folks present themselves to the world on Facebook, and:

One of the impersonators used the fake profile to obtain information on fugitives targeted in a recent Interpol-led operation seeking on-the-run criminals convicted of serious offences, including rape and murder.

Identity is hard, and all sorts of interesting stuff emerges from that chaos. Today’s news about AfterCare™ is on the good and interesting side of that.

Airplane Crashes Fall Because Experts Pontificate

The New York Times has a story, “Fatal Crashes of Airplanes Decline 65% Over 10 Years:”

…part of the explanation certainly lies in the payoff from sustained efforts by American and many foreign airlines to identify and eliminate small problems that are common precursors to accidents.

If only we did the same for security.

This sat in my drafts folder for a while, and unfortunately, remains relevant.

Book review: "The Human Contribution"

James Reason’s entire career was full of mistakes. Most of them were other people’s. And while we all feel that way, in his case, it was really true. As a professor of psychology, he made a career of studying human errors and how to prevent them. He has a list of awards that’s a full paragraph long, but perhaps most interesting is that he’s an honorary fellow of the Royal College of General Practitioners for his work in reducing medical errors. “The Human Contribution” is a broad, accessible and fun book on the human contribution to errors, failures and crises. At times, it rises from ‘merely’ thought provoking to awe-inspiring.

Part I includes a “mind user’s guide,” covering easily triggered failures of thinking, like the tip-of-the-tongue state. Part II covers unsafe acts: chapter 3 gives a set of ways of classifying errors, and the differences between rule-based mistakes (applying a rule that doesn’t apply) and knowledge-based ones, where people don’t know the right solution and errors are extremely common. Chapter 4 covers violations, including a long discussion of why people violate rules:

For many acts of non-compliance, experience shows that violating is often an easier way of working and brings no obvious bad effects. The benefits are immediate and the costs are seemingly remote and, in the case of accidents, unlikely.

Chapter 5 covers different ways people think about unsafe acts. The “plague model” is that these things just happen. They’re unpredictable and hard to control. The “person model” is focused on individual unsafe acts and their origins. The “legal model” adds a moralistic aspect to the person model that “someone must be punished.” The chapter closes with a system model, showing how individual choices, the organization and its policies and procedures can come together in a variety of ways that influence accidents.

Part III is short, covering accident traps, recurrent accident patterns and culture in chapter 6, and the influence of a few significant accident investigations. (That is, where the investigations were significant for advancing the state of our understanding of accidents.)

Part IV is a rare touch in books on errors: it covers heroic recoveries, in four ways. “Training, discipline and leadership” focuses on two military retreats across long distances. Chapter 9, “Sheer Unadulterated Professionalism,” covers in depth the rescue of Titanic survivors and Apollo 13, as well as British Airways flight 09, a BAC1-11 incident, and surgical errors. Chapter 10, “Skill and luck,” covers the intersection of skill and luck with the Gimli glider and United 232. When a Boeing 767 flying as Air Canada flight 143 lost power, the pilot was an experienced glider pilot, and the co-pilot had flown out of Gimli. The degree of luck that AC 143 was in range of Gimli, and that the co-pilot knew where the base was, is nearly incalculable. (The base, being closed, was not on the flight charts.) But without the pilot’s skill at unpowered flight, and his willingness to risk flying a 767 like a glider, the odds of a landing people walked away from were very low. Chapter 11, “Inspired Improvisations,” covers the ways in which people’s unique skills and experiences can lead to unusual but effective solutions. The section closes with a chapter on “The ingredients of heroic recovery.” The actions covered here are heroic in many senses, and they make a fascinating collection of stories. But it’s more than that; it’s a set of lessons which can be extracted and applied elsewhere.

Part V, which closes the book, covers achieving resilience in chapters on “Individual and Collective Mindfulness” and “In search of safety.”

The book has substantially influenced my thinking on product management and the tradeoffs between security, design beauty and time to market. That, perhaps, is another blog post. More to the point, this is an important book, and worth the time of readers of The New School.

It’s not that safety management and risk management are identical, but rather that they can and should inform each other. But the real New School angle to “The Human Contribution” is the underlying premise that we must study the real errors and even near-misses that systems produce, and how people react to them. It is only through that study that we can build systems which will be safe enough to satisfy us.

PS: Someone I spoke with at BlackHat recommended this book. Thank you!

6502 Visual Simulator

In “6502 visual simulator,” Bunnie Huang writes:

It makes my head spin to think that the CPU from the first real computer I used, the Apple II, is now simulateable at the mask level as a browser plug-in. Nothing to install, and it’s Open-licensed. How far we have come…a little more than a decade ago, completing a project like this would have resulted in a couple PhDs being awarded, or regarded as trade secret by some big EDA vendor. This is just unreal…but very cool!

Visual6502.org, via Justin Mason

Fair Warning: I haven't read this report, but…

@pogowasright pointed to “HOW many patient privacy breaches per month?:”

As regular readers know, I tend to avoid blogging about commercial products and am leery about reporting results from studies that might be self-serving, but a new paper from FairWarning has some data that I think are worth mentioning here. In their report, they provide some baseline data on how many patient privacy breaches their clients were experiencing each month. Keeping in mind that many places already had some security and privacy protocols in place and that higher rates are more likely to create customers for them, here’s what they report for four clients that they say are representative cases from their client database of 300 clients:

I haven’t read the report yet, but what really excites me is that they tell us the population they’re monitoring. We can test two hypotheses:

  1. FairWarning customers buy because they know they’re more likely to make a mistake. (This would give us an interesting approximation of an upper bound for their customers, if their customers are capable of accurate self-assessment.)
  2. FairWarning customers are representative. This would be the case if people are unable to accurately assess their risk of a breach, which I think is the case.

Either way, knowing about the population allows us to learn a lot more than we otherwise could, and I commend FairWarning for including the number.
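
To see why the denominator matters, here’s a toy calculation. Only the customer count comes from FairWarning (“over 300,” per the update below); the breach count is invented purely for illustration.

    # Toy illustration of what the population denominator buys us.
    customers = 300             # from the report: "over 300 customers"
    hypothetical_breaches = 25  # invented number, purely for illustration

    # Without the denominator, "25 breaches/month" is uninterpretable.
    # With it, we get a rate we can reason about:
    rate = hypothetical_breaches / customers
    print(f"~{rate:.3f} breaches per customer per month")

    # Under hypothesis 1 (self-selected, riskier customers) that rate is an
    # upper bound for the wider population; under hypothesis 2, an estimate.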

Update: I did give you fair warning. They say they have “over 300 customers.” That ‘over’ makes a big difference. The report also seems to define ‘privacy breach’ narrowly to be unauthorized peeking, and has a remarkably breathless style of promotion. The key message is that monitoring employee access to patient records and ensuring your employees know that they’re being monitored cuts down on peeking.

Use crypto. Not too confusing. Mostly asymmetric.

A little ways back, Gunnar Peterson said “passwords are like hamburgers, taste great but kill us in long run wean off password now or colonoscopy later.” I responded: “Use crypto. Not too confusing. Mostly asymmetric.” I’d like to expand on that a little. Not quite so much as Michael Pollan, but a little.

The first sentence, “use crypto” is a simple one. It means more security requires getting away from sending strings as a way to authenticate people at a distance. This applies (obviously) to passwords, but also to SSNs, mother’s “maiden” names, your first car, and will apply to biometrics. Sending a string which represents an image of a fingerprint is no harder to fake than sending a password. Stronger authenticators will need to involve an algorithm and a key.
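
To make that concrete, here’s a minimal sketch of challenge-response authentication with an asymmetric key, using the pyca/cryptography library. The library choice and the protocol framing are my illustration, not anything from the original exchange; a real deployment also needs transcript binding, session establishment and so on.

    # Sketch: challenge-response with an asymmetric key
    # (pip install cryptography). Illustrative only.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the client generates a keypair and registers the public half.
    client_key = Ed25519PrivateKey.generate()
    registered_public_key = client_key.public_key()  # the server stores this

    # Authentication: the server issues a fresh random challenge...
    challenge = os.urandom(32)

    # ...the client proves possession of the private key by signing it...
    signature = client_key.sign(challenge)

    # ...and the server verifies. No reusable secret string crosses the wire,
    # and replaying an old signature fails because each challenge is fresh.
    try:
        registered_public_key.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")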

The second, “not too confusing,” is a little more subtle, because there are layers of confusing. There’s developer confusion as the system is implemented: adding pieces, like captchas, without a threat model. There’s user confusion as to what program popped that demand for credentials, what site they’re connecting to, or what password they’re supposed to use. There’s also confusion about what makes a good password when one site demands no fewer than 10 characters and another insists on no more. But regardless, it’s essential that a strong authentication system be understood by at least 99% of its users, and that the authentication is either mutual or resistant to replay, reflection and man-in-the-middle attacks. In this, “TOFU” (trust on first use) is better than PKI. I prefer to call TOFU “persistence” or “key persistence.” This is in keeping with Pollan’s belief that things with names are better than things with acronyms.
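
Key persistence is simple enough to sketch in a few lines, in the spirit of SSH’s known_hosts. This is a minimal sketch under my own assumptions; the storage format and function names are invented for illustration.

    # Sketch of key persistence ("TOFU"): trust the first key you see for a
    # host, then alarm on any change. Storage format is illustrative only.
    import hashlib
    import json
    from pathlib import Path

    STORE = Path("known_keys.json")

    def fingerprint(public_key_bytes: bytes) -> str:
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check_key(host: str, public_key_bytes: bytes) -> bool:
        """Pin the key on first contact; reject a changed key afterward."""
        known = json.loads(STORE.read_text()) if STORE.exists() else {}
        fp = fingerprint(public_key_bytes)
        if host not in known:
            known[host] = fp  # first use: persist the key
            STORE.write_text(json.dumps(known))
            return True
        return known[host] == fp  # later uses: must match what we pinned

The user-visible rule is one sentence, “the key changed, something may be wrong,” which is a lot less confusing than certificate-chain dialogs.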

Finally, “mostly asymmetric.” There are three main building blocks in crypto: one-way functions, symmetric ciphers and asymmetric ciphers. Asymmetric systems are those with two mathematically related keys, only one of which is kept secret. These are better because forgery attacks are harder: only one party holds a given key. (Systems built on one-way functions can also deliver this property.) There are a few reasons to avoid asymmetric ciphers, mostly having to do with the compute capabilities of really small devices, like a smartcard, or very power-limited devices, like pacemakers.
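
The forgery point deserves a short illustration. With a symmetric MAC, creation and verification use the same key, so anyone who can verify can also forge; this stdlib-only sketch just makes that visible.

    # Why "mostly asymmetric": with a symmetric MAC, the verifier holds the
    # same key as the sender, so the verifier can forge messages too.
    import hashlib
    import hmac

    shared_key = b"both parties hold this"
    tag = hmac.new(shared_key, b"pay alice $10", hashlib.sha256).digest()

    # The verifier can mint a valid tag for ANY message it likes:
    forged_tag = hmac.new(shared_key, b"pay mallory $10000", hashlib.sha256).digest()

    # With a signature (see the Ed25519 sketch above), the verifier holds only
    # the public key, and holding it confers no ability to forge.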

So there you have it: Use crypto. Not too confusing. Mostly asymmetric.