Privacy Rights & Privacy Law

First, the European Court of Human Rights has ruled that the UK’s DNA database is a “breach of rights”:

The judges ruled the retention of the men’s DNA “failed to strike a fair balance between the competing public and private interests,” and that the UK government “had overstepped any acceptable margin of appreciation in this regard”.

The court also ruled “the retention in question constituted a disproportionate interference with the applicants’ right to respect for private life and could not be regarded as necessary in a democratic society”.

The police are aghast that they will not be able to do whatever it takes to solve crimes. Similar past rulings have involved forced confessions, indefinite detention, and a presumption that the accused are guilty until proven innocent. They have put forth figures about how many criminals have been caught. The BBC also reports:

The court says the figures appear “impressive” – but on closer analysis it acknowledges, as the Nuffield Council on Bioethics and GeneWatch UK also have, that they are unconvincing.

The Supreme Court of Newfoundland has ruled that airport searches may not be used for blanket law enforcement purposes:

“…a reasonable expectation of privacy with respect to the contents of his luggage, save and except for searches by [airport] personnel for items that could be used to jeopardize the security of an aerodrome or aircraft. The drugs and money found in his baggage, which are the subject of this proceeding, are not such items and thus Brian Crisby had a reasonable expectation of privacy.”

This is in stark contrast to the US, where John Perry Barlow was arrested after screeners found small amounts of drugs in his checked luggage. His appeal was denied, although pages related to that seem to have hit the memory hole.

What’s relevant here is the contrast between Canada and the EU on one hand, and the US on the other. Privacy law in the US is in disarray. At the Constitutional level, Fourth Amendment protections have been utterly eviscerated. More broadly, privacy laws seem to emerge only after bad cases.

The result is expensive investment in poor protection. We can and should do better. It would be possible to put in place a data protection or privacy law which protects privacy and respects the rights of free speech. The key is to recognize the role of the government in enabling correlation and linkage. Privacy law should kick in (hard) when the government is involved, either as the gatherer or guarantor of information. That is, if I have to give my legally documented name or my SSN, I should get strong protection. If I can sign up as Mickey Mouse, then privacy law shouldn’t apply.

However we do it, we need a sane privacy law for the US.

Two Buck Barack

So the New York Times is breathless that “Obama Hauls in Record $750 Million for Campaign.” A lot of people are astounded at the scale of the money, and I am too. In a long, hard campaign, he raised roughly $2.50 per American, and spent slightly less than that.

Unusually, he ended his campaign not in debt, but with a small surplus. Everyone and their brother is now grubbing after that, according to the Times article. If we had a campaign finance system with transparency and accountability for donations, we would likely see spending levels like this more often, and we might well see a broader range of interesting candidates emerge and get voters engaged again.

The reality is while $750 million is a lot of money, it’s also a surprisingly small amount of money. For comparison, the 2008 Federal budget was 2.9 trillion dollars, or roughly 3900 times larger than the budget Obama just oversaw. It’s also only 1/20th of the amount we’re spending to keep Rick Wagoner in a job.
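Those comparisons are easy to sanity-check; here is a quick back-of-the-envelope sketch, where the population and budget figures are round approximations:

```python
# Back-of-the-envelope check of the figures above (2008, approximate).
campaign = 750e6          # Obama's reported fundraising total, in USD
us_population = 300e6     # rough 2008 US population
federal_budget = 2.9e12   # FY2008 federal budget, in USD

per_american = campaign / us_population
budget_ratio = federal_budget / campaign

print(round(per_american, 2))   # → 2.5 dollars per American
print(round(budget_ratio))      # → 3867, i.e. roughly 3900x
```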

Previously: “Obama vs McDonalds,” “Already Donated the limit,” and way back in 2004, “Shut down these shadowy groups?”

Eric Drexler blogging

Way cool. I look forward to what he has to say.

Unfortunately, one of his early posts falls into the trap of believing that “Computation and Mathematical Proof” will dramatically improve computer security:

Because proof methods can be applied to digital systems, and in particular, will be able to verify the correctness (with respect to a formal specification) of compilers [pdf], microprocessor designs [pdf] (at the digital-abstraction level), and operating system microkernels (the link points to a very important work in progress). Software tools for computer-assisted proof are becoming more usable and powerful, and they already have important industrial-strength applications [pdf]. In a world which increasingly relies on computers for everything from medical devices to national governance, it will be important to get these foundations right, and to do so in a way that we can trust. If this doesn’t seem important, it may be because we’re so accustomed to living with systems built on foundations made of mud, and thinking about a future likewise based on mud. All of us have difficulty imagining what could be developed in a world where computers didn’t crash, were guaranteed to be immune from virus attack, and could safely download code written by the devil himself, and where crucial pieces of software could be guaranteed to not leak data.

The trouble with this approach is that you demonstrably can’t make a useful computer which is immune to virus attack. The proof: a useful computer is one on which I can install software, which means the user of the computer has to make decisions about what software to trust. Con men and fraudsters will continue to convince people to do things which are obviously not in their best interests.

Therefore, however well proven the operating system is, you can’t usefully guarantee computers to be free of viruses, because computers are useful precisely when they are generative and social.

That’s implied by his parenthetical “with respect to a formal specification.”

Similarly, the data may be guaranteed not to leak, but can also be guaranteed to be shown to people. (Otherwise, it’s not useful.) Those people can and will leak it. (Ross Anderson’s work on medical systems demonstrates this with a higher level of formality.)

This is not to say that formal methods won’t provide useful results on which we can build. They have, and will continue to in those areas where the problems don’t involve humans, our decisions, or our societies. But human beings are not rational result maximizers who adhere to computer security policies, and all the math in the world won’t change that.

DataLossDB announces awesome new feature

The Data Loss Database, run by the Open Security Foundation, now has a significant new feature: the inclusion of scanned primary source documents.
This means that in addition to being able to determine “the numbers” on an incident, one can also see the exact notification letter used, the reporting form submitted to state government, cover letters directed at (for example) an attorney-general, and the like. Importantly, all the documents have been OCRed, making it possible to search within them.
There are currently several hundred documents in the archive, most of which arrived in the last few days. In order to link the docs to existing breach records quickly, the folks at DataLossDB latched onto a key insight: this is an embarrassingly parallelizable problem. Therefore, a screen is provided to do a bit of matching of scanned docs to existing breach entries. For those without research assistants, crowdsourced data entry is the way to go :^).
If you’re the type of person who is into the details of breaches — and who isn’t? — you should check this out.
Full disclosure: I contributed many of the documents in the archive, and am extremely pleased at what has come of this. The DataLossDB interface is vastly superior to even the vaporware version of my site.
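The crowdsourced workflow described above (hand each volunteer one unmatched scan, record the answer, repeat) can be sketched in a few lines. The names and data structures here are illustrative, not DataLossDB’s actual implementation:

```python
# Hypothetical sketch of crowdsourced document-to-breach matching.
unmatched_docs = ["scan-101.pdf", "scan-102.pdf", "scan-103.pdf"]
breach_entries = {1: "Acme Corp, 2008-11-02", 2: "Examplia Bank, 2008-12-01"}
matches = {}  # document -> breach entry id

def next_task():
    """Hand the next unmatched document to whichever volunteer asks."""
    return unmatched_docs[0] if unmatched_docs else None

def record_match(doc, entry_id):
    """Record a volunteer's answer and retire the document."""
    unmatched_docs.remove(doc)
    matches[doc] = entry_id

# One volunteer matches the first scan to breach entry 2.
record_match(next_task(), 2)
print(matches)  # → {'scan-101.pdf': 2}
```

Because each document is independent, any number of volunteers can pull tasks concurrently, which is what makes the problem embarrassingly parallel.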

The Costs of Fixing Problems

I enjoyed reading Heather Gerken’s article: “The Invisible Election.”

I am one of the few people to have gotten a pretty good view of the invisible election, and the reality does not match the reports of a smooth, problem-free election that have dominated the national media. As part of Obama’s election protection team, I spent 18 hours working in the “boiler room,” the spare office where 96 people ran national election day operations. Obama’s election protection efforts, organized by Bob Bauer, were more generously funded, more precisely planned, and better organized than any in recent memory. Over the course of the day, thousands of lawyers, field staff, and volunteers reported the problems they were seeing in polling places across the country. A sophisticated computer program allowed the lawyers and staffers in the boiler room to review these reports in real time.

[…list of problems elided…]

I draw three lessons from the time I spent watching the invisible election unfold, all of which point to the need to make the invisible election visible to the public, to policymakers, and to election administrators themselves.

First, it is essential that the public see the invisible election. We are never going to get traction on reforming our election system until we have a means of making these problems visible to voters. Virtually every media outlet has reported that the election ran smoothly.

First, I’m a huge fan of transparency, and I’m not going to advocate sweeping anything under the rug. But I do question whether we really need to draw attention to the problems with voting systems before we have consensus on what to do about them.

See, a working democracy is a tremendously valuable asset. It takes years to start up, and (when working) gives us a way to transition between legitimate governments. The thousand years of European wars of succession didn’t allow for much liberty or wealth creation. Democracy has huge value, and it’s under threat. In 2000, we had a real risk of a crisis. If Al Gore had contested the 5-4 vote in Washington, we would have had no real way to address it and choose a legitimate next leader. Gore understood this, which is why he was clear that we all had to respect the decision, “for the strength of our democracy.” Despite the damage of the Bush years, it was the right call, because a working democracy is a fragile thing. Trust that the election machinery has gotten the right result and will get the right result next time is an absolutely vital part of the legitimacy of government. Risking it should not be undertaken lightly.

I’ve been at occasional meetings between voting officials and computer scientists for about eight years now. There’s a tremendous gap. The two groups don’t understand each other well, although folks like Avi Rubin are working really hard to bridge that gap. Until there’s a rough political and technological consensus that’s in line with the Help America Vote Act or its replacement, we should be cautious about undercutting the system we have now.

I also wanted to juxtapose a little with Ryan Singel’s story, “Chertoff: We’re Closing that Boarding-Pass Loophole.” There are now scanners which read a bar code off your boarding pass to make sure you haven’t altered it, and the TSA folks can match your ID to the boarding pass. This was known for years, but driven heavily by Chris Soghoian’s make-your-own-boarding-pass toy.

Between the airline software, the scanners and the training, we’ve probably spent tens of millions of dollars to fix the loophole. (Oddly, I haven’t been able to find a statement of the costs.) But the truth is, it’s a silly thing to fix. Good fake ID is easy to get, and will remain easy to get unless we choose a different balance between terrorism prevention, immigration and kids drinking.

Chris has some other entertaining discoveries, which I’m hoping he keeps to himself. I think they’re worth not fixing. That is, the cost of the fix is too high. There are better things to spend money on.

The next few years are going to be rough for the United States. The costs of the Iraq war, our broken health care system, the financial melt-down, the bursting of the housing bubble, infrastructure that’s starting to fail, and global climate change are all going to be competing for a slice of budgets while revenues are falling.

We need to ask ourselves which problems we need to fix, and what the costs of fixing them are really going to be. Not every problem needs a fix, and not every problem that needs fixing needs fixing now.

You versus SaaS: Who can secure your data?

In “Cloud Providers Are Better At Securing Your Data Than You Are…” Chris Hoff presents the idea that it’s foolish to think that a cloud computing provider is going to secure your data better. I think there are some complex tradeoffs to be made. Since I sort of recoiled at the idea, let me start with the cons:

  1. The cloud vendor doesn’t understand your assets or your business. They may have an understanding of your data or your data classification. They may have a commitment to various SLAs, but they don’t have an understanding of what’s really an asset or what really matters to your business in the way you do. If you believe that IT doesn’t matter, then this doesn’t matter either.
  2. The cloud vendor doesn’t have to admit a problem. They can screw up and let your data out to the world, and they don’t have to tell you. They can sweep it under the rug.

In the middle, slightly con:
It’s hard to evaluate the security of a cloud vendor. Do you really think a SAS-70 is enough? (Would you tell your CEO, “we passed our SAS-70, nothing to worry about?”) This raises the transaction costs, but that may be balanced by the first pro:

  1. Cloud vendors involve a risk transfer for CIOs. A CIO can write a contract that generates some level of risk transfer for the organization, and more for the CIO. “Sorry, wasn’t me, the vendor failed to perform. I got a huge refund on cost of operations!”
  2. Cloud vendors have economies of scale. Both in acquiring and operating the data center, a cloud vendor can bring in economies of scale of operating a few warehouses, rather than a few racks. They can create great operational software to keep costs down, and that software can include patch rollout and rollback, as well as tracking and managing changes, cutting overall MTTR (mean time to repair) for security and other failures.
  3. Cloud vendors could exploit signaling to overcome concerns that they’re misrepresenting security state. If a Cloud vendor contracted to publish all their security tickets some interval after closing them, then a prospective customer could compare their security issues to that of the Cloud vendor. Such a promise would indicate confidence in their security stance, and over time, it would allow others to evaluate them.

That last is perhaps a radical view, and I’d like to remind everyone that I’m speaking for the President-Elect and his commitment to transparency, not for my employer.

Virgin America

I flew Virgin America for the first time recently, for a day trip to San Francisco. I enjoyed it. I can’t remember the last time I actually enjoyed getting on a plane.

The first really standout bit was when the Seattle ground folks put on music and ran a name-that-song contest. They handed out free drink tickets for each winner, and a second free drink for singing along over the PA. I was initially a little skeptical — I really wanted some peace and quiet — but it’s better than airport CNN. They seemed to be having a genuinely good time, and they had me smiling by the time I got on the plane.

On the way home, I splurged for a $50 upgrade, figuring that I needed a drink or three, and some food wouldn’t hurt either. The seat was comfy, and the flight attendant was friendly, conversational and appeared to be enjoying himself.

If I lived in San Francisco (their US hub) I’d be a convert. As is, I’ll likely fly them when I can.

If I were one of those pedantic bloggers who tried to tie everything back to the blog title, I’d talk about the value of the unexpected. But really, give them a chance if you’re headed on a route they fly.