The Carpenter Case

On Wednesday, the supreme court will consider whether the government must obtain a warrant before accessing the rich trove of data that cellphone providers collect about cellphone users’ movements. Among scholars and campaigners, there is broad agreement that the case could yield the most consequential privacy ruling in a generation. (“Supreme court cellphone case puts free speech – not just privacy – at risk.”)

Bruce Schneier has an article in the Washington Post, “How the Supreme Court could keep police from using your cellphone to spy on you,” as does Stephen Sachs:

The Supreme Court will hear arguments this Wednesday in Carpenter v. United States, a criminal case testing the scope of the Fourth Amendment’s right to privacy in the digital age. The government seeks to uphold Timothy Carpenter’s conviction and will rely, as did the lower court, on the court’s 1979 decision in Smith v. Maryland, a case I know well.

I argued and won Smith v. Maryland when I was Maryland’s attorney general. I believe it was correctly decided. But I also believe it has long since outlived its suitability as precedent. (“The Supreme Court’s privacy precedent is outdated.”)

I am pleased to have been able to help with an amicus brief in the case, and hope that the Supreme Court uses this opportunity to protect all of our privacy. Good luck to the litigants!

Amicus brief in “Carpenter” Supreme Court Case

“In an amicus brief filed in the U.S. Supreme Court, leading technology experts represented by the Knight First Amendment Institute at Columbia University argue that the Fourth Amendment should be understood to prohibit the government from accessing location data tracked by cell phone providers — “cell site location information” — without a warrant.”

For more, please see “In Supreme Court Brief, Technologists Warn Against Warrantless Access to Cell Phone Location Data.” [Update: Susan Landau has a great blog post “Phones Move – and So Should the Law” in which she frames the issues at hand.]

I’m pleased to be one of the experts involved.

A Privacy Threat Model for The People of Seattle

Some of us in the Seattle Privacy Coalition have been talking about creating a model of a day in the life of a citizen or resident of Seattle, and the ways data about them is collected and used; that is, the potential threats to their privacy. In a typical threat modeling approach, we focus on a system that we’re building, analyzing, or testing. In this model, I think we need to focus on the people, the ‘data subjects.’

I also want to get away from tackling issues one by one, and help us look at the problems we face more holistically.


The general approach I use to threat model is based on 4 questions:

  1. What are you working on? (building, deploying, breaking, etc)
  2. What can go wrong?
  3. What are you going to do about it?
  4. Did you do a good job?

I think that we can address the first by building a model of a day, and driving into specifics in each area. For example, get up, check the internet, go to work (by bus, by car, by bike, walking), have a meal out…
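To make that concrete, here is a minimal sketch, in Python, of one way to write the day down as a model we can iterate on. The activities, data types, and collectors below are illustrative assumptions to argue over, not survey results:

    from dataclasses import dataclass

    @dataclass
    class DataFlow:
        activity: str   # what the person is doing
        data: str       # what gets collected about them
        collector: str  # who collects it

    # A rough, illustrative "day in the life" -- every entry here is an assumption, not a finding.
    a_seattleite_day = [
        DataFlow("get up, check the internet", "browsing history, ad cookies", "websites, ad networks"),
        DataFlow("ride the bus to work", "transit card taps (time, route)", "transit agency"),
        DataFlow("drive to work", "license plate reads, toll records", "city and state systems"),
        DataFlow("carry a phone all day", "cell site location information", "mobile carrier"),
        DataFlow("have a meal out", "card transaction (place, time, amount)", "bank, payment processor"),
    ]

    # A first pass at "what data is collected, how, and by whom?"
    for flow in a_seattleite_day:
        print(f"{flow.activity}: {flow.data} -> {flow.collector}")

Even a list this crude makes it easier to ask, row by row, what can go wrong and for whom.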

One question that we’ll probably have to work on is how to address what can go wrong in a model this general. Usually I threat model specific systems or technologies, where the answers are crisper. Perhaps a way to break it out would be:

  1. What is a Seattleite’s day?
  2. What data is collected, how, and by whom? What models can we create to help us understand? Is there a good balance between specificity and generality?
  3. What can go wrong? (There are interesting variations in the answer based on who the data is about)
  4. What could we do about it? (The answers here vary based on who’s collecting the data.)
  5. Did we do a good job?

My main goal is to come away from the exercise with a useful model of the privacy threats to Seattleites. If we can, I’d also like to understand how well this “flipped” approach works.

[As I’ve discussed this, there’s been a lot of interest in what comes out and what it means, but I don’t expect that to be the main focus of discussion on Saturday.] For example, there are also policy questions like, “as the city takes action to collect data, how does that interact with its official goal to be a welcoming city?” I suspect that the answer is ‘not very well,’ and that there’s an opportunity for collaboration here across the political spectrum: those who want to run a ‘welcoming city’ and those who distrust government data collection can all ask how Seattle’s new privacy program will help us.

In any event, a bunch of us will be getting together at the Delridge Library this Saturday, May 13, at 1PM to discuss for about 2 hours, and anyone interested is welcome to join us. We’ll just need two forms of ID and your consent to our outrageous terms of service. (Just kidding. We do not check ID, and I simply ask that you show up with a goal of respectful collaboration, and a belief that everyone else is there with the same good intent.)

The Evolution of Apple’s Differential Privacy

Bruce Schneier comments on “Apple’s Differential Privacy:”

So while I applaud Apple for trying to improve privacy within its business models, I would like some more transparency and some more public scrutiny.

Do we know enough about what’s being done? No, and my bet is that Apple doesn’t know precisely what they’ll ship, and isn’t answering deep technical questions so that they don’t misspeak. I know that when I was at Microsoft, details like that got adjusted as a bigger pile of real data from real customer use informed things. I saw some really interesting shifts surprisingly late in the dev cycle of various products.
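For readers who want a feel for the underlying idea, here is a minimal sketch of randomized response, the textbook building block behind local differential privacy. To be clear, this is not Apple’s implementation (we don’t know those details, which is part of the point); it only shows how a device can add noise before anything leaves it, while the aggregate stays useful:

    import random

    def randomized_response(truth: bool) -> bool:
        """Report the true bit half the time; otherwise report a fair coin flip."""
        if random.random() < 0.5:
            return truth
        return random.random() < 0.5

    def estimate_true_rate(reports):
        """Invert the noise: if p is the observed rate of True reports,
        the estimated underlying rate is 2p - 0.5."""
        p = sum(reports) / len(reports)
        return 2 * p - 0.5

    # Example: 100,000 simulated users, 30% of whom have the sensitive attribute.
    reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
    print(f"estimated rate: {estimate_true_rate(reports):.3f}")  # close to 0.30

Any individual report is deniable (it may just be a coin flip), yet the population estimate comes out close to the true rate. The engineering judgment is in choosing the noise level and deciding what gets collected at all, which is exactly where more transparency and public scrutiny would help.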

I also want to challenge the way Matthew Green closes: “If Apple is going to collect significant amounts of new data from the devices that we depend on so much, we should really make sure they’re doing it right — rather than cheering them for Using Such Cool Ideas.”

But that is a false dichotomy, and it would be silly even if it were not. It’s silly because we can’t be sure whether they’re doing it right until after they ship it and we can see the details. (And perhaps not even then.)

But even more important, the dichotomy is not “are they going to collect substantial data or not?” They are. The value organizations get from being able to observe their users is enormous. As product managers observe what A/B testing in their web properties means to the speed of product improvement, they want to bring that same ability to other platforms. Those that learn fastest will win, for the same reasons that first to market used to win.

Next, are they going to get it right on the first try? No. Almost guaranteed. Software, as we learned a long time ago, has bugs. As I discussed in “The Evolution of Secure Things:”

It’s a matter of the pressures brought to bear on the designs of even what (we now see as) the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

Green (and Schneier) are right to be skeptical, and may even be right to be cynical. We should not lose sight of the fact that Apple is spending rare privacy engineering resources to do better than Microsoft. Near as I can tell, this is an impressive delivery on the commitment to be the company that respects your privacy, and I say that believing that there will be both bugs and design flaws in the implementation. Green has an impressive record of finding and calling Apple (and others) on such, and I’m optimistic he’ll have happy hunting.

In the meantime, we can, and should, cheer Apple for trying.

RSA: Time for some cryptographic dogfood

One of the most effective ways to improve your software is to use it early and often.  This used to be called eating your own dogfood, which is far more evocative than the alternatives. The key is that you use the software you’re building. If it doesn’t taste good to you, it’s probably not customer-ready.  And so this week at RSA, I think more people should be eating the security community’s cryptographic dogfood.

As I evangelize the use of crypto to arrange meetings at RSA, I’ve encountered many predictable problems: choice of tool, availability of the tool across a set of mobile platforms, cost of entry, and so on. But dogfooding — forcing myself to ask everyone why they want to use an easily wiretapped protocol — makes the issues stand out, and the companies that will be successful will start thinking about ways to overcome them.

So this week, as you prep for RSA, spend a few minutes getting an encrypted communications tool set up. The worst that can happen is that you’re no more secure than you were before you read this post.

What Price Privacy, Paying For Apps edition

There’s a new study on what people would pay for privacy in apps. As reported by Techflash:

A study by two University of Colorado Boulder economists, Scott Savage and Donald Waldman, found the average user would pay varying amounts for different kinds of privacy: $4.05 to conceal contact lists, $2.28 to keep their browser history private, $2.12 to eliminate advertising on apps, $1.19 to conceal personal locations, $1.75 to conceal the phone’s ID number and $3.58 to conceal the contents of text messages.

Those numbers seem small, but they’re in the context of app pricing, which is generally a few bucks. If those numbers combined linearly, a willingness to pay more than $10 extra for a private version would be a very high valuation. (Of course, the numbers won’t combine in a strictly rational way; consumers satisfice.)
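As a back-of-the-envelope check (assuming, for the moment, the linear combination that the parenthetical above questions), the arithmetic looks like this:

    # Willingness-to-pay figures from the Savage and Waldman study, as quoted above.
    willingness_to_pay = {
        "conceal contact lists": 4.05,
        "keep browser history private": 2.28,
        "eliminate advertising": 2.12,
        "conceal location": 1.19,
        "conceal phone ID number": 1.75,
        "conceal text message contents": 3.58,
    }

    # If the values simply added up, a fully "private" app would be worth roughly $15 extra.
    print(f"linear total: ${sum(willingness_to_pay.values()):.2f}")  # $14.97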

A quick skim of the article leads me to think that they didn’t estimate app maker benefit from these privacy changes. How much does a consumer contact list go for? (And how does that compare to the fines for improperly revealing it?) How much does an app maker make per person whose eyeballs they sell to show ads?

Workshop on the Economics of Information Security

The next Workshop on the Economics of Information Security will be held June 11-12 at Georgetown University, Washington, D.C. Many of the papers look fascinating, including “On the Viability of Using Liability to Incentivise Internet Security”, “A Behavioral Investigation of the FlipIt Game”, and “Are They Actually Any Different? Comparing 3,422 Financial Institutions’ Privacy Practices.”

Not to mention “How Bad Is It? – A Branching Activity Model to Estimate the Impact of Information Security Breaches” previously discussed here.

A Quintet of Facebook Privacy Stories

It’s common to hear that Facebook use means that privacy is over, or no longer matters. I think that perception is deeply wrong. It’s based on the superficial notion that people who make different, or perhaps surprising, privacy tradeoffs are unaware of what they’re doing, or have no regrets.

Some recent stories that I think come together to tell a meta-story of privacy:

  • Steven Levy tweeted: “What surprised me most in my Zuck interview: he says the thing most on the rise is ‘sharing with smaller groups.’” (Tweet edited from 140-speak.) I think that sharing with smaller groups is a pretty clear expression that privacy matters to Facebook users, and that as Facebook becomes more a part of people’s lives, the way they use it will continue to mature. For example, it turns out:
  • “71% of Facebook Users Engage in ‘Self-Censorship’” reports on a study of people typing into the Facebook status box and not hitting post. In part this may be because people are ‘internalizing the policeman’ that Facebook imposes:
  • “Facebook’s Online Speech Rules Keep Users On A Tight Leash.” This isn’t directly a privacy story, but one important facet of privacy is our ability to explore unpopular ideas. If our ability to do so in the forum where people talk to each other is inhibited by private contract and opaque rules, then our ability to explore and grow within the privacy that Facebook affords to conversations is also inhibited.
  • Om Malik: “Why Facebook Home bothers me: It destroys any notion of privacy.” An interesting perspective, but Facebook users still care about privacy; they will just have trouble articulating how, or taking action to preserve the values of privacy they care about.

On Cookie Blocking

It would not be surprising if an article like “Firefox Cookie-Block Is The First Step Toward A Better Tomorrow” was written by a privacy advocate. And it may well have been. But this privacy advocate is also a former chairman of the Internet Advertising Bureau. (For the IAB’s current position, see “Randall Rothenberg’s Statement Opposing Mozilla’s Intention to Block Third-Party Cookies.”)

But quoting from “the first step:”

First, the current promise of ultra-targeted audiences delivered in massively efficient ways is proving to be one of the more empty statements in recent memory. Every day more data shows that what is supposed to be happening is far from reality. Ad impressions are not actually in view, targeting data is, on average, 50% inaccurate by some measures (even for characteristics like gender) and, all too often, the use of inferred targeting while solving for low-cost clicks produces cancerous placements for the marketer. At the end of the day, the three most important players in the ecosystem – the visitor, the content creator and the marketer – are all severely impaired, or even negatively impacted, by these practices.

It’s a quick read, and fascinating when you consider the source.