Automotive Privacy

[Update: clarified a sentence about whose privacy is touched, and where.]

I had missed the story “Big Brother on wheels: Why your car company may know more about you than your spouse.” There are surprising details, including that you might be able to shut the data collection off, and the phrase “If a customer declines, we do not collect any data from the vehicle.” I do wonder how a customer can decline — does it involve not buying a GM car?

When we did a privacy threat model at the Seattle Privacy Coalition, we found these same issues. We were also surprised that the defense of taking a car driven by someone else (a taxi, or a Lyft/Uber) makes such a big difference: the owner of the car remains associated with the trip via license plate, toll beacons, tire pressure monitors, traffic sensors, maps, and other technologies with tracking implications, while the passenger is associated only if payment is by card, or the ride is booked via an app. In effect, it splits and confuses who the trip gets attributed to. It may also be that driving for Lyft/Uber acts as a defense, by classifying a car as a carshare, but it seems pretty easy to see through that by looking at where the car is parked (especially overnight) and at repeated trips, to dis-ambiguate between paid and personal rides.
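That dis-ambiguation is not hard to automate. As a sketch over entirely hypothetical trip records (the field names and data here are my own, not any vendor’s actual telemetry), a few lines suffice to find where a car “sleeps,” which typically reveals the owner’s home even for a car nominally classified as a carshare:

```python
from collections import Counter

def home_location(trips):
    """Guess where a car "sleeps" from its most common overnight
    parking spot. trips is a list of dicts with 'end_lat', 'end_lon'
    and 'end_hour' (0-23) -- a hypothetical record format, for
    illustration only."""
    overnight = Counter()
    for t in trips:
        if t["end_hour"] >= 22 or t["end_hour"] < 5:
            # Snap to ~100 m so nearby parking spots cluster together.
            overnight[(round(t["end_lat"], 3), round(t["end_lon"], 3))] += 1
    return overnight.most_common(1)[0][0] if overnight else None

trips = [
    {"end_lat": 47.6205, "end_lon": -122.3493, "end_hour": 23},  # overnight
    {"end_lat": 47.6204, "end_lon": -122.3492, "end_hour": 2},   # overnight
    {"end_lat": 47.6097, "end_lon": -122.3331, "end_hour": 14},  # daytime fare
]
print(home_location(trips))
```

The daytime fare drops out of the count entirely; the two late-night trips converge on one grid cell, which is the point: the “carshare” classification dissolves under even trivial analysis.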

Designing for Good Social Systems

There’s a long story in the New York Times, “Where Countries Are Tinderboxes and Facebook Is a Match:”

A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.

I’ve written previously about the drama triangle, how social media drives engagement through dopamine and hatred, and a tool to help you breathe through such feelings.

These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I’m excited to read about “20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund.”

We can usefully think of this as a type of threat modeling.

  • What are we working on? Social technology.
  • What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings with it new types of fail. We have to extend our existing models and create new ones to address those.
  • What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.
  • Did we do a good job? Not yet.
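The four-question frame lends itself to a lightweight, repeatable record. As an illustrative sketch (the field names are my own, not a standard schema), the answers above might be captured as:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """The four-question frame as a simple record (illustrative schema)."""
    working_on: str
    what_can_go_wrong: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    did_a_good_job: bool = False

tm = ThreatModel(
    working_on="Social technology (newsfeeds, sharing)",
    what_can_go_wrong=["threats", "defamation", "spread of fake news"],
    mitigations=["moderation", "emergency contacts", "Knight prototypes"],
)
print(tm.did_a_good_job)  # still False -- "Not yet"
```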

These emergent properties of the systems are not inherent. Different systems have different problems, and that means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What’s worked to minimize the attacks or their impacts on other systems? What “obvious” defenses, such as “real names,” tend to fail?

Image: Washington Post

Guns, Homicides and Data

I came across a fascinating post at Jon Udell’s blog, “Homicide rates in context,” which starts out with this graph of 2007 data:

A map showing gun ownership rates and homicide rates, which look very different

Jon’s post says more than I care to on this subject right now, and points out questions worth asking.

As I said in my post on “Thoughts on the Tragedies of December 14th,” “those who say that easy availability of guns drives murder rates must do better than simply cherry picking data.”

I’m not sure I believe that the “more guns, less crime” claim made by A.W.R. Hawkins is as causative as it sounds, but the map presents a real challenge to simplistic responses to tragic gun violence.

Emergent Effects of Restrictions on Teenage Drivers

For more than a decade, California and other states have kept their newest teen drivers on a tight leash, restricting the hours when they can get behind the wheel and whom they can bring along as passengers. Public officials were confident that their get-tough policies were saving lives.

Now, though, a nationwide analysis of crash data suggests that the restrictions may have backfired: While the number of fatal crashes among 16- and 17-year-old drivers has fallen, deadly accidents among 18-to-19-year-olds have risen by an almost equal amount. In effect, experts say, the programs that dole out driving privileges in stages, however well-intentioned, have merely shifted the ranks of inexperienced drivers from younger to older teens.

“The unintended consequences of these laws have not been well-examined,” said Mike Males, a senior researcher at the Center on Juvenile and Criminal Justice in San Francisco, who was not involved in the study, published in Wednesday’s edition of the Journal of the American Medical Assn. “It’s a pretty compelling study.” (“Teen driver restrictions a mixed bag“)

As Princess Leia once said, “The more you tighten your grip, the more teenagers will slip through your fingers.”

Nymwars: Thoughts on Google+

There’s something important happening around Google+. It’s the start of a rebellion against the idea of “government authorized names.” (A lot of folks foolishly allow the other side to name this as “real names,” but a real name is a name someone calls you.)

Let’s start with “Why Facebook and Google’s Concept of ‘Real Names’ Is Revolutionary” by Alexis Madrigal. He explains why the idea is not only not natural, but revolutionary. Then move on to “Why it Matters: Google+ and Diversity, part 2” by Jon Pincus. From there, see danah boyd explain that ““Real Names” Policies Are an Abuse of Power.” One natural reaction is ““If you don’t like it, don’t use it. It’s that simple.” ORLY?” As Alice Marwick explains, it’s really not that simple. That’s why people like Skud are continuing to fight, as shown in “Skud vs. Google+, round two.”

What’s the outcome? Egypt, Yemen and Saudi Arabia require real names. “South Korea is abandoning its “real name” internet policy.”

So how do we get there? “Identity Woman” suggested that we have a ““Million” Persona March on Google,” but she’s now suspended. “Skud” posted “Nymwars strategy.”

This is important stuff for how we shape the future of the internet, and how the future of the internet shapes our lives. Even if you only use one name, you should get involved. Get involved by understanding why names matter, and get involved by calling people what they want to be called, not what Google wants to call them.

AT&T, Voice Encryption and Trust

Yesterday, AT&T announced Encrypted Mobile Voice. As CNet summarizes:

AT&T is using One Vault Voice to provide users with an application to control their security. The app integrates into a device’s address book and “standard operation” to give users the option to encrypt any call. AT&T said that when encryption is used, the call is protected from end to end.

AT&T Encrypted Mobile Voice is designed specifically for major companies, government agencies, and law enforcement organizations. An AT&T spokesperson said it is not available to consumers. The technology is available to users running BlackBerry devices or Windows Mobile smartphones, and it works in 190 countries.

Organizations interested in deploying Encrypted Mobile Voice will need to pay an additional fee to do so. AT&T said that cost depends on the size of the deployment. (“AT&T improves service security with encryption.”)

Jake Appelbaum and Chris Soghoian expressed skepticism. (“From the company that brought you NSA wire tapping, they thought you’d also like….” and “If you trust AT&T’s new voice encryption service, you are a fool.“)

What’s funny (sad) about this is that there are a number of software encrypted voice systems available. They include RedPhone, CryptoPhone and Zfone. Some of these even run on pocket-sized computers with integrated radios. But Apple and AT&T won’t let you install alternate voice applications.

A lot of people claim that these restrictions on what you can do with your device just don’t matter very much. That you can really get everything you need. But here’s a clear example of why that isn’t so. Voice encryption is a special app that you have to get permission to run.

Now, maybe you don’t care. You’re “not doing anything wrong.” Well, Hoder wasn’t doing anything wrong when he went to Israel and blogged about it in Farsi. But he’s serving 20 years in jail in Iran.

Now is the time we should be building security in. Systems that prevent you from doing so, or systems that reset themselves to some manufacturer designated default are simply untrustworthy. We should demand better, more trustworthy products or build them ourselves.
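Part of what makes software like Zfone (whose underlying protocol is ZRTP) more trustworthy than carrier-provided encryption is that keys are agreed end to end and verified by the callers themselves, with no carrier in the loop to compel. A toy sketch of the key idea, with deliberately tiny, insecure parameters (illustration only; real systems use standardized 2048-bit-plus groups and a far richer protocol):

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters for illustration ONLY -- far too
# small to be secure.
P = 2**32 - 5  # a small prime modulus
G = 5

def dh_keypair():
    """Generate a private exponent and the public value g^x mod p."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_secret(priv, other_pub):
    """Both sides compute the same g^(ab) mod p."""
    return pow(other_pub, priv, P)

def short_auth_string(secret):
    """Derive a 4-character code both callers read aloud. If a
    man-in-the-middle substituted keys, the codes won't match."""
    digest = hashlib.sha256(str(secret).encode()).hexdigest()
    return digest[:4].upper()

# Alice and Bob agree a key with no carrier or CA in the loop.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
sas_alice = short_auth_string(shared_secret(a_priv, b_pub))
sas_bob = short_auth_string(shared_secret(b_priv, a_pub))
print(sas_alice, sas_bob)  # match when there is no MITM
```

The point of the short authentication string is that trust rests on the two humans on the call comparing codes, not on AT&T, Apple, or a certificate authority behaving well.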

[Added: I’d meant to include a comment about Adam Thierer’s comment “The more interesting question here is how “closed” is the iPhone really?” I think the answer is, in part, here. There’s a function, voice privacy, which AT&T and three other companies think is marketable. And it doesn’t exist on the iPhone OS, which is the 2nd most prevalent phone platform out there.]

[Update 2: Robert and Rob rob me of some of my argument by pointing out that AT&T now allows you to install voice apps, but none of the encrypted voice apps that I’d consider trustworthy are available. (I exclude Skype and its proprietary, secret designs from the trustworthy list; it’s probably better than no crypto, but once you need it to really protect you, it’s probably not good enough.) Maybe this is a result of the arbitrary rejections by the Apple app store, but when I look for Zfone, RedPhone or CryptoPhone, I see a fast dial app and some games. When I search for crypto, it’s all password managers. So while I’m no longer sure of the reason, the result remains: the iPhone is missing trustworthy voice crypto, despite the market.]

Transparency, India, Voting Machines

India’s EVMs are Vulnerable to Fraud. And for pointing that out, Hari Prasad has been arrested by the police in India, who wanted to threaten and intimidate him, and to question him about where he got the machine that he studied. That’s a shame. The correct response is to fund Hari Prasad’s work, not use the police to silence him.

I could write quite a bit about how science and security progress through open debate; about how no one likes to be wrong, but by admitting mistakes, we can improve, or the terrifying power of the state and the need to restrain it.

Rather I’ll just comment that arrogant abuses of power like this serve to de-legitimize the state and undermine the moral basis of its claim to a monopoly on violence. When people can’t protest with speech and demonstrations of fact, they’ll pursue their interests by other means, with higher stakes.

Why we need strong oversight & transparency

[The ACLU has a new] report, Policing Free Speech: Police Surveillance and Obstruction of First Amendment-Protected Activity (.pdf), surveys news accounts and studies of questionable snooping and arrests in 33 states and the District of Columbia over the past decade.

The survey provides an outline of, and links to, dozens of examples of Cold War-era snooping in the modern age.

“Our review of these practices has found that Americans have been put under surveillance or harassed by the police just for deciding to organize, march, protest, espouse unusual viewpoints and engage in normal, innocuous behaviors such as writing notes or taking photographs in public,” Michael German, an ACLU attorney and former Federal Bureau of Investigation agent, said in a statement.

Via Wired. Unfortunately, as Declan McCullagh reports, “Police push to continue warrantless cell tracking,” along with a host of other surveillance technologies we have yet to grapple with.

For example, it seems FourSquare had an interesting failure of threat modeling, where they failed to grok the information disclosure aspects of some of their pages. See “White Hat Uses Foursquare Privacy Hole to Capture 875K Check-Ins.” To the extent that surveillance is opt-in, it is far less worrisome than when it’s built into the infrastructure, or forced on consumers via contract revisions.

Makeup Patterns to hide from face detection

Adam Harvey is investigating responses to the growing ubiquity of surveillance cameras with facial recognition capabilities.

Image: face detection

He writes:

My thesis at ITP is to research and develop privacy enhancing counter technology. The aim of my thesis is not to aid criminals, but since artists sometimes look like criminals and vice versa, it is important to protect individual privacy for everyone.

[…]

What will these forms look like and how well will they integrate into our cultural expectations of body decoration while still being able to function as face detection blocking devices? How can hats, sunglasses, makeup, earrings, necklaces or other accessories be modified to become functional and decorative? These are the topics that I’ll be exploring in thesis on CV Dazzle.

Very interesting stuff in Adam Harvey’s CV Dazzle Makeup blog posts. I think everyone will be wearing them in the future.
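The detectors CV Dazzle targets (the Viola-Jones approach behind OpenCV’s Haar cascades) work by summing pixel intensities over rectangles via an integral image and comparing light and dark regions: the eye band is expected to be darker than the cheeks below it. High-contrast makeup disrupts exactly those expected contrasts. A minimal sketch of that feature computation, on toy hand-made “image” data (not a full detector):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0,y0)-(x1,y1),
    in O(1) using the summed-area table."""
    s = ii[y1][x1]
    if x0: s -= ii[y1][x0 - 1]
    if y0: s -= ii[y0 - 1][x1]
    if x0 and y0: s += ii[y0 - 1][x0 - 1]
    return s

def eye_cheek_feature(img):
    """Haar-like two-rectangle feature: bright cheek band minus dark
    eye band above it. Large positive values suggest 'face here'."""
    ii = integral_image(img)
    h, w = len(img), len(img[0])
    mid = h // 2
    eyes = rect_sum(ii, 0, 0, w - 1, mid - 1)
    cheeks = rect_sum(ii, 0, mid, w - 1, h - 1)
    return cheeks - eyes

# A toy 4x4 "face": dark top (eyes), bright bottom (cheeks).
face = [[10, 10, 10, 10],
        [10, 10, 10, 10],
        [200, 200, 200, 200],
        [200, 200, 200, 200]]
# The same patch with a dark makeup stripe across the cheeks.
dazzled = [[10, 10, 10, 10],
           [10, 10, 10, 10],
           [10, 10, 200, 200],
           [10, 10, 200, 200]]
print(eye_cheek_feature(face) > eye_cheek_feature(dazzled))  # True
```

The dazzled patch scores half as high on the feature, which is the whole trick: push enough of the cascade’s features below their thresholds and the detector never declares a face.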