The Unexpected Meanings of Facebook Privacy Disclaimers

Paul Gowder has an interesting post over at Prawfblog, “In Defense of Facebook Copyright Disclaimer Status Updates (!!!).” He presents the facts:

…People then decide that, hey, goose, gander, if Facebook can unilaterally change the terms of our agreement by presenting new ones where, theoretically, a user might see them, then a user can unilaterally change the terms of our agreement by presenting new ones where, theoretically, some responsible party in Facebook might see them. Accordingly, they post Facebook statuses declaring that they reserve all kinds of rights in the content they post to Facebook, and expressly denying that Facebook acquires any rights to that content by virtue of that posting.

Before commenting on his analysis, which is worth reading in full, there’s an important takeaway, which is that even on Facebook, and even with Facebook’s investment in making their privacy controls more usable, people want more privacy while they’re using Facebook. Is that everyone? No, but it’s enough for the phenomenon of people posting these notices to get noticed.

His analysis instead goes to what we can learn about how people see the law:

To the contrary, I think the Facebook status-updaters reflect both cause for hope and cause for worry about our legal system. The cause for worry is that the system does seem to present itself as magic words. The Facebook status updates, like the protests of the sovereign citizens (but much more mainstream), seem to me to reflect a serious alienation of the public from the law, in which the law isn’t rational, or a reflection of our collective values and ideas about how we ought to treat one another and organize our civic life. Instead, it’s weaponized ritual, a set of pieces of magic paper or bits on a computer screen, administered by a captured priesthood, which the powerful can use to exercise that power over others. With mere words, unhinged from any semblance of autonomy or agreement, Facebook can (the status-updaters perceive) whisk away your property and your private information. This is of a kind with the sort of alienation that I worried about over the last few posts, but in the civil rather than the criminal context: the perception that the law is something done to one, rather than something one does with others as an autonomous agent as well as a democratic citizen. Whether this appears in the form of one-sided boilerplate contracts or petty police harassment, it’s still potentially alienating, and, for that reason, troubling.

This is spot-on. Let me extend it. These “weaponized rituals” are not just at the level of the law. Our institutions are developing antibodies to unscripted or difficult-to-categorize human participation, because engaging with human participation is expensive to deliver and inconvenient to the organization. We see this in the increasingly ritualized engagement with the courts. Despite regular attempts to make courts operate in plain English, it becomes a headline when “Prisoner wins Supreme Court case after submitting handwritten petition.” (Yes, the guy’s apparently otherwise a jerk, serving a life sentence.) Comments to government agencies are now expected to follow a form (and regular commenters learn to follow it, lest their comments engage the organizational antibodies on procedural grounds). When John Oliver suggested writing to the FCC, its systems crashed and it had to extend the deadline. Submitting Freedom of Information requests to governments, originally meant to increase transparency and engagement, has become so scripted that there are websites to track your requests and departmental failures to comply with the statutory timelines. We have come to accept that our legislators and regulators are looking out for themselves, and no longer ask them to focus on societal good. We are pleasantly surprised when they pay more than lip service to anything beyond their agency’s remit. In such a world, is it any surprise that most people don’t bother to vote?

Such problems are not limited to the law. We no longer talk to the man in the gray flannel suit; we talk to someone reading from a script he wrote. Our interactions with organizations are fenceposted by vague references to “policy.” Telephone script-readers are so irksome to deal with that we all put off making calls, because we know that even asking for a supervisor barely helps. (This underlies why rage-tweeting can actually help cut red tape; it summons a different department to work your way through a problem created by intra-organizational shuffling of costs.) Sometimes the references to policy are not vague, but precise, and the precision itself is a cost-shifting ritual. By demanding a form that’s convenient to itself, an organization can simultaneously call for engagement while making that engagement expensive and frustrating. When engaging requires understanding the system as well as those who are immersed in it, engagement is discouraged. We can see this at Wikipedia, for example, as discussed in blog posts like “The Closed, Unfriendly World of Wikipedia.” Wikipedia has evolved a system for managing disputes, and that system is ritualized. Danny Sullivan doesn’t understand why they want him to jump through hoops and express himself in the way that makes it easy for them to process.

Such ritualized forms of engagement display commitment to the organization. This can inform our understanding of how social engineers work. Much of their success at impersonating employees comes from being fluent in a victim’s jargon, and in the 90s, much of what was published in 2600 was lists of Ma Bell’s acronyms or descriptions of operating procedures. People believe that only an employee would bother to learn such things, and so learning such things acts as an authenticator in ways that infuriate technical system designers.

What Gowder calls rituals can also be viewed as protocols (or protocol messages). They are the formalized, algorithm-friendly, state-machine-altering messages, and thus we’ll see more of them.

Such growth makes systems brittle, as they focus on processing those messages and not others. Brittle systems break in chaotic and often ugly ways.

So let me leave this with a question: how can we design systems which scale without becoming brittle, and also allow for empathy?

Security 101: Show Your List!

Lately I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”

Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
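For the record, here’s what salting looks like in a minimal command-line sketch, using OpenSSL’s passwd tool (this assumes OpenSSL 1.1.1 or later, which supports the -6 SHA-512-crypt scheme; ‘hunter2’ is a placeholder password):

```shell
# Minimal salted-hash sketch (assumes OpenSSL 1.1.1+ for the -6 option).
# A fresh random salt per password means identical passwords hash to
# different values, so one precomputed dictionary can't crack a whole table.
SALT=$(openssl rand -hex 8)            # 16 hex characters of salt
openssl passwd -6 -salt "$SALT" 'hunter2'
# Prints $6$<salt>$<digest>; the salt is stored alongside the hash, so a
# candidate password can be verified later by re-hashing with the same salt.
```

As the paragraph above notes, salting alone is dated advice against GPU crackers; a deliberately slow, memory-hard KDF (scrypt, Argon2) is what a good library would reach for today.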

What I want to argue about is the backwards looking nature of these statements. I want to argue because I did some searching, and not one of those folks I searched for has committed to a list of security 101, or what are the “simple controls” every business should have.

This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.

So I’m going to make three requests for 2015:

  • If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
  • If you’re a reporter and someone tells you “X is security 101” please ask them for their list.
  • Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.

Oh, and since it’s sauce for the gander, here’s my list for individuals:

  • Stay up to date: get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
  • Use a firewall that blocks most inbound traffic.
  • Ensure you have a working backup of your data.

(There are complex arguments about AV software, and a lack of agreement about how to effectively test it. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list. I won’t be telling people not to use AV software.)
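The firewall item on the list translates into a few lines of configuration on most platforms. As one sketch of it, assuming a Linux host with ufw (Ubuntu’s Uncomplicated Firewall); Windows Firewall and the macOS Application Firewall are the equivalents elsewhere:

```shell
# Config sketch: "a firewall that blocks most inbound traffic" via ufw.
# (Assumes ufw is installed and you have root; adjust allowed services
# to taste -- ssh here is just an example of a service you might keep.)
sudo ufw default deny incoming   # drop unsolicited inbound connections
sudo ufw default allow outgoing  # leave normal outbound traffic alone
sudo ufw allow ssh               # re-open only the services you need
sudo ufw enable
```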

By “lately,” I meant in 2012, when I wrote this, right after the LinkedIn breach. But I recently discovered that I hadn’t posted it.

[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]

iOS Subject Key Identifier?

I’m having a problem where the “key identifier” displayed on my iOS device does not match the key fingerprint on my server. In particular, I run:


% openssl x509 -in keyfile.pem -fingerprint -sha1

and I get a 20-byte hash. I also have a 20-byte hash on my phone, but it is not that hash value. I am left wondering if this is a crypto usability fail, or an attack.

Should I expect the output of that openssl invocation to match certificate details on iOS, or is that a different hash? What options to openssl should produce the result I see on my phone?

[Update: it also does not match the output, or a trivial subset of the output, of

% openssl x509 -in keyfile.pem -fingerprint -sha256
(or)

% openssl x509 -in keyfile.pem -fingerprint -sha512

]

[Update 2: iOS displays the “X509v3 Subject Key Identifier”, and you can ask openssl for that via -text, e.g., openssl x509 -in pubkey.pem -text. Thanks to Ryan Sleevi for pointing me down that path.]
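To make the mismatch concrete, here’s a sketch comparing the two values (cert.pem is a placeholder for your own certificate file). The fingerprint is a hash over the entire DER-encoded certificate, while the Subject Key Identifier is an extension typically derived from the subject public key alone, so the two will not match:

```shell
CERT=cert.pem   # placeholder: substitute your own certificate file

# The fingerprint: a digest of the whole DER-encoded certificate.
openssl x509 -in "$CERT" -noout -fingerprint -sha1

# The X509v3 Subject Key Identifier (what iOS shows): an extension whose
# value is normally computed from the subject public key, not the full
# certificate, which is why it differs from the fingerprint above.
openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Key Identifier'
```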