Conference Etiquette: What’s New?

So Bill Brenner has a great article on “How to survive security conferences: 4 tips for the socially anxious.” I’d like to stand by my 2010 guide to “Black Hat Best Practices,” and augment it with something new: a word on etiquette.

Etiquette is not about what fork you use (start from the outside, work in), or an excuse to make you uncomfortable because you forgot to call the Duke “Your Grace.” It’s a system of tools to help otherwise awkward social interactions go more smoothly.

We all meet a lot of people at these conferences, and there’s some truth behind the stereotype that people in technology are bad at “the people skills.” Sometimes, when we see someone, there will be recognition, but the name and full context doesn’t come rushing back. That’s an awkward moment, and it’s worth thinking about the etiquette involved.

When you know you’ve met someone and can’t recall the details, it’s rude to say “remind me who you are,” and so people will do a bunch of things to politely encourage reminders. For example, they’ll say “what’s new” or “what have you been working on lately?” Answers like “nothing new” or “same old stuff” are not helpful to the person who asked. This is an invitation to talk about your work. Even if you haven’t done anything new that’s ready to talk about, you can say something like “I’m still exploring the implications of the work I did on X” or “I’ve wrapped up my project on Y, and I’m looking for a new thing to go frozzle.” If all your work is secret, you can say “Oh, still at DoD, doing stuff for Uncle Sam.”

Whatever your answer will be, it should include something to help people remember who you are.

Why not give it a try this RSA?

BTW, you can get the best list of RSA parties where you can yell your answers to such questions at “RSA Parties Calendar.”

Boyd Video: Patterns of Conflict

John Boyd’s ideas have had a deep impact on the world. He created the concept of the OODA Loop, and talked about the importance of speed (“getting inside your opponent’s loop”) and orientation, and how we determine what’s important.

A lot of people who know about the work of John Boyd also know that he rarely took the time to write. His work was constantly evolving, and for many years, the work existed as scanned photocopies of acetate presentation slides.

In 2002, Robert Coram published a book (which I reviewed here), and in that review, I said:

His writings are there to support a presentation; many of them don’t stand well on their own. Other writers present his ideas better than he did. But they don’t think with the intensity, creativity, or rigor that he brought to his work.

I wasn’t aware that there was video of him presenting, but Jasonmbro has uploaded approximately 5 hours of Boyd presenting his Patterns of Conflict briefing. The audio is not great, but it’s not unusable. There’s an easy-to-read version of that slide collection here. (Those slides are a little later than the video, and so may not line up perfectly.)

The New Cyber Agency Will Likely Cyber Fail

The Washington Post reports that there will be a “New agency to sniff out threats in cyberspace.” This is my first analysis of what’s been made public.

Details are not fully released, but there are some obvious problems, which include:

  • “The quality of the threat analysis will depend on a steady stream of data from the private sector” which continues to not want to send data to the Feds.
  • The agency is based in the Office of the Director of National Intelligence. The world outside the US is concerned that the US spies on them, which means that the new center will get minimal cooperation from any company which does business outside the US.
  • There will be privacy concerns about US citizen information, much as there were with the NCTC. For example, here.
  • The agency is modeled on the National Counter Terrorism Center. See “Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists (2007).” A new agency certainly has upwards of three years to get rolling, because that will totally help.
  • The President continues to ask the wrong questions of the wrong people. (“President Obama wanted to know the details. What was the impact? Who was behind it? Monaco called meetings of the key agencies involved in the investigation, including the FBI, the NSA and the CIA.” But not the private sector investigators who were analyzing the hard drives and the logs?)

It’s all well and good to take stabs at this, but perhaps more useful would be some positive contributions. I have been doing my best to make those contributions.

I sent a letter to the folks back in 2009, asking for more transparency. Similarly, I sent an open letter to the new cyber-czar.

The suggestions there have not been acted upon. Rather than reiterate them, I believe there are human reasons why that’s the case, and so in 2013 I asked the Royal Society to look into the reasons that calls for an NTSB-like function have failed, as part of their research vision for the UK.

Cyber continues to suck. Maybe it’s time to try openness, rather than a new secret agency secretly doing analysis of who’s behind the attacks, rather than of why they succeed, or why our defenses aren’t working. If we can’t get to openness, and apparently we cannot, we should look at the reasons why. We should inventory them, including shame, liability fears, and fear of customers fleeing, and assess their accuracy and predictive value. We should invest in a research program that helps us understand and address them, so that we can get to a proper investigative approach to why cyber is failing. Only then will we be able to do anything about it.

Until then, keep moving those deck chairs.

What CSOs can Learn from Pete Carroll


If you listen to the security echo chamber, after an embarrassing failure like a data breach, you lose your job, right?

Let’s look at Seahawks Coach Pete Carroll, who made what the home town paper called the “Worst Play Call Ever.” With less than a minute to go in the Superbowl, and the game hanging in the balance, the Seahawks passed. It was intercepted, and…game over.

                  Breach                  Lose the Super Bowl
Publicity         News stories, letters   Half of America watches the game
Headline          Another data breach     Worst Play Call Ever
Cost              $187 per record!        Tens of millions in sponsorship
Public response   Guessing, not analysis  Monday morning quarterbacking*
Outcome           CSO loses job           Pete Carroll remains employed†

So what can the CSO learn from Pete Carroll?

First and foremost, have back-to-back winning seasons. Since you don’t have seasons, you’ll need to do something else that builds executive confidence in your decision-making. (Nothing builds confidence like success.)

Second, you don’t need perfect success, you need successful prediction and follow-through. Gunnar Peterson has a great post about the security VP winning battles. As you start winning battles, you also need to predict what will happen. “My team will find 5-10 really important issues, and fixing them pre-ship will save us a mountain of technical debt and emergency events.” Pete Carroll had that—a system that worked.

Executives know that stuff happens. The best laid plans…no plan ever survives contact with the enemy. But if you routinely say things like “one vuln, and it’s over, we’re pwned!” or “a breach here could sink the company!” you lose any credibility you might have. Real execs expect problems to materialize.

Lastly, after what had to be an excruciating call, he took the conversation to next year, to building the team’s confidence, and not dwelling on the past.

What Pete Carroll has is a record of delivering what executives wanted, rather than delivering excuses, hyperbole, or jargon. Do you have that record?

* Admittedly, it started about 5 seconds after the play, and come on, how could I pass that up? (Ahem)
† I’m aware of the gotcha risk here. I wrote this the day after Sony Pictures Chairman Amy Pascal was shuffled off to a new studio.

An Infosec lesson from the “Worst Play Call Ever”

It didn’t take long for the Seahawks’ game-losing pass to get a label.

But as Ed Felten explains, there’s actually some logic to it, and one of his commenters (Chris) points out that Marshawn Lynch scored on only one of his five runs from the one-yard line this season. So, perhaps, in a game in which the Patriots had no interceptions, it was worth the extra play a pass could buy before the clock ran out.
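To make Felten’s logic concrete, here’s a back-of-envelope sketch. Only the one-in-five figure comes from the post; the pass probabilities are my assumptions, chosen purely for illustration:

```python
# Back-of-envelope numbers. Only the 1-in-5 run figure is from the post;
# the pass probabilities are assumptions for illustration.
p_run_td = 1 / 5      # Lynch: 1 TD in 5 runs from the 1-yard line this season
p_pass_td = 0.25      # assumed: chance the pass scores
p_pass_int = 0.02     # assumed: goal-line interceptions are rare

# Option A: run once as the clock winds down.
p_run_only = p_run_td

# Option B: pass first (an incompletion stops the clock), then run if incomplete.
p_pass_then_run = p_pass_td + (1 - p_pass_td - p_pass_int) * p_run_td

print(round(p_run_only, 3), round(p_pass_then_run, 3))
```

Under these assumed numbers, passing first roughly doubles the chance of scoring, because an incompletion stops the clock and preserves the running play. The point isn’t the exact figures; it’s that the call had a defensible logic before the outcome was known.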

We can all see the outcome, and we judge the decision on it, post facto.


In security, we almost never see an outcome so closely tied to a decision. As Jay Jacobs has pointed out, we live in a wicked environment. Unfortunately, we’re quick to snap to judgment when we see a bad outcome, and that makes learning harder. Also, we don’t usually get a chance to see the logic behind a play and assess it.

If only we had a way to shorten those feedback loops, then maybe we could assess what the worst play call in infosec might be.

And in fact, despite my use of snarky linkage, I don’t think we know enough to judge Sony or ChoicePoint. The decisions made by Spaltro at Sony are not unusual. We hear them all the time in security. The outcome at Sony is highly visible, but is it the norm, or is it an outlier? I don’t think we know enough to know the answer.

Hindsight is 20/20 in football. It’s easy to focus in on a single decision. But the lesson from Moneyball, and the lesson from Pete Carroll, is to trust the system and play the odds. Asked afterward about the call, Carroll said, “Really, with no second thoughts or hesitation in that at all.” He has a system, and it got the Seahawks to the very final seconds of the game. And then.

One day, we’ll be able to tell management “our systems worked, and we hit really bad luck.”

[Please keep comments civil, like you always do here.]

The Unexpected Meanings of Facebook Privacy Disclaimers

Paul Gowder has an interesting post over at Prawfblog, “In Defense of Facebook Copyright Disclaimer Status Updates (!!!).” He presents the facts:

…People then decide that, hey, goose, gander, if Facebook can unilaterally change the terms of our agreement by presenting new ones where, theoretically, a user might see them, then a user can unilaterally change the terms of our agreement by presenting new ones where, theoretically, some responsible party in Facebook might see them. Accordingly, they post Facebook statuses declaring that they reserve all kinds of rights in the content they post to Facebook, and expressly denying that Facebook acquires any rights to that content by virtue of that posting.

Before commenting on his analysis, which is worth reading in full, there’s an important takeaway, which is that even on Facebook, and even with Facebook’s investment in making their privacy controls more usable, people want more privacy while they’re using Facebook. Is that everyone? No, but it’s enough for the phenomenon of people posting these notices to get noticed.

His analysis instead goes to what we can learn about how people see the law:

To the contrary, I think the Facebook status-updaters reflect both cause for hope and cause for worry about our legal system. The cause for worry is that the system does seem to present itself as magic words. The Facebook status updates, like the protests of the sovereign citizens (but much more mainstream), seem to me to reflect a serious alienation of the public from the law, in which the law isn’t rational, or a reflection of our collective values and ideas about how we ought to treat one another and organize our civic life. Instead, it’s weaponized ritual, a set of pieces of magic paper or bits on a computer screen, administered by a captured priesthood, which the powerful can use to exercise that power over others. With mere words, unhinged from any semblance of autonomy or agreement, Facebook can (the status-updaters perceive) whisk away your property and your private information. This is of a kind with the sort of alienation that I worried about over the last few posts, but in the civil rather than the criminal context: the perception that the law is something done to one, rather than something one does with others as an autonomous agent as well as a democratic citizen. Whether this appears in the form of one-sided boilerplate contracts or petty police harassment, it’s still potentially alienating, and, for that reason, troubling.

This is spot-on. Let me extend it. These “weaponized rituals” are not just at the level of the law. Our institutions are developing antibodies to unscripted or difficult-to-categorize human participation, because engaging with human participation is expensive to deliver and inconvenient to the organization. We see this in the increasingly ritualized engagement with the courts. Despite regular attempts to make courts operate in plain English, it becomes a headline when “Prisoner wins Supreme Court case after submitting handwritten petition.” (Yes, the guy’s apparently otherwise a jerk, serving a life sentence.) Comments to government agencies are now expected to follow a form (and regular commenters learn to follow it, lest their comments engage the organizational antibodies on procedural grounds). When John Oliver suggested writing to the FCC, its systems crashed and they had to extend the deadline. Submitting Freedom of Information requests to governments, originally meant to increase transparency and engagement, has become so scripted that there are web sites to track your requests and departmental failures to comply with the statutory timelines. We have come to accept that our legislators and regulators are looking out for themselves, and no longer ask them to focus on societal good. We are pleasantly surprised when they pay more than lip service to anything beyond their agency’s remit. In such a world, is it any surprise that most people don’t bother to vote?

Such problems are not limited to the law. We no longer talk to the man in the gray flannel suit; we talk to someone reading from a script he wrote. Our interactions with organizations are fenceposted by vague references to “policy.” Telephone script-readers are so irksome to deal with that we all put off making calls, because we know that even asking for a supervisor barely helps. (This underlies why rage-tweeting can actually help cut red tape; it summons a different department to try to work your way through a problem created by intra-organizational shuffling of costs.) Sometimes the references to policy are not vague, but precise, and the precision itself is a cost-shifting ritual. By demanding a form that’s convenient to itself, an organization can simultaneously call for engagement while making that engagement expensive and frustrating. When engaging requires understanding the system as well as those who are immersed in it do, engagement is discouraged. We can see this at Wikipedia, for example, discussed in a blog post like “The Closed, Unfriendly World of Wikipedia.” Wikipedia has evolved a system for managing disputes, and that system is ritualized. Danny Sullivan doesn’t understand why they want him to jump through hoops and express himself in the way that makes it easy for them to process.

Such ritualized forms of engagement display commitment to the organization. This can inform our understanding of how social engineers work. Much of their success at impersonating employees comes from being fluid in the use of a victim’s jargon, and in the 90s, much of what was published in 2600 was lists of Ma Bell’s acronyms or descriptions of operating procedures. People believe that only an employee would bother to learn such things, and so learning such things acts as an authenticator in ways that infuriate technical system designers.

What Gowder calls rituals can also be viewed as protocols (or protocol messages). They are formalized, algorithm-friendly, state-machine-altering messages, and thus we’ll see more of them.

Such growth makes systems brittle, as they focus on processing those messages and not others. Brittle systems break in chaotic and often ugly ways.
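A toy sketch of that brittleness, with entirely hypothetical message names: a handler that only advances on scripted protocol messages breaks on unscripted human input, while one with a fallback path degrades gracefully.

```python
# Hypothetical scripted message types an organization's intake system accepts.
SCRIPT = {"FORM_A", "FORM_B", "APPEAL_FORM"}

def rigid_handler(message: str) -> str:
    """Advances the state machine only on scripted messages; all else breaks."""
    if message in SCRIPT:
        return f"processed {message}"
    raise ValueError("unrecognized input")  # the human gets bounced

def resilient_handler(message: str) -> str:
    """Same protocol, plus a fallback path for unscripted human input."""
    if message in SCRIPT:
        return f"processed {message}"
    return "routed to a human for review"
```

The fallback path is where the empathy lives, and it’s exactly the part that’s expensive to deliver, which is why organizations keep cutting it.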

So let me leave this with a question: how can we design systems which scale without becoming brittle, and also allow for empathy?

Security 101: Show Your List!

Lately* I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”

Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
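For concreteness, here’s a minimal sketch of salted password storage using Python’s standard library. The function names and iteration count are illustrative, not a claim about what LinkedIn (or anyone else) deployed:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest): a per-user random salt and a slow salted hash."""
    salt = os.urandom(16)  # unique per user, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

A slow KDF like PBKDF2 addresses the GPU-cracker problem directly, and the per-user salt is nearly free to include, which is why a good library still does it.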

What I want to argue about is the backwards-looking nature of these statements. I want to argue because I did some searching, and not one of the folks I searched for has committed to a list of security 101, or of the “simple controls” every business should have.

This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.

So I’m going to make three requests for 2015:

  • If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
  • If you’re a reporter and someone tells you “X is security 101” please ask them for their list.
  • Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.

Oh, and since it’s sauce for the gander, here’s my list for individuals:

  • Stay up to date–get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
  • Use a firewall that blocks most inbound traffic.
  • Ensure you have a working backup of your data.

(There are complex arguments about AV software, and a lack of agreement about how to test it effectively. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list, and I won’t be telling people not to use AV software.)

*By “lately,” I meant in 2012, when I wrote this, right after the Linkedin breach. But I recently discovered that I hadn’t posted.

[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]

iOS Subject Key Identifier?

I’m having a problem where the “key identifier” displayed on my iOS device does not match the key fingerprint on my server. In particular, I run:

% openssl x509 -in keyfile.pem -fingerprint -sha1

and I get a 20-byte hash. I also have a 20-byte hash on my phone, but it is not that hash value. I am left wondering if this is a crypto usability fail, or an attack.

Should I expect the output of that openssl invocation to match the certificate details on iOS, or is that a different hash? What options to openssl would produce the result I see on my phone?

[Update: it also does not match the output, or a trivial subset of the output, of

% openssl x509 -in keyfile.pem -fingerprint -sha256

% openssl x509 -in keyfile.pem -fingerprint -sha512
]


[Update 2: iOS displays the “X509v3 Subject Key Identifier”, and you can ask openssl for that via -text, eg, openssl x509 -in pubkey.pem -text. Thanks to Ryan Sleevi for pointing me down that path.]
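To see the difference for yourself, the commands below generate a throwaway self-signed certificate and print both values. They assume OpenSSL 1.1.1 or later (for -addext), and the filenames are illustrative:

```shell
# Generate a throwaway key and self-signed cert (filenames are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem \
  -out demo-cert.pem -subj "/CN=demo" -days 1 \
  -addext subjectKeyIdentifier=hash 2>/dev/null

# The SHA-1 fingerprint hashes the entire DER-encoded certificate:
openssl x509 -in demo-cert.pem -noout -fingerprint -sha1

# The Subject Key Identifier hashes only the public key, so the
# two 20-byte values will not match:
openssl x509 -in demo-cert.pem -noout -text | grep -A1 'Subject Key Identifier'
```

The fingerprint is a SHA-1 hash of the whole DER-encoded certificate, while the Subject Key Identifier is conventionally (per RFC 5280) a SHA-1 hash of just the subjectPublicKey bit string, which is why the two 20-byte values never match.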