Passwords 2016


I’m excited to see the call for papers for Passwords 2016.

There are a few exciting elements.

  1. First, passwords are in a category of problems that someone recently called “garbage problems.” They’re smelly, messy, and no one really wants to get their hands dirty on them.
  2. Second, they’re important. Despite their very well-known disadvantages, their failure to match any useful security model, and Bill Gates predicting that we’d be done with them within the decade, they have advantages, and have been hard to displace.
  3. Third, they suffer from a common belief that everything to be said has been said.
  4. Fourth, the conference has a variety of submission types, including academic papers and hacker talks. This is important because there are many security research communities doing related work and not talking to one another. Maybe the folks at Passwords can add an anonymous track, for spooks and criminals willing to speak on their previously undocumented practices via Skype or SnowBot? (Ideally, via the SnowBot, as PoC.)

Studying the real problems which plague us is a discipline that medicine and public health have developed. Their professions have space for everyone to talk about the real problems that they face, and there’s a clear focus on “have we really addressed this plague?”

While it’s fun, and valuable, to dig into memory corruption, crypto math, and the other popular topics at security conferences, it’s nice to see people focusing on a real cyber problem that hits us every time we look at a system design.


Image: Mary E. Chollet, via Karen Kapsanis.

A New Way to Tie Security to Business

Image: a woman at a chart, saying “That’s OK, I don’t know what it means either.”

As security professionals, sometimes the advice we get is to think about the security controls we deploy as some mix of “cloud access security brokerage” and “user and entity behavioral analytics” and “next generation endpoint protection.” We’re also supposed to “hunt”, “comply,” and ensure people have had their “awareness” raised. Or perhaps they mean “training,” but how people are expected to act post-training is often maddeningly vague, or worse, unachievable. Frankly, I have trouble making sense of it, and that’s before I read about how your new innovative revolutionary disruptive approach is easy to deploy to ensure that APT can’t get into my network to cloud my vision.

I’m making a little bit of a joke, because otherwise it’s a bit painful to talk about.

Really, we communicate badly. It hurts our ability to drive change to protect our organizations.

A CEO once explained his view of cyber. He said “security folks always jump directly into details that just aren’t important to me. It’s as if I met a financial planner and he started babbling about a mutual fund’s beta before he understood what my family needed.” It stuck with me. Executives are generally smart people with a lot on their plates, and they want us, as security leaders, to make ourselves understood.

I’ve been heads down with a small team, building a new kind of risk management software. It’s designed to improve executive communication. Our first customers are excited and finding that it’s changing the way they engage with their management teams. Right now, we’re looking for a few more forward-looking organizations that want to improve their security, allocate their resources better and link what they’re doing to what the business needs.

If you’re a leader at such a company, please send me an email at [first]@[last].org, leave a comment, or reach out via LinkedIn.

The Evolution of Apple's Differential Privacy

Bruce Schneier comments on “Apple’s Differential Privacy:”

So while I applaud Apple for trying to improve privacy within its business models, I would like some more transparency and some more public scrutiny.

Do we know enough about what’s being done? No, and my bet is that Apple doesn’t know precisely what they’ll ship, and isn’t answering deep technical questions so that they don’t misspeak. I know that when I was at Microsoft, details like that got adjusted as a bigger pile of real data from real customer use informed things. I saw some really interesting shifts surprisingly late in the dev cycle of various products.

I also want to challenge the way Matthew Green closes: “If Apple is going to collect significant amounts of new data from the devices that we depend on so much, we should really make sure they’re doing it right — rather than cheering them for Using Such Cool Ideas.”

But that is a false dichotomy, and it would be silly even if it were not. It’s silly because we can’t be sure they’re doing it right until after they ship and we can see the details. (And perhaps not even then.)

But even more important, the dichotomy is not “are they going to collect substantial data or not?” They are. The value organizations get from being able to observe their users is enormous. As product managers observe what A/B testing in their web properties means to the speed of product improvement, they want to bring that same ability to other platforms. Those that learn fastest will win, for the same reasons that first to market used to win.
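Apple hasn’t published the details of its mechanism, and nothing below is their actual implementation. But to make “collecting data with differential privacy” concrete, here is a minimal sketch, in Python, of randomized response, the textbook local differential privacy technique: each device perturbs its own report before it ever leaves the device, and the collector can still recover accurate population-level statistics.

```python
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise report its flip. Satisfies epsilon-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_true else not truth

def estimate_rate(reports: list[bool], epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1s, recovered from noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    # Invert E[observed] = p * true_rate + (1 - p) * (1 - true_rate)
    return (observed - (1 - p)) / (2 * p - 1)

if __name__ == "__main__":
    epsilon = 1.0
    # Simulate 100,000 users, 30% of whom truly have the attribute being measured.
    true_bits = [random.random() < 0.3 for _ in range(100_000)]
    reports = [randomized_response(b, epsilon) for b in true_bits]
    print(f"true rate      : {sum(true_bits) / len(true_bits):.3f}")
    print(f"estimated rate : {estimate_rate(reports, epsilon):.3f}")
```

No individual report reveals much about any one user, yet the aggregate estimate comes out close to the true rate; that trade-off between per-user noise and population accuracy is exactly where the engineering (and the scrutiny) lives.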

Next, are they going to get it right on the first try? No. Almost guaranteed. Software, as we learned a long time ago, has bugs. As I discussed in “The Evolution of Secure Things:”

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Green (and Schneier) are right to be skeptical, and may even be right to be cynical. We should not lose sight of the fact that Apple is spending rare privacy engineering resources to do better than Microsoft. Near as I can tell, this is an impressive delivery on the commitment to be the company that respects your privacy, and I say that believing that there will be both bugs and design flaws in the implementation. Green has an impressive record of finding such flaws and calling Apple (and others) on them, and I’m optimistic he’ll have happy hunting.

In the meantime, we can, and should, cheer Apple for trying.