Celebrating Alt-Left Lawlessness

Lately, I’ve tried to stay away from the tire fire that American politics has become. I’m reasonably certain that I have more to contribute in other areas. But when the President tries to draw an equivalence between those waving the Nazi flag and those protesting against them, we need to speak about what’s acceptable.

It ought to go without saying that when literal Nazis are on one side of a debate, the other side is in the right.

But apparently, that’s not obvious, so I thought I’d share a plan for a march by the alt-left, under the ominous name of “Operation Overlord.” They were planning to overthrow the legitimate government all along the coast and, through force, replace it with their own puppets.

More seriously, we can have disagreements about what’s best for the country, and it’s bad when we demonize those who disagree with us. Civilized society requires us to accept civil disagreement. It accepts that no one is privileged or disadvantaged by an accident of birth: “race, creed or color,” as the expression goes. But civil disagreement, by definition, precludes violence, advocacy of violence or threats of violence.

The Nazi flag is one such threat. Waving it has no purpose except declaring oneself outside society and at odds with the ideals and principles of good people everywhere.

If you’re in a crowd of Nazis, you should be asking why, and walking away.

If you have doubts about what a President should say, here’s a sample:

Amicus brief in “Carpenter” Supreme Court Case

“In an amicus brief filed in the U.S. Supreme Court, leading technology experts represented by the Knight First Amendment Institute at Columbia University argue that the Fourth Amendment should be understood to prohibit the government from accessing location data tracked by cell phone providers — “cell site location information” — without a warrant.”

For more, please see “In Supreme Court Brief, Technologists Warn Against Warrantless Access to Cell Phone Location Data.” [Update: Susan Landau has a great blog post “Phones Move – and So Should the Law” in which she frames the issues at hand.]

I’m pleased to be one of the experts involved.

Learning From npm’s Rough Few Months

The node package manager (npm) is having a bad few months. Let’s look at what we can do, what other package managers should do, and what we can learn at a policy level, particularly in the U.S. framing of “critical infrastructure.”

People in security who remain focused on the IT side of the house, rather than the development side, may not be familiar with npm. As its website says, “npm is the package manager for JavaScript and the world’s largest software registry. Discover packages of reusable code — and assemble them in powerful new ways.” Odds are excellent that one or more of your websites rely on npm.

I wrote a long post on the subject at the IANS blog.

The Evolution of Ctenophore Brains

From his very first experiments, he could see that these animals were unrelated to jellyfish. In fact, they were profoundly different from any other animal on Earth.

Moroz reached this conclusion by testing the nerve cells of ctenophores for the neurotransmitters serotonin, dopamine and nitric oxide, chemical messengers considered the universal neural language of all animals. But try as he might, he could not find these molecules. The implications were profound.

Read “Aliens in our midst” at Aeon.

Magical Approaches to Threat Modeling

I was watching a talk recently where the speaker said, “STRIDE produces waaaay too many threats! What we really want is a way to quickly get the right threats!”*

He’s right and he’s wrong. There are exactly three ways to get to a short list of the most meaningful threats to a new product, system or service that you’re building. They are:

  • Magically produce the right list
  • Have experts who are so good they never even think about the wrong threat
  • Produce a list that’s too long and prune it

That’s it. (If you see a fourth, please tell me!)

Predictions are hard, especially about the future. It’s hard to know what’s going to go wrong in a system under construction, and it’s harder when that system changes because of your prediction.

So if we don’t want to rely on Harry Potter waving a wand, getting frustrated, and asking Hermione to create the right list, then we’re left with either trusting experts or over-listing and pruning.

Don’t get me wrong. It would be great to be able to wave a magic wand or otherwise rapidly produce the right list without feeling like you’d done too much work. And if you always produce a short list, then your short list is likely to appear to be right.

Now, you may work in an organization with enough security expertise to execute perfect threat models, but I never have, and none of my clients seem to have that abundance either. (Which may also be a Heisenproblem: no organization with that many experts needs to hire a consultant to help them, except to get all their experts aligned.)

Also, I find that when I don’t use a structure, I miss threats. I’ve noticed that I have a recency bias toward attacks I’ve seen lately, and a bias toward “fun” attacks (these days, that’s spoofing, because I enjoy solving those problems). And so I use techniques like STRIDE per element to help structure my analysis.
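
To make the STRIDE-per-element idea concrete, here’s a toy sketch. The element types, the applicability table, and the three-element diagram are my own illustrative assumptions (a common simplification, not authoritative guidance):

```python
# Hypothetical sketch: STRIDE-per-element as a checklist generator.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Which STRIDE categories are conventionally considered per element type.
# This mapping is a simplification; real guidance is more nuanced.
APPLICABLE = {
    "external entity": {"Spoofing", "Repudiation"},
    "process": set(STRIDE),
    "data store": {"Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"},
    "data flow": {"Tampering", "Information disclosure",
                  "Denial of service"},
}

def threats_per_element(elements):
    """Yield (element, threat) pairs to consider, one per applicable category."""
    for name, kind in elements:
        for threat in STRIDE:
            if threat in APPLICABLE[kind]:
                yield (name, threat)

# A made-up three-element diagram.
diagram = [("browser", "external entity"),
           ("web app", "process"),
           ("password db", "data store")]
checklist = list(threats_per_element(diagram))
```

The point of the structure is exactly that it over-produces: every applicable (element, threat) pair lands on the checklist whether or not it turns out to matter, which is what keeps recency and “fun” biases from silently dropping categories.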

It may also be that approaches other than STRIDE produce lists that have a higher concentration of interesting threats, for some definition of “interesting.” Fundamentally, there’s a set of tradeoffs you can make. Those tradeoffs include:

  • Time taken
  • Coverage
  • Skill required
  • Consistency
  • Magic pixie dust required

I’m curious, what other tradeoffs have you seen?

Whatever tradeoffs you may make, given a choice between overproduction and underproduction, you probably want to find too many threats, rather than too few. (How do you know what you’re missing?) Some of getting the right number is the skill that comes from experience, and some of it is simply the grindwork of engineering.
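
The over-produce-and-prune approach can be sketched as a trivial triage pass. The scoring function and every number below are assumptions I made up for illustration; real pruning is judgment, not arithmetic:

```python
# Hypothetical sketch: rank an over-long threat list by a rough
# likelihood-times-impact score and keep the top N.
def prune(threats, keep=5):
    """Return the `keep` highest-scoring candidate threats."""
    ranked = sorted(threats,
                    key=lambda t: t["likelihood"] * t["impact"],
                    reverse=True)
    return ranked[:keep]

# Made-up candidates from a made-up over-production pass.
candidates = [
    {"name": "spoofed login page", "likelihood": 0.8, "impact": 9},
    {"name": "log tampering",      "likelihood": 0.3, "impact": 5},
    {"name": "DoS on sync service", "likelihood": 0.5, "impact": 6},
]
short_list = prune(candidates, keep=2)
```

Note what this buys you over magic: the discarded items are still written down, so when your risk estimates change, you re-rank instead of re-discovering.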

(* The quote is not exact, because I aim to follow Warren Buffett’s excellent advice of praise specifically, criticize generally.)

Photo: Magician, by ThaQeLa.

The Dope Cycle and a Deep Breath

Back in January, I wrote about “The Dope Cycle and the Two Minutes Hate.” In that post, I talked about:

Not kidding: even when you know you’re being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They’re caught up in a Red Queen Race, where they must engage faster and faster just to stay in place. And when you’re in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That’s dopamine, that’s adrenaline, that’s every hormone that’s been covered in Popular Psychology. It’s a dope cycle, and you can read that in every sense of the word dope.

I just discovered a fascinating tool from a company called Dopamine Labs. Dopamine Labs is a company that helps their corporate customers drive engagement: “Apps use advanced software tools that shape and control user behavior. We know because [we sell] it to them.” They’ve released a tool called Space: “Space uses neuroscience and AI to help you kick app addiction. No shame. No sponsors. Just a little breathing room to help you take back control.” As they say: “It’s the same math that we use to get people addicted to apps, just run backwards.”

Space app
There are some fascinating ethical questions involved in selling both windows and bricks. I’m going to say that participants in a Red Queen Race might as well learn what the countermeasures to their techniques are by building them. Space works as a Chrome plugin and as an iOS and Android app. I’ve installed it, and I like it more than I like another tool I’ve been using (Dayboard). I really like Dayboard’s todo list, but feel that it cuts me off in the midst of time wasting, rather than walking me away.

The app is at http://youjustneedspace.com/.

As we go into big conferences, it might be worth installing. (Also as we head into conferences, be excellent to each other. Know and respect your limits and those of others. Assume good intent. Avoid getting pulled into a “Drama Triangle.”)

“Comparing the Usability of Cryptographic APIs”

Obstacles Frame

(The abstract:) Potentially dangerous cryptography errors are well documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits—reducing the decision space, as expected, prevents choice of insecure parameters—simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.

It’s interesting that even when library designers took care to consider the usability of their APIs, usability testing revealed serious issues. But it’s not surprising. The one constant of usability testing is that people surprise you.
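
The paper’s observation that reducing the decision space prevents insecure parameter choices can be illustrated with a two-function interface. To be clear, the names, the salt handling, and the iteration count below are my own assumptions, not from the paper or from any of the libraries it studied:

```python
# A sketch of a "small API surface, secure defaults" design: the caller
# never sees an algorithm name, a salt, or an iteration count, so those
# can't be chosen insecurely. Parameters here are assumptions.
import hashlib
import hmac
import os

_ITERATIONS = 600_000  # assumed PBKDF2-SHA256 work factor
_SALT_LEN = 16

def hash_password(password: str) -> bytes:
    """Return salt || digest as one opaque blob for the caller to store."""
    salt = os.urandom(_SALT_LEN)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, _ITERATIONS)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest from the stored salt and compare in constant time."""
    salt, digest = stored[:_SALT_LEN], stored[_SALT_LEN:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, _ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The paper’s sting is that even this isn’t sufficient: without good docs and examples, developers assigned to simple libraries still shipped insecure code.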

The paper is: “Comparing the Usability of Cryptographic APIs,” Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sascha Fahl (CISPA, Saarland University), Simson Garfinkel (National Institute of Standards and Technology), Doowon Kim (University of Maryland), Michelle Mazurek (University of Maryland), Christian Stransky (CISPA, Saarland University), The Increasingly-misnamed Oakland Conference, 2017.

Humble Bundle

There’s a Humble Bundle on Cybersecurity, full of Wiley books. It includes my threat modeling book, Ross Anderson’s Security Engineering, Ferguson, Schneier and Kohno’s Cryptography Engineering and more.

Wiley bundle twitter post

I hope that this is the best price you’ll ever see on these books. Get ’em while they’re hot.

The bundle goes to support EFF &/or Water Aid America.

Threat Modeling Password Managers

There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman’s “AgileBits Isn’t Forcing 1Password Data to Live in the Cloud,” but also worth reading are Ken White’s “Who moved my cheese, 1Password?,” and “Why We Love 1Password Memberships,” by 1Password maker AgileBits. I’ve recommended 1Password in the past, and I’m not sure whether I agree with AgileBits that “1Password memberships are… the best way to use 1Password.” This post isn’t intended to attack anyone, but to try to sort out what’s at play.

This is a complex situation, and you’ll be shocked, shocked to discover that I think a bit of threat modeling can help. Here’s my model of what we’re working on:

Password manager

Let me walk you through this: There’s a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I’m not drawing those boundaries. The two boundaries displayed are where the data and the “password manager.exe” live. Of course, this might not be an exe; it might be a .app, it might be JavaScript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there’s a boundary around that.

What can go wrong?

If password storage is local, there is no fat target at AgileBits. Even assuming the passwords are stored well (say, with 10K iterations of PBKDF2), they’re more vulnerable if they’re stolen, and they’re easier to steal en masse than if they’re on your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than AgileBits is. That might be true, but detection is the second question; the first is how likely an attacker is to break in. They’ll succeed against you and they’ll succeed against AgileBits, and they’ll get a boatload more from breaking into AgileBits. This is not intended as a slam of AgileBits; it’s an outgrowth of “assume breach.”) I believe AgileBits has a simpler operation than Dropbox, and fewer skilled staff in security operations than Dropbox. The simpler operation probably means there are fewer use cases, plugins, partners, etc., and means AgileBits is more likely to notice some attacks. To me, this nets out as neutral.

Fleishman promises to explain “how AgileBits’s approach to zero-knowledge encryption… may be less risky and less exposed in some ways than using Dropbox to sync vaults.” I literally don’t see his argument; perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg’s comment about how they encrypt the passwords. I think of what they’ve done as a very strong mitigation, with the probably reasonable assumption that they haven’t bolluxed their key generation. See this 1Password Security Design white paper.]
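
To see why that parenthetical about PBKDF2 iteration counts matters for a stolen vault, here’s a back-of-envelope sketch. The attacker speed and the number of candidate passwords are assumptions I made up, not measurements of any real rig:

```python
# Back-of-envelope: each password guess against a PBKDF2-protected vault
# costs `iterations` underlying hash computations, so iteration count is
# a direct multiplier on the attacker's work. All constants are assumed.
RAW_HASHES_PER_SEC = 10e9   # assumed GPU rig: 10 billion raw hashes/sec
GUESSES = 2**40             # assumed candidate-password space (~10^12)

def crack_time_seconds(iterations):
    """Time to try every candidate password once."""
    return GUESSES * iterations / RAW_HASHES_PER_SEC

t_10k = crack_time_seconds(10_000)    # roughly 12.7 days under these assumptions
t_100k = crack_time_seconds(100_000)  # ten times that
```

The shape of the result is the useful part: iterations buy linear attacker cost, which helps against an offline grind but doesn’t change the en-masse exposure question at all.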

To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you’re safe. Not so if your passwords are somewhere else and that somewhere else is compromised.

The next issue is where’s the code? If the password manager executable is stored on your device, then to replace it, the attacker either needs to compromise your device, or to install new code on it. An attacker who can install new code on your computer wins, which is why secure updates matter so much. An attacker who can’t get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, the ease of replacing it goes way up. There are two modes of attack: either you break into one of the web servers and replace the .js files with new ones, or you MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I’ll assume that 1Password uses certificate pinning, but did not chase down where their JS is served.)
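
One standard mitigation for the replaced-.js attack is Subresource Integrity (SRI), where the page pins a hash of the script it expects, so a tampered file fails to load. I haven’t checked whether 1Password uses it; this just shows how an SRI value is computed:

```python
# Compute a Subresource Integrity attribute value: the base64 of a
# SHA-384 digest of the script bytes, prefixed with the algorithm name.
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Return an `integrity` attribute value for a <script> tag."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Hypothetical script and the tag that would pin it.
tag = '<script src="app.js" integrity="%s" crossorigin="anonymous"></script>' % (
    sri_hash(b"console.log('hello');"))
```

SRI helps against a compromised CDN or a MITM on the script fetch, but note it doesn’t help if the attacker can rewrite the HTML page that carries the integrity attribute, which is why it complements rather than replaces pinning and secure delivery.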

Netted out, getting code from a website each time you run is a substantial drop in security.

What should we do about it?

So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20-character random password from your phone into something else is painful.) In their blog post, AgileBits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords when a password manager goes out of business is important.

Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby Tables move your retirement account to the cloud for you? Can a manager control where you store a team vault?

This decision is complicated by wall-of-text descriptions. What I wish is that AgileBits would do a better job of crisply and cleanly laying out the choices their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like this one as a good form, and the data should also be in each app as you set things up.) That’s not to say that AgileBits can’t continue to choose and recommend a default.

Does this help?

After years of working in these forms, I think it’s helpful as a way to break out these issues. I’m curious: does it help you? If not, where could it be better?