The Dope Cycle and a Deep Breath

Back in January, I wrote about “The Dope Cycle and the Two Minutes Hate.” In that post, I talked about:

Not kidding: even when you know you’re being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They’re caught up in a Red Queen Race, where they must engage faster and faster just to stay in place. And when you’re in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That’s dopamine, that’s adrenaline, that’s every hormone that’s been covered in Popular Psychology. It’s a dope cycle, and you can read that in every sense of the word dope.

I just discovered a fascinating tool from a company called Dopamine Labs. Dopamine Labs is a company that helps their corporate customers drive engagement: “Apps use advanced software tools that shape and control user behavior. We know because [we sell] it to them.” They’ve released a tool called Space: “Space uses neuroscience and AI to help you kick app addiction. No shame. No sponsors. Just a little breathing room to help you take back control.” As they say: “It’s the same math that we use to get people addicted to apps, just run backwards.”

Space app
There are some fascinating ethical questions involved in selling both windows and bricks. I’m going to say that participants in a Red Queen race might as well learn what countermeasures to their techniques look like by building them. Space works as a Chrome plugin and as an iOS and Android app. I’ve installed it, and I like it more than another tool I’ve been using, Dayboard. (I really like Dayboard’s todo list, but feel that it cuts me off in the midst of time-wasting, rather than walking me away.)

The app is at

As we go into big conferences, it might be worth installing. (Also as we head into conferences, be excellent to each other. Know and respect your limits and those of others. Assume good intent. Avoid getting pulled into a “Drama Triangle.”)

“Comparing the Usability of Cryptographic APIs”

Obstacles Frame

(Abstract) Potentially dangerous cryptography errors are well documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits—reducing the decision space, as expected, prevents choice of insecure parameters—simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not.
Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.

It’s interesting that even when developers took care to consider usability of their APIs, usability testing revealed serious issues. But it’s not surprising. The one constant of usability testing is that people surprise you.
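The paper’s finding that reducing the decision space prevents insecure parameter choices can be illustrated with a toy sketch in Python. Nothing here is from the paper or any real library; the function names and the fixed parameter values are my own, chosen purely for illustration:

```python
import hashlib
import secrets

# A "low-level" API exposes every knob; callers can (and do) pick bad values.
def derive_key_lowlevel(password: bytes, salt: bytes,
                        algorithm: str, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac(algorithm, password, salt, iterations)

# A "simplified" API removes the decision space: the salt is always freshly
# random, and the hash and iteration count are fixed to safe values.
def derive_key_simple(password: bytes) -> tuple:
    salt = secrets.token_bytes(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# Nothing stops a low-level caller from choosing
# derive_key_lowlevel(pw, b"", "md5", 1): functionally correct, insecure.
```

The point isn’t this particular wrapper; it’s the shape of the tradeoff the paper measures: fewer parameters, fewer ways to be wrong.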

The paper is: “Comparing the Usability of Cryptographic APIs,” Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sascha Fahl (CISPA, Saarland University), Simson Garfinkel (National Institute of Standards and Technology), Doowon Kim (University of Maryland), Michelle Mazurek (University of Maryland), Christian Stransky (CISPA, Saarland University), The Increasingly-misnamed Oakland Conference, 2017.

Threat Modeling Password Managers

There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman’s “AgileBits Isn’t Forcing 1Password Data to Live in the Cloud,” but also worth reading are Ken White’s “Who moved my cheese, 1Password?,” and “Why We Love 1Password Memberships,” by 1Password maker AgileBits. I’ve recommended 1Password in the past, and I’m not sure I agree with AgileBits that “1Password memberships are… the best way to use 1Password.” This post isn’t intended to attack anyone, but to try to sort out what’s at play.

This is a complex situation, and you’ll be shocked, shocked to discover that I think a bit of threat modeling can help. Here’s my model of what we’re working on:

Password manager

Let me walk you through this: There’s a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I’m not drawing those boundaries. The two boundaries displayed are where the data and the “password manager.exe” live. Of course, this might not be an .exe; it might be a .app, or it might be JavaScript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there’s a boundary around that.

What can go wrong?

If password storage is local, there is no fat target at AgileBits. If storage is centralized, then even assuming the passwords are stored well (say, 10K iterations of PBKDF2), they’re more vulnerable if they’re stolen, and they’re easier to steal en masse than if they’re on your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than AgileBits is. That might be true, but detection is the second question; the first is how likely an attacker is to break in. They’ll succeed against you and they’ll succeed against AgileBits, and they’ll get a boatload more from breaking into AgileBits. This is not intended as a slam of AgileBits; it’s an outgrowth of ‘assume breach.’) I believe AgileBits has a simpler operation than Dropbox, and fewer skilled staff in security operations than Dropbox. The simpler operation probably means there are fewer use cases, plugins, partners, etc., and means AgileBits is more likely to notice some attacks. To me, this nets out as neutral. Fleishman promises to explain “how AgileBits’s approach to zero-knowledge encryption… may be less risky and less exposed in some ways than using Dropbox to sync vaults.” I literally don’t see his argument; perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg’s comment about how they encrypt the passwords. I think of what they’ve done as a very strong mitigation, with a probably reasonable assumption that they haven’t bolluxed their key generation. See this 1Password Security Design white paper.]

To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you’re safe. Not so if your passwords are somewhere else and that somewhere else is compromised.

The next issue is: where’s the code? If the password manager executable is stored on your device, then to replace it, the attacker either needs to compromise your device or to install new code on it. An attacker who can install new code on your computer wins, which is why secure updates matter so much. An attacker who can’t get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, the ease of replacing it goes way up. There are two modes of attack: either break into one of the web servers and replace the .js files with new ones, or MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I’ll assume that 1Password uses certificate pinning, but I did not chase down where their JS is served.)
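As a minimal sketch of what a pin check looks like, here’s one way to do it with only Python’s standard library. The helper names are mine; a real client would fetch the certificate with ssl.get_server_certificate() and fail closed when the check fails:

```python
import hashlib
import ssl

def cert_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of a PEM-encoded certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem)  # strip the PEM armor, base64-decode
    return hashlib.sha256(der).hexdigest()

def pin_ok(pem: str, expected_fingerprint: str) -> bool:
    # Compare against the full fingerprint shipped with the app; a prefix
    # match would make it far easier to grind out a colliding certificate.
    return cert_fingerprint(pem) == expected_fingerprint
```

Pinning trades flexibility for safety: rotating the certificate now requires shipping an update, which is exactly why pinning pairs naturally with a secure update channel.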

Netted out, getting code from a website each time you run is a substantial drop in security.

What should we do about it?

So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20-character random password from your phone into something else is painful.) In their blog post, AgileBits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords because a password manager went out of business is important.

Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby move your retirement account tables to the cloud for you? Can a manager control where you store a team vault?

This decision is complicated by walls of text descriptions. I wish that AgileBits would do a better job of crisply and cleanly laying out the choices their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like this one as a good form, and the data should also be in each app as you set things up.) That’s not to say that AgileBits can’t continue to choose and recommend a default.

Does this help?

After years of working in these forms, I think it’s helpful as a way to break out these issues. I’m curious: does it help you? If not, where could it be better?

Umbrella Sharing and Threat Modeling

Shared umbrellas

A month or so ago, I wrote “Bicycling and Threat Modeling,” about new approaches to bike sharing in China. Now I want to share with you “Umbrella-sharing startup loses nearly all of its 300,000 umbrellas in a matter of weeks.”

The Shenzhen-based company was launched earlier this year with a 10 million yuan investment. The concept was similar to those that bike-sharing startups have used to (mostly) great success. Customers use an app on their smartphone to pay a 19 yuan deposit fee for an umbrella, which costs just 50 jiao for every half hour of use.

According to the South China Morning Post, company CEO Zhao Shuping said that the idea came to him after watching bike-sharing schemes take off across China, making him realize that “everything on the street can now be shared.”

I don’t know anything about the Shanghaiist, but it’s quoting a story in the South China Morning Post, which closes:

Last month, a bicycle loan company had to close after 90 per cent of its bikes were stolen.

Secure updates: A threat model

Software updates

Post-Petya there have been a number of alarming articles on insecure update practices. The essence of these stories is that tax software, mandated by the government of Ukraine, was used to distribute the first Petya, and that this can happen elsewhere. Some of these stories are a little alarmist, with claims that unnamed “other” software has also been used in this way. Sometimes the attack is easy because updates are unsigned; other times it’s because they’re also sent over a channel with no security.

The right answer to these stories is to fix the damned update software before people get more scared of updating. That fear will survive long after the threat is addressed. So let me tell you, [as a software publisher] how to do secure updates, in a nutshell.

The goals of an update system are to:

  1. Know what updates are available
  2. Install authentic updates that haven’t been tampered with
  3. Strongly tie updates to the organization whose software is being updated. (Done right, this can also enable whitelisting software.)

Let me elaborate on those requirements. First, know what updates are available — the threat here is that an attacker stores your message “Version 3.1 is the latest revision, get it here” and sends it to a target after you’ve shipped version 3.2. Second, the attacker may try to replace your update package with a new one, possibly using your keys to sign it. If you’re using TLS for channel security, your TLS keys are only as secure as your web server, which is to say, not very. You want to have a signing key that you protect.

So that’s a basic threat model, which leads to a system like this:

  1. Update messages are signed, dated, and sequenced. The code which parses them carefully verifies the signatures on both messages, checks that the date is later than the previous message and the sequence number is higher. If and only if all are true does it…
  2. Get the software package. I like doing this over torrents. Not only does that save you money and improve availability, but it protects you against the “Oh hello there Mr. Snowden” attack. Of course, sometimes a belief that torrents have the “evil bit” set leads to blockages, and so you need a fallback. [Note this originally called the belief “foolish,” but Francois politely pointed out that that was me being foolish.]
  3. Once you have the software package, you need to check that it’s signed with the same key as before. Better still, sign the update and the update message with a key you keep offline, on a machine that has no internet connectivity.

  4. Since all of the verification can be done by software, and the signing can be done with a checklist, PGP/GPG are a fine choice. It’s standard, which means people can run additional checks outside your software, and it’s been analyzed heavily by cryptographers.
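The message checks in step 1 can be sketched in a few lines of Python. The message format here is invented for illustration, and an HMAC stands in for the real public-key signature (a production system would verify, say, an Ed25519 signature against a key whose private half stays offline):

```python
import hashlib
import hmac
import json

def verify_update_message(raw: bytes, key: bytes, sig: bytes,
                          last_date: str, last_seq: int):
    """Return the parsed update message, or None if any check fails."""
    # Check the signature first: never act on unauthenticated input.
    expected = hmac.new(key, raw, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None
    msg = json.loads(raw)
    # Freshness: reject a replayed "version 3.1 is the latest" message.
    if msg["date"] <= last_date:  # ISO dates compare correctly as strings
        return None
    # Monotonic sequence number: reject rollback to an older update.
    if msg["seq"] <= last_seq:
        return None
    return msg

# Illustration only: a real signing key lives offline, never in the client.
key = b"demo-signing-key"
raw = json.dumps({"version": "3.2", "date": "2017-07-02", "seq": 7}).encode()
sig = hmac.new(key, raw, hashlib.sha256).digest()
```

Only if all three checks pass does the client go fetch the package (step 2) and verify the package’s own signature against the same offline key (step 3).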

What’s above follows the four-question framework for threat modeling: what are we working on? (Delivering updates securely); what can go wrong? (spoofing, tampering, denial of service); what are we going to do about it? (signatures and torrents). The remaining question is “did we do a good job?” Please help us assess that! (I wrote this quickly on a Sunday morning. Are there attacks that this design misses? Defenses that should be in place?)

Worthwhile Books: Q2 2017

I’m always looking for interesting books to read. These are the books that I enjoyed enough to recommend in Q2.


Nonfiction, not security

  • Narrative and Numbers, Aswath Damodaran. Presents a compelling approach for using narrative and numbers to discuss business valuation, but the lessons can be extended and used in many places. Also worthwhile is his focus on improving stories by testing them and seeking out contrary views.
  • The End of Average, by Todd Rose. Rose uses narrative to make the case that the mean is not the distribution, and that focusing in on averages leads to all sorts of problems.
  • The Sense of Style, Steven Pinker. I learned a number of things about how to write clearly and how the brain processes words. Some of those things will be in the next edition of Threat Modeling.
  • Starman, Jamie Doran. A biography of Yuri Gagarin, the first person in space.
  • Spacesuit: Fashioning Apollo, Nicholas de Monchaux. A really fascinating socio-technical history of the Apollo spacesuit and the interactions between NASA, with its systems approaches, and the International Latex Company, which at the time mainly made women’s undergarments under the Playtex brand. NASA was focused on manufacturing from plans; ILC fashioned from patterns. The engineered suits didn’t function as clothing. ILC once sent NASA a silent filmstrip of a space-suited employee playing football as part of their argument for their approach. (As an aside, I re-wrote the first sentence here to put the long dependent clause at the end, because of advice in Pinker, and the sentence is better for it.)

Fiction

  • Underground Airlines by Ben Winters. What if Lincoln had been shot, the civil war averted, and slavery still legal in a “hard four” southern states? Not a breezy read, but a fascinating alternate history.
  • Seven Surrenders by Ada Palmer. The second book in a quartet set in the 23rd century. An interestingly non-standard future with deep layers of complexity. Challenging reading because of the language, the nicknames, and Palmer’s fascinating lens on gender, but easier than her first book, Too Like the Lightning. Searching this blog, I am surprised that I never linked to her excellent blog, Ex Urbe. Also, there’s a Crooked Timber seminar on the series.
  • Yesterday’s Kin, Nancy Kress. Nancy Kress, need I say more? Apparently I do: there’s a trilogy coming out, and the first book, Tomorrow’s Kin, is out shortly.

Threat Modeling Encrypted Databases

Adrian Colyer has an interesting summary of a recent paper, “Why your encrypted database is not secure” in his excellent “morning paper” blog.

If we can’t offer protection against active attackers, nor against persistent passive attackers who are able to simply observe enough queries and their responses, the fallback is to focus on weaker guarantees around snapshot attackers, who can only obtain a single static observation of the compromised system (e.g., an attacker that does a one-off exfiltration). Today’s paper pokes holes in the security guarantees offered in the face of snapshot attacks too.

Many recent encrypted databases make strong claims of “provable security” against snapshot attacks. The theoretical models used to support these claims are abstractions. They are not based on analyzing the actual information revealed by a compromised database system and how it can be used to infer the plaintext data.

I take away two things: first, there’s a coalescence towards a standard academic model for database security, and it turns out to be a grounded model. (In contrast to models like the random oracle in crypto.) Second, all models are wrong, and it turns out that the model of a snapshot attacker seems…not all that useful.
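One concrete way a single snapshot leaks: deterministic encryption preserves equality, so ciphertext frequencies mirror plaintext frequencies. Here’s a toy sketch in Python, with a keyed HMAC standing in for a deterministic cipher and an invented column of data:

```python
import hashlib
import hmac
from collections import Counter

key = b"per-column-key"

def det_encrypt(value: str) -> str:
    # Deterministic: the same plaintext always maps to the same ciphertext,
    # so equality (and therefore frequency) survives encryption.
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

# A toy "gender" column, encrypted row by row.
column = ["F", "F", "F", "M", "M", "F", "F", "M", "F", "F"]
snapshot = [det_encrypt(v) for v in column]

# A snapshot attacker who knows the rough plaintext distribution can label
# the most frequent ciphertext without breaking the cryptography at all.
most_common_ct, count = Counter(snapshot).most_common(1)[0]
```

This is exactly the kind of leakage that lives outside the abstractions in which the “provable security” claims are made.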

Voter Records, SSN and Commercial Authentication


A Wednesday letter from the Presidential Advisory Commission on Election Integrity gives secretaries of state about two weeks to provide about a dozen points of voter data. That also would include dates of birth, the last four digits of voters’ Social Security numbers… (NYTimes story) As of this writing, 44 states have refused.

I want to consider only the information security aspects of the letter, which also states: “Please be aware that any documents that are submitted to the full Commission will also be made available to the public.”

Publishing a list of SSNs is prohibited by 42 USC 405(c)(2)(C)(viii), but that only applies to “SSNs or related record[s].” Related record means “any record, list, or compilation that indicates, directly or indirectly, the identity of any individual with respect to whom a social security account number or a request for a social security account number is maintained pursuant to this clause.” So it’s unclear to me if that law prohibits publishing the last 4 digits of the SSN in this way.

So, if a list of names, addresses, dates of birth, and the last four digits of the SSN of every voter is made available, what does that do to the myth that those selfsame four digits can be used as an authenticator?
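The arithmetic behind that myth is worth spelling out: four digits give at most 10,000 possibilities even before publication, and a published voter file reduces the guessing to zero. A sketch:

```python
# The entire "secret" space of a last-4-of-SSN authenticator.
candidates = [f"{n:04d}" for n in range(10_000)]

def guesses_needed(target: str) -> int:
    # Worst case for a naive online attacker enumerating in order.
    return candidates.index(target) + 1

# 10,000 tries is minutes of work against most systems; an attacker holding
# a published voter file skips even that.
```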

I’d like to thank the administration for generating so much winning in authentication, and wish the very best of luck to everyone who now needs to scramble to find an alternate authentication technique.

Image credit: Jeff Hunsaker, “Verified by Visa: Everything We Tell Folks to Avoid.”