Threat Modeling Password Managers

There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman’s “AgileBits Isn’t Forcing 1Password Data to Live in the Cloud,” but also worth reading are Ken White’s “Who moved my cheese, 1Password?,” and “Why We Love 1Password Memberships,” by 1Password maker AgileBits. I’ve recommended 1Password in the past, and I’m not sure if I agree with AgileBits that “1Password memberships are… the best way to use 1Password.” This post isn’t intended to attack anyone, but to try to sort out what’s at play.

This is a complex situation, and you’ll be shocked, shocked to discover that I think a bit of threat modeling can help. Here’s my model of what we’re working on:

[Figure: password manager data flow diagram]

Let me walk you through this: there’s a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I’m not drawing those boundaries. The two boundaries displayed are where the data and the “password manager.exe” live. Of course, this might not be an exe; it might be a .app, or it might be JavaScript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there’s a boundary around that.

What can go wrong?

If password storage is local, there is not a fat target at AgileBits. Even assuming the passwords are stored well (say, 10,000 iterations of PBKDF2), they’re more vulnerable if they’re stolen, and they’re easier to steal en masse from a central service than from your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than AgileBits is. That might be true, but that’s about detection; the first question is how likely an attacker is to break in. They’ll succeed against you and they’ll succeed against AgileBits, and they’ll get a boatload more from breaking into AgileBits. This is not intended as a slam of AgileBits; it’s an outgrowth of ‘assume breach.’) I believe AgileBits has a simpler operation than Dropbox, and fewer skilled security operations staff. The simpler operation probably means there are fewer use cases, plugins, partners, etc., and means AgileBits is more likely to notice some attacks. To me, this nets out as neutral. Fleishman promises to explain “how AgileBits’s approach to zero-knowledge encryption… may be less risky and less exposed in some ways than using Dropbox to sync vaults.” I literally don’t see his argument; perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg’s comment about how they encrypt the passwords. I think of what they’ve done as a very strong mitigation, under the probably reasonable assumption that they haven’t bolluxed their key generation. See this 1Password Security Design white paper.]
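
To make “stored well” concrete, here’s a minimal sketch of PBKDF2-based key derivation (Python; the function name and parameters are illustrative, not how 1Password actually stores anything). The iteration count is the main brake on offline guessing against a stolen vault:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 10_000) -> bytes:
    """Stretch a master password into a 256-bit key with PBKDF2-HMAC-SHA256.

    Every guess costs an attacker `iterations` HMAC computations, so raising
    the count raises the price of cracking a stolen, well-stored vault.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        salt,
        iterations,
        dklen=32,
    )

salt = os.urandom(16)  # random per-vault salt, stored alongside the encrypted vault
vault_key = derive_vault_key("correct horse battery staple", salt)
```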

To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you’re safe. Not so if your passwords are somewhere else and that somewhere else is compromised.

The next issue is: where’s the code? If the password manager executable is stored on your device, then to replace it, the attacker needs either to compromise your device or to get new code installed on it. An attacker who can install new code on your computer wins, which is why secure updates matter so much. An attacker who can’t get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, the ease of replacing it goes way up. There are two modes of attack: either you break into one of the web servers and replace the .js files with new ones, or you MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I’ll assume that 1Password uses certificate pinning, but I did not chase down where their JS is served.)
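
One standard mitigation for the replace-the-served-JS problem (I have not checked whether 1Password uses it) is Subresource Integrity: the page that loads a script pins a hash of the exact file it expects, and the browser refuses to run anything that doesn’t match. A rough sketch of generating that pin, with a hypothetical bundle name:

```python
import base64
import hashlib

def sri_value(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value using SHA-384, the commonly used digest."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# "vault.js" is a hypothetical bundle name, not 1Password's.
with open("vault.js", "rb") as f:
    integrity = sri_value(f.read())

print(f'<script src="vault.js" integrity="{integrity}" crossorigin="anonymous"></script>')
```

Of course, this only helps if the HTML carrying the integrity attribute is itself delivered intact, which is why TLS and pinning still matter.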

Netted out, getting code from a website each time you run is a substantial drop in security.

What should we do about it?

So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20-character random password from your phone into something else is painful.) In their blog post, AgileBits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords because a password manager went out of business matters, too.

Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby Tables move your retirement accounts to the cloud for you? Can a manager control where you store a team vault?

This decision is complicated by wall-of-text descriptions. My wish is that AgileBits would do a better job of crisply and cleanly laying out the choices their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like this one as a good form, and the data should also be in each app as you set things up.) That’s not to say that AgileBits can’t continue to choose and recommend a default.

Does this help?

After years of working in these forms, I think it’s helpful as a way to break out these issues. I’m curious: does it help you? If not, where could it be better?

3 thoughts on “Threat Modeling Password Managers”

  1. Hi, this is Jeffrey Goldberg from AgileBits, the makers of 1Password.

    You are absolutely right to point out that once you have lots of people’s password manager data in one place, it is a big fat juicy target, and we should assume that it will be breached. But we have designed 1Password with more than just PBKDF2 (in our case 100,000 rounds of PBKDF2-HMAC-SHA256). We have added what we are calling Two-Secret Key Derivation (2SKD) specifically for this case.

    With 2SKD, the user has two local, user-only secrets: their Master Password and something we are calling a “Secret Key” (previously we called it the “Account Key”). The Secret Key is a high-entropy value generated by the client on initial sign-up. The Secret Key and the Master Password are combined by the client during key derivation to derive the keys that will be used to unlock further encryption keys and long-term authentication secrets.

    Because of this 2SKD, data captured from our server cannot be used in password cracking attacks. Without the user’s Secret Key there is just no guessing possible. The attacker would need to first obtain the user’s Secret Key from the target’s own devices, in which case there would be no reason to attack our servers at all. (An attacker who can get the Secret Key from a local device will also have a copy of the 1Password data from that same device, and therefore doesn’t really need anything from the 1Password server to begin cracking.)
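
    A much-simplified sketch of that combination (the real construction is in the white paper; the function and parameters here are only illustrative):

    ```python
    import hashlib
    import hmac

    def derive_account_key(master_password: bytes, secret_key: bytes, salt: bytes) -> bytes:
        # Slow, memorable part: stretch the Master Password (illustrative parameters).
        pw_key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100_000, dklen=32)
        # Fast, high-entropy part: mix in the locally generated Secret Key.
        sk_key = hmac.new(secret_key, salt, hashlib.sha256).digest()
        # Combine the two, so guessing the Master Password alone recovers
        # nothing from data captured off the server.
        return bytes(a ^ b for a, b in zip(pw_key, sk_key))
    ```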

    The role of the Secret Key and 2SKD is exactly to make the data that we host “uncrackable”; this is described in our security white paper and also in some recent blog posts.

      1. [Disclaimer: I’m a security consultant for AgileBits, the makers of 1Password]

        There are two ways.

        The first is that we provide a form which includes the Secret Key as well as a space where you can write your Master Password. The expectation is that you’ll store this form somewhere physically secure, so if you lose all of your devices, you can go to the form and you’ve got your Secret Key. Once you’ve memorized your Master Password, you can destroy the copy with your Master Password on it (assuming you ever made one) and just keep your Secret Key. In the case where “loss of device” means “stolen device” or “potentially found and turned over to a bad person,” changing your Master Password also creates a new Secret Key. Keep in mind that because your Secret Key will be on all of your devices, destruction of a single device means you go to another device, copy the Secret Key back over, and you’re back in business.

        The second has to do with a recovery capability that is baked into the product. A set of users in each account (but not us) has access to all of the vault keys, including the keys for a user’s personal vault, which can be decrypted only with a set of keys accessible to those users. The recovery process is performed by having the user create a new Master Password, Secret Key, and public key. The vault keys, which are accessible to users with the recovery capability, are then decrypted, re-encrypted with the user’s new public key, and added back to the database.
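
        A very rough sketch of that re-wrapping step (RSA-OAEP and the names here are only illustrative; the white paper describes the actual keys and formats):

        ```python
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding

        OAEP = padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        )

        def recover_vault_key(wrapped_vault_key, recovery_private_key, new_user_public_key):
            # A recovery-group member unwraps the vault key with the group's private key...
            vault_key = recovery_private_key.decrypt(wrapped_vault_key, OAEP)
            # ...and immediately re-wraps it for the user's newly generated public key,
            # so the plaintext vault key is never stored.
            return new_user_public_key.encrypt(vault_key, OAEP)
        ```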

        Both of these processes are a bit more involved than what I’ve written and the entire process is described in the referenced white paper. Additionally, there is no requirement that you print off that form if you don’t want to, and there’s also no requirement that the recovery process be available.

        What happens if the user doesn’t have a copy of their Secret Key and there’s no set of users who can perform recovery? They are truly out of luck and there is absolutely nothing at all we can do.
