Certifiably Silly
Over at “The Security Practice,” Michael Barrett writes about “Firefox 3.0 and self-signed certificates.” Neither he nor I am representing our respective employers.
…almost everyone who wants to communicate securely using a browser can afford an SSL certificate from CAs such as GoDaddy, Thawte, etc. The cost of single certificates from these sources can only be described as nominal.
There are all sorts of use cases where $29 is not chump change. For example, I own about 8 domains; that’s roughly $240 in “security taxes.” People in the third world would like to communicate securely. But most importantly, the idea assumes that it’s ok to have an infrastructure which is mostly unencrypted, and that we may trust encryption only after the certificate priests bless it. When I wrote about turning on “opportunistic encryption for Postfix,” my goal was encrypting all email. There’s no need for a CA. The threat model is passive adversaries, and there are lots of those.
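That Postfix setup needs no CA at all. A minimal main.cf sketch for opportunistic TLS, assuming a self-signed certificate at the paths shown (the paths are illustrative):

```
# Opportunistic TLS: encrypt when the peer supports STARTTLS,
# fall back to plaintext when it doesn't. This defeats passive
# eavesdroppers without requiring any CA-signed certificate.
smtp_tls_security_level  = may            # outbound mail
smtpd_tls_security_level = may            # inbound mail
smtpd_tls_cert_file = /etc/postfix/cert.pem   # self-signed is fine here
smtpd_tls_key_file  = /etc/postfix/key.pem
```

Note that at the “may” security level Postfix does not verify the peer’s certificate at all, which matches the stated threat model: passive adversaries, not active ones.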
My company is a major target of phishing, and as such we’ve spent quite a bit of time researching what anti-phishing approaches work. We published a whitepaper on this topic (which can be found on the company blog at www.thepaypalblog.com), which explains this in detail. However, a couple of relevant conclusions are that: 1) the vast majority of users simply want to be protected, 2) there’s no single “silver bullet”, and 3) that what we describe as “safer browsers” such as IE 7, and Firefox 3.0 are a significant part of the solution based on their improvements in user visible security indicators and secure-by-default behaviors.
You can’t always get what you want. Really, most people have little understanding of the issues. I think this is in large part because we’ve been talking down to them, in some part because the issues are complex, and in some part because it’s not important enough for them to want to become educated. It’s especially not important enough in light of debates like this one. We should try (sometime) to give people what they need.
I think we’d agree that the vast majority of users want, need and deserve protection that’s as simple and effective as we can make it. I don’t think blocking self-signed certs is a large part of that goal.
I conflated two or three separate ideas in that last sentence, and I should explain them. The general logic is that most users should never be presented with a security dialog that gives them a choice – if they are, there’s typically at least a 50:50 chance that the wrong decision will be made. Instead, the browser should make the decision for them. However, in the case of self-signed certificates it’s almost impossible to see how any technology can disambiguate between legitimate uses and criminal ones.
When viewed through this lens, the changes to the Firefox user experience for self-signed certificates make perfect sense.
Even viewed through the lens presented, the self-signed experience doesn’t make perfect sense, unless you start with the assumption that a $29 SSL cert has some useful security value. I don’t believe it does. What it does is get rid of the ‘self-signed’ warnings. There are cheaper and easier ways to do that. Most of the certificates out there are signed by a company that the relying consumers have never heard of. There’s just not that much verification that can be done for $29. Today, anyone who’s broken into a company’s mail server can buy a fake cert with a stolen credit card.
Now, Michael’s employer is under massive attack. I am sympathetic to their desire to improve things, and I applaud a lot of things that they do. For example, their use of one time password tokens is great. I also think there’s great value to pushing people to recent browsers.
At the same time, it’s sensible for them to want to shift risk; part of me even welcomes the risks and attacks hitting the CAs. But I think that imposing yet another security tax, based on a static analysis of attackers and some certificate-authority pixie dust, isn’t going to help things for very long.
And given the very real costs and the very fuzzy benefits, I think that breaking self-signed certificates is the wrong approach. What’s the right approach? I wrote “Preserving the Internet Channel Against Phishers” three years ago. I think that the advice isn’t silly at all.
I think the most important part to remember is that self-signed certs offer a considerably higher level of security than pure HTTP does. So why don’t Firefox and IE toss up a huge security warning for every HTTP transaction?
I am not a GUI designer, but somehow I think the sane solution would be to show a difference between HTTP, self-signed certs, and normal certs. On the other hand, I am rather suspicious of PKI in general, and especially of the fact that browsers are not configured by default to check CRLs…
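To make the “self-signed” distinction concrete: a self-signed certificate is simply one whose issuer and subject are the same entity, and anyone can mint one in seconds (the hostname below is hypothetical):

```shell
# Mint a throwaway self-signed certificate (hypothetical CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=mail.example.org"

# For a self-signed cert the issuer and the subject are identical,
# which is exactly what a browser sees when it shows the warning.
openssl x509 -in cert.pem -noout -subject -issuer
```

This is also why no technology can tell a “legitimate” self-signed cert from a criminal one: both are keys vouching for themselves, and the difference lies entirely outside the certificate.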
You can have both.
See https://cacert.org
Could we take the cost issue out of this equation please (e.g., statements like “There are all sorts of use cases where $29 is not chump change”)? Because StartCom (http://www.startcom.org/) offers basic no-charge SSL certificates that are usable with Firefox today, and quite possibly with IE, Safari, and Opera in future. (As Matthias notes, CAcert also offers no-charge certs, though they’re not recognized by default in Firefox because CAcert doesn’t yet meet Mozilla’s requirements for including a CA, requirements that were specifically designed not to exclude nonprofit CAs like CAcert.)
I think the cost issue is a red herring, or nearly so; IMO it can and will be addressed one way or another, if not by StartCom or CAcert then by someone else. The real questions as I see them are:
1) Leaving aside the issue of cost, what are the pros and cons of introducing self-signed certificates into the current browser model of SSL?
2) If the advantages of introducing self-signed certificates into this model outweigh the disadvantages, what is the best approach (from a technical and user experience perspective) to introduce self-signed certificates into the current SSL model?
3) If there is a good technical/UX approach to introduce self-signed certificates into the current SSL model, what is the likelihood of such an approach being adopted on a universal basis (i.e., by all browser vendors), and how might this be made more likely?
What do you find non-compelling about Jonathan’s post here: http://blog.johnath.com/2008/08/05/ssl-question-corner/
Is your major complaint the general architecture of HTTP and HTTPS, the existing status quo, or Firefox’s slightly different new behavior?
There are a lot of browser behaviors we could modify related to these sorts of things. Things like the ForceHTTPS proposal from Collin Jackson and Adam Barth. Things like new UI in Firefox that tells you whether you’ve visited a given site a number of times before, etc.
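For context on ForceHTTPS: the Jackson/Barth proposal later evolved into the Strict-Transport-Security header (HSTS), which lets a site tell the browser to refuse any non-HTTPS connection to it in the future. A minimal sketch, assuming an nginx server block (the hostname is hypothetical):

```
server {
    listen 443 ssl;
    server_name www.example-bank.com;   # hypothetical host
    # Browsers that have seen this header once will refuse plain
    # HTTP connections to this host for the next year.
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```

Notably, under HSTS certificate errors become hard failures with no click-through option, which bears directly on the self-signed-cert debate: the site, not the user, decides that the warning cannot be overridden.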
Your solutions to phishing, while a part of the picture, aren’t a complete solution and don’t fully address the problem.
Frank, very interesting questions, I’ll try to respond in depth tonight.
Andy, I don’t believe certification as done today adds trust. In the course of this conversation, I’m learning that there are CAs I’ve never heard of added to the FF root store.
How does having these StartCom people sign some bits influence when I should believe what’s in a certificate? What’s the origin of my trust in StartCom, an organization I hadn’t heard of before today?
Also, I’d love to hear how my thoughts in ‘preserving’ are not complete–what parts of the problem survive in relevant ways? (My goal in creating them was to break what I see as the crux–the links in email.)
“What’s the origin of my trust in StartCom, an organization I hadn’t heard of before today?” I think the most straightforward answer is that this is a case where (at least some) users are implicitly relying on the browser vendor to make decisions like this on their behalf. In this particular case I (acting as Mozilla’s designated person for this sort of thing) do happen to know who StartCom is, and made a conscious decision to approve including its root certificate in Mozilla.
Some users don’t like browser vendors deciding things for them, some do (or, at least, we don’t hear them objecting to it). Users who don’t like it typically do one of two things. Some use the built-in browser mechanisms to change how browsers work for them (e.g., adding or disabling particular root CAs), and some lobby the browser vendors to change the way the browsers work for everyone.
IMO those who do the latter in the case of SSL and self-signed certs need to make a compelling case for why and how this should be done, given that this would be a pretty major change to the traditional browser model.
I’m currently doing work for a company selling perimeter security devices to small-and-medium businesses. It is operated through a web UI over HTTPS.
We need to do this via a self-signed cert; it’s impossible to do it any other way, unless we do some sickening level of DNS poisoning.
The Firefox 3 warning is much better than the Internet Explorer 7 warning. FF3 gives a warning and tells you “legitimate public sites will not ask you to do this,” whereas IE7 just panics and says “don’t continue, I’m warning you.”
Although the self-signed warnings work directly against us — we have to explain to a customer at least once a week why the browsers do it, and why it’s a good idea for them to do it, and why we cannot comply — it’s still a better security posture than what comes before.
Now, EV certs? Bah, what a distraction. Unless you’re a CA, in which case it’s a nice new revenue source.
“Also, I’d love to hear how my thoughts in ‘preserving’ are not complete–what parts of the problem survive in relevant ways?” First, I think your advice in “Preserving” is still perfectly valid, and I wish more financial institutions would follow it.
I think one major issue is that it’s not always convenient to reach a financial institution through a bookmark. For example, I recently bought Girl Talk’s new album “Feed the Animals” online, following a link from his site to a PayPal-hosted payment page at which I could log in and make the payment. This was certainly more convenient than going separately to a bookmarked PayPal page to authorize a payment.
Also, Firefox 3 put up the “PayPal Inc. (US)” indicator based on PayPal’s use of an EV cert, which was a nice verification that I was signing in to the right service. Though to be fair, this same type of indicator could also have been provided by the browser based on a prior decision by me to designate PayPal as a site I go to make payments. (In fact, this could have been an optional capability in the bookmarks system.)
So the advice I would add for financial institutions is: First, get an EV cert; I don’t see any real downside to doing this, and I think it adds useful information for users. And second, if/when browsers support EV-like indicators for arbitrary user-designated sites, recommend that users do this in addition to bookmarking the site in the normal way, to take care of cases where the user happens to end up on the site in the course of browsing other sites.
Adam,
From a completeness perspective plain old end-user training isn’t really an option right now. If I stopped sending HTML mail with links in it today, how exactly would I communicate that successfully to my customers so that they wouldn’t just click the links in HTML email from the phisher?
Additionally, any large company runs lots of websites. They are customized on a per-country basis, or they are somewhat related sites that, for security reasons (the same-origin policy, for example), you don’t want on your core domain. So you end up with lots of URLs.
Frank’s point about checking out with PayPal is pretty important. Not everyone is trying to just periodically log in and check their balance or statement.
What this means is that your description of how to solve this problem neglects a lot of real world situations and uses of websites. Your guidance applies to a bank perhaps, but certainly not to many other large e-commerce sites.
I think one of the problems in this space is that no one has explained to either the everyday users of the net or the commercial customers on the net what all of this means. The SSL EV green bar means “You know who is at the other end,” but it is often described to users as “you’re talking to a trusted site” or “you’re safe here.”
The problem is that an EV site with a cross-site scripting bug isn’t much like any of those. If someone exploits the bug and steals your ID, then you certainly aren’t safe; you only know who one of the entities you’re talking to is, and your trust in them may feel more than a little misplaced.
So long as we oversimplify to the masses, they won’t understand, they will be surprised, and things will break. And before we tell them what it all means, we do rather need to decide amongst ourselves, and I don’t think we’re in great accord.
JimB.
Andy,
When I say training, I mean implicit training. People are pattern-sensing *machines*. We sense patterns everywhere, regardless of whether they’re real or not. It’s part of how we make sense of a big world. Right now, users see conflicting messages from their financial institutions, and across them, and so can’t construct good patterns, which is the goal of implicit training.