Why Johnny Can’t Bank Safely
Stuart E. Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer have written a paper that examines the behavior of people doing on-line banking under various experimentally manipulated conditions.
The paper is getting some attention, for example in the New York Times and at Slashdot.
What Schechter et al. find is that despite increasingly alarming indicators that something may be amiss, subjects frequently provided their passwords to an on-line banking site with which they were at least somewhat familiar. The absence of indicators that SSL is in use, and the absence of an image-based site-authenticity indicator such as SiteKey (the authors do not mention which bank was involved in the study), were almost entirely ignored by subjects. Only a relatively dire IE7-style warning page seemed to dissuade them, and even then over a third logged in when their real credentials, at their real bank, were involved.
The press is focusing on the SiteKey angle. The hook seems to be this: even when this highly touted anti-phishing feature is absent (and a suspicious text box left in its place), people merrily supply their passwords. Therefore, SiteKey doesn’t help.
Another aspect of this study is worthy of note. One of the experimental treatments was whether subjects used their own account credentials, or whether — as instructed by the researchers — they played the role of a fictitious person using credentials supplied by the researchers (with and without a lecture about security).
Unshockingly enough, people behaved “more securely” (my words, not the study’s) when their real bank accounts were on the line.
So, even if we know that people act more securely when they have some skin in the game, how do we explain it when they nonetheless do seemingly dumb things?
This is where I want to see some follow-up work. If the SiteKey-style images aren’t there, and if people have been warned to look for them, what were they thinking when they just clicked on by? Why were they thinking that? Why weren’t they thinking precisely what they had been told to think, namely that this could be an attempt at fraud? When a blatant message was presented, the equivalent of a blinking neon sign, it helped, but why did a third of people disregard it? Did they read it? Was it “pop-up fatigue” at work? Do people not care about SSL indicators because they’ve seen one too many “secure login” pages that collect creds via HTTP-based forms and simply POST them via SSL? Is it that all this web security stuff is indistinguishable from magic (hard to believe of the young Harvard-area types that were the subjects of this study, but hey, maybe they were visiting from Somerville or Boston)?
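To make the SSL question concrete, here’s a minimal sketch (hypothetical URLs and markup, not any real bank’s) of the pattern I mean: a login page served over plain HTTP whose form POSTs credentials to an HTTPS endpoint. The submission is encrypted, but the page the user actually looks at shows no SSL indicator, and anyone who can tamper with the unprotected page can rewrite where the form posts.

```python
# Hypothetical illustration of the "secure login over plain HTTP" anti-pattern.
from urllib.parse import urlparse
import re

page_url = "http://www.bank.example/login"   # the page itself is served without SSL
page_html = '<form action="https://www.bank.example/auth" method="post">'

page_uses_ssl = urlparse(page_url).scheme == "https"
form_action = re.search(r'action="([^"]+)"', page_html).group(1)
post_uses_ssl = urlparse(form_action).scheme == "https"

print("login page served over SSL: ", page_uses_ssl)   # False: no lock icon shown
print("credentials POSTed over SSL:", post_uses_ssl)   # True: but the user can't tell
```

A user who sees enough pages like that may reasonably conclude that the lock icon tells them nothing.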
These are important questions, and more and more is riding on them.
I haven’t seen any figures on losses due to phishing that I can remember offhand, but I strongly suspect that they are on the rise. Moreover, as operating systems and web browsers become more secure, it’s increasingly important for businesses like banks to understand the human side of these technologies because that’s where fraudsters will take aim. What people think when they interact with computers, the mental models they use, how they react to cues presented to them by applications and web sites, and how all of these mix with things they already know (or believe) about sites (“It must be reliable — it’s FooBarCoLand National Bank”) are things that will increase in importance.
I’m eager to learn more.
(Credit where credit’s due: 0, 1)
My suspicion is that people ignore a missing SiteKey because they’ve been trained, through experience, to expect websites to be flakey. How often have you gone to a popular site and had some of the images not load? If you’re like me, pretty frequently.
I agree with what Orv has said. People are so used to computer systems crashing and their behavior deviating from the ‘correct’ way that they seem to have developed an ‘ignore’ rule for any deviant behavior.
Strangely enough, these results will not come as a surprise to at least one bank I know of. A year or so ago, when I had the chance to talk to one of the bigshots in the bank’s security and fraud division, he explicitly mentioned that the system is so vulnerable to people’s ignorant attitudes that he, for one, did not see the ROI on it. They still had to implement it because it was the ‘in’ thing!
It goes without saying that the bank will remain anonymous here 🙂
“People are so used to computer systems crashing and their behavior deviating from the ‘correct’ way that they seem to have developed an ‘ignore’ rule for any deviant behavior.”
This is a really great point. Add in the spotty reliability (from a user’s standpoint) of web systems and you have a perfect storm of low expectations.
There are a few things to think about in this context.
First, this group has an agenda. It’s “usable security”, which is fine; we all have our windmills to tilt at (mine happens to be risk). In their case, you’re talking about coming up with a standard “look and feel” expectation across all platforms and across many vendors, and then starting the education process for some 600 million (if you believe the numbers) users. Daunting task, no?
Second, there’s an assumption that the f.i.’s _care_. We’re imputing a risk tolerance of zero losses to the f.i. when, as a matter of fact, they may or may not be willing to write off losses from phishing.
Third, there’s an assumption that because the user can be duped out of their username and password, the result will be a successful compromise. As you say, Chris,
“The press is focusing on the SiteKey angle. The hook seems to be this: even when this highly touted anti-phishing feature is absent (and a suspicious text box left in its place), people merrily supply their passwords. Therefore, SiteKey doesn’t help.”
I’ve had the pleasure of doing risk analysis for some large banks on this very FFIEC guidance, and many are putting multiple controls in place. Also note that, in practice, the sort of attack needed to succeed and create a control failure in their scenario, while certainly possible, seems like a bit of work. Threat agents tend to be economical with their resources, and even SiteKey alone deters all but the absolutely determined threat source.
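To put rough numbers on that (invented for illustration, not from any actual bank’s analysis), here’s a back-of-the-envelope attacker-economics sketch: each added control cuts the success rate, and at some point the expected payoff of the campaign goes negative.

```python
# Back-of-the-envelope attacker economics with made-up numbers; the only
# point is that layered controls shrink the expected payoff of an attack.
value_per_account = 500.0   # hypothetical fraud value per compromised account
cost_of_campaign = 2000.0   # hypothetical cost to mount the phishing campaign
victims_reached = 100

for p_success, controls in [
    (0.40, "no extra controls"),
    (0.10, "SiteKey-style image check"),
    (0.02, "SiteKey plus back-end anomaly detection"),
]:
    expected = p_success * victims_reached * value_per_account - cost_of_campaign
    print(f"{controls:45s} expected payoff: ${expected:8.0f}")
```

With the made-up numbers above, the last line comes out negative: the rational, economical threat agent walks away.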
I thought Srijith’s comment was interesting:
“…he explicitly mentioned that the system is so vulnerable to people’s ignorant attitudes that he, for one, did not see the ROI on it. They still had to implement it because it was the ‘in’ thing!”
That jibes with some banks I’ve worked with. The risk they face from phishing losses may be tolerable, but the fines and judgments from the government for *not* buying a vendor solution amplify their potential losses into an unacceptable range.
I happen to believe that one angle we in the infosec community are forgetting is “risk transference”. These days we’re all enamored with free checking and free on-line billpay. I’d be willing to pay a few bucks per month for the convenience if that money went toward anti-phishing/identity-theft insurance _as long as the insurer guaranteed timely replacement_.
I do believe that insurance, plus a few more controls added to products like SiteKey (defense in depth happens to be a good thing; some bank CISOs I’ve had the pleasure of working with won’t stop talking about Cyota), won’t make the problem go away, but will make the potential impact tolerable for everyone involved: bank, consumer, and government.
I’m glad to see some real experiments on this subject. The results from this one were consistent with my pre-existing guesses. My guess about the Pet Name Toolbar and Passpet is that they would protect users in almost all cases in a similar experiment. (Disclaimer: I’m biased, of course — I’m partially responsible for some of the ideas behind those tools.)
http://petname.mozdev.org/
http://passpet.org/
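For readers who don’t want to click through, here’s a minimal sketch (my own toy illustration, not the actual toolbar code) of the petname idea: the user binds a private nickname to a site’s identity (simplified here to the URL’s host; the real tools key on stronger identity), and the tool displays that nickname only on an exact match, so a look-alike phishing domain shows up as unrecognized.

```python
# Toy petname store: host -> user-chosen nickname.
from urllib.parse import urlparse

petnames = {}

def remember(url, nickname):
    """Bind a private, user-chosen nickname to the site's host."""
    petnames[urlparse(url).netloc] = nickname

def indicator(url):
    """What the toolbar would display for this URL."""
    return petnames.get(urlparse(url).netloc, "!! unrecognized site !!")

remember("https://www.bank.example/login", "my checking account")
print(indicator("https://www.bank.example/login"))       # my checking account
print(indicator("https://www.bank-example.evil/login"))  # !! unrecognized site !!
```

Because the nickname lives only with the user, a phisher can’t replicate it the way they can copy a logo.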