Author: Chandler

Engineers vs. Scammers

Adam recently sent me a link to a paper titled “Understanding scam victims: seven principles for systems security.”  The paper examines a number of real-world (i.e. face-to-face) frauds and then extrapolates security principles that can be applied generically to both face-to-face and information or IT security problems.

By illustrating these principles with examples taken from the British television series The Real Hustle, they provide accessible examples of how reasonable, intelligent people can be manipulated into making extremely poor (for them) decisions and being defrauded as a result. Perhaps you’re thinking, “That’s all well and good, but what do I care about other people getting scammed due to behaviors that I would like to believe I would never engage in?”

It’s a nice idea, and while it may hold true for some of them, such as the Dishonesty Principle (“It’s illegal, that’s why you’re getting such a good deal.”), many of these scams work not because the person is trying to do something sneaky, but because they’re either trying to “Do The Right Thing” (the Social Compliance Principle), just trying to get the job done (the Distraction Principle), or just plain getting fooled (the Deception Principle).

So, like it or not, the paper would seem to tell us, you’re Damned if You Do, Damned If You Don’t, and eventually you’ll let your guard down at just the wrong time. The sub-title of the paper might as well have been, “Why your security system will eventually fail, no matter how good it is.”

So what’s the point of trying?

Well, first off, because all hope is not lost—even if you don’t read the paper, there are a number of points to consider, two of which I want to call out because they are essential to designing or analyzing a security system (or, really, a system which requires a degree of security):

• Identify the cases where the security of your system depends on an authentication task (of known people, of unknown people or of “objects”, including cash machines and web sites) performed by humans.

• Understand and acknowledge that users are very good at recognizing known people but easily deceived when asked to “authenticate”, in the widest sense of the word, “objects” or unknown people.

By understanding how and why systems fail, we can design them in such a manner as to avoid the failure. For example, never forget that authentication is really a two-way street, even though most people are at best bad at, and in general oblivious to, their role in the problem.

In the case of an ATM, the traditional security efforts are focused on protecting the ATM from malicious users. The fact that users must also be protected from malicious ATMs never seems to come up. Likewise, phishing and other forms of credential harvesting depend on the victim being unable to accurately authenticate the requester of their credentials, whether by falling prey to Distraction, Deception, or Social Compliance.

By understanding this and explicitly forcing that problem to be considered “in-scope” by the system designers, we accomplish two important security goals. First, we address the fact that authentication is a two-way street, even if it is only a formal process (e.g. logging in) in one direction. Second, we expand the pool of people working on solving the problem and thus potentially create a valuable innovation which can be applied to that problem elsewhere.

What we can’t do is take the easy way out and “blame the users.” In fact, the authors even close their paper by reminding us of this fact:

Our message for the system security architect is that it is naïve and pointless just to lay the blame on the users and whinge that “the system I designed would be secure if only users were less gullible”; instead, the successful security designer seeking a robust solution will acknowledge the existence of these vulnerabilities as an unavoidable consequence of human nature and will actively build safeguards that prevent their exploitation.

While Adam, Alex and I were discussing the paper, Adam took the bold step of declaring that,

The principles that tell an engineer what to do are better than those that tell a scammer what to do.

Personally, I’m going to confess that while I don’t disagree with his statement, I don’t think it matters, either. Engineering principles can help us make better decisions and design better systems, but unfortunately, they don’t let us make perfect decisions or design perfect solutions. Fortunately, even if they did we’d still have innovation elsewhere to create new and interesting problems which people would employ us to solve.

Regardless, there will always be an interplay of innovation and reaction on both sides of the equation—for scammers or attackers and for security engineers or defenders*. The attackers find a hole, the defenders find a way to close it or render it ineffective. So the attackers find a new hole, ad infinitum. The hole can be a weakness in an IT system, a business process, or, as is the case in many of the examples in Part One, the human brain.

Here’s where it gets tricky, though. Most defenders work for Someone Else. By that, I mean that they are employees, either directly or by contract, of some other entity which is in the business of Getting Something Done. The entity is typically not in the business of Securing Things. Thus, the Distraction Principle is already working against us before we ever even get to work in the morning.

Next, when the asset is not something obviously valuable, such as money, people’s ability to recognize it as such fails rapidly, especially when they interact with it on a day-to-day basis. This is especially true when trying to protect trade secrets and other Intellectual Assets. I have been in meetings where Very Senior people were asked to identify the critical secrets in their branch of the organization and were unable to do so. It’s not that they didn’t have any; it’s that they couldn’t pick them out of the crowd of their responsibilities, because everyone they dealt with was also authorized to see them, so they had no recurring reminder or other filtering mechanism. This is one of the reasons that Top Secret documents are stamped “Top Secret.”

In the examples from the paper, scammers utilize the opposite form of this problem in their principle of Deception, by convincing people that valueless things are actually, in fact, valuable—fake diamond rings, TV boxes with rocks in them, etc. In the corporate world, people don’t know, understand or remember what’s valuable, and thus are unable to properly prioritize protecting it among their other responsibilities (the Distraction Principle again).

Thus, I would argue that the issue is that people are just bad at accurately assessing value; that their ability to do so degrades over time; and that their ability is further weakened when scammers manipulate them.  Call it the Valuation Principle. This, in turn, makes them more vulnerable to a variety of ways of losing their valuables, be it cash, a car, or a Trade Secret, by application of the other Principles in the paper.

The challenge is that while it’s irrational to protect everything if only a small portion of the assets need the highest level of protection, most people (and thus, their organizations) are really bad at determining what level of security an asset actually requires (even with tools like classification and risk assessment).  Cost-effective Information Protection is as much about determining what to protect as ensuring that it’s protected.

* I assign these roles under the assumption that the defender holds an asset that an attacker wants to access or possess.

Rational Ignorance: The Users' view of security

Cormac Herley at Microsoft Research has done us all a favor and released a paper, “So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users,” which opens its abstract with:

It is often suggested that users are hopelessly lazy and unmotivated on security questions. They chose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective.

And you know it’s going to be good when they write:

Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected.  Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually.  When that fraction is small, designing security advice that is beneficial is very hard.  For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

People are not stupid.  They make what we, as relative experts on the topic of security, perceive to be bad decisions, but this paper argues that their behavior is rational.

[W]e argue for a third view, which is that users’ rejection of the security advice they receive is entirely rational from an economic viewpoint.  The advice offers to shield them from the direct costs of attacks, but burdens them with increased indirect costs, or externalities. Since the direct costs are generally small relative to the indirect ones they reject this bargain. Since victimization is rare, and imposes a one-time cost, while security advice applies to everyone and is an ongoing cost, the burden ends up being larger than that caused by the ill it addresses.

The paper provides a good, accessible overview of externalities and rational behavior, using spam as an example.

For example, Kanich et al. [32] document a campaign of 350 million spam messages sent for $2731 worth of sales made. If 1% of the spam made it into in-boxes, and each message in an inbox absorbed 2 seconds of the recipient’s time, this represents 1944 hours of user time wasted, or $28188 at twice the US minimum wage of $7.25 per hour.

Coincidentally, we get a little over 300 million spam messages into our corporate email gateways every month, which means that I can compare the cost-per-delete-click (at $7.25/hour) against the cost of our corporate spam filtering contract without having to do any real math.  We pay about $50,000/month for filtering, and our white-collar employees cost over $14/hour, so we’re getting a pretty good deal.
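The back-of-the-envelope comparison can be sketched in a few lines. The ~300 million messages/month and $50,000/month contract are the figures above; the two-seconds-per-delete assumption comes from the Kanich example quoted from the paper:

```python
# Sketch: cost of manually deleting spam vs. the filtering contract.
# Figures from the post: ~300M spam/month into our gateways, $50k/month filtering.
# Assumption (from the Kanich et al. example): 2 seconds of user time per message.

SPAM_PER_MONTH = 300_000_000
SECONDS_PER_DELETE = 2
FILTER_COST_PER_MONTH = 50_000  # dollars

def delete_cost(hourly_wage: float) -> float:
    """Dollar value of the user time spent deleting a month of spam."""
    hours_wasted = SPAM_PER_MONTH * SECONDS_PER_DELETE / 3600
    return hours_wasted * hourly_wage

print(f"At minimum wage ($7.25/hr): ${delete_cost(7.25):,.0f}/month")
print(f"At $14/hr (white-collar):   ${delete_cost(14.0):,.0f}/month")
print(f"Filtering contract:         ${FILTER_COST_PER_MONTH:,}/month")
```

Even at minimum wage, the delete-time cost runs to seven figures per month, so the $50,000 contract wins by more than an order of magnitude.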

That’s just the time that would be spent seeing and deleting the messages, don’t forget.  Fourteen dollars per hour completely ignores the cost of attention disruption (much more than two seconds) and the Direct Losses, either because I cannot quantify them, which causes the entire argument to appear specious in the eyes of Senior Leadership, or because I am not at liberty to disclose enough detail to pass the “cannot quantify” test.

They then go on to document in fairly accessible models why password complexity, anti-phishing awareness, and SSL Errors are cost-inefficient, and get into a favorite topic of mine, the difficulty of defining security losses or the benefit from adding safeguards at the end-user level.  This section should be mandatory reading for any security person who attempts to talk to non-security people about the topic–i.e. all of us.

What’s missing from the paper, though, is the next logical step of analysis, the appropriate Risk Management strategy in response to the information presented. Hopefully that will be the follow-on paper, because as it was, it felt like a bit of a cliff-hanger to me.  All of the discussion assumes that mitigation is the only option.  This may feel right from a Security perspective, but it’s probably not the correct risk management decision.

To manage the risk in these cases, though, I see a strong argument for risk transfer.  High-Impact, Low-Likelihood events are best managed by aggregating the risk into a pool and spreading the cost across the pool, i.e. buying insurance against these losses.  If you could buy anti-phishing insurance for $1/person/year (which, realistically, is multiples of what it could cost if 200 million people all bought in) rather than throwing large, uncoordinated piles of money at ineffective awareness training or technical countermeasures which will probably be out-innovated by the attackers in hours or days, why wouldn’t you?
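The pooling argument can be made concrete with an actuarially fair premium calculation: expected loss per person is just the victimization rate times the average loss. The numbers below are illustrative assumptions (the 0.01% rate echoes the Herley paper's example; the $2,000 average loss is hypothetical), not figures from any actual insurance product:

```python
# Illustrative risk-pooling sketch for a low-likelihood, high-impact event.
# All figures are assumptions for illustration, not real actuarial data.

POOL_SIZE = 200_000_000       # insured population
ANNUAL_VICTIM_RATE = 0.0001   # 0.01% of users victimized per year
AVG_DIRECT_LOSS = 2_000       # hypothetical average direct loss per victim, dollars

# Actuarially fair premium: expected loss per person per year.
fair_premium = ANNUAL_VICTIM_RATE * AVG_DIRECT_LOSS
total_annual_payout = POOL_SIZE * ANNUAL_VICTIM_RATE * AVG_DIRECT_LOSS

print(f"Fair premium per person: ${fair_premium:.2f}/year")
print(f"Total annual payouts:    ${total_annual_payout:,.0f}")
```

Under these assumptions the fair premium comes out well under the $1/person/year figure above, which is why a $1 premium would be multiples of the actual expected cost.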

Why have anti-virus vendors not thought of this?  If your AV vendor said they would also insure you against Direct Losses (having your bank account cleaned out) for your $50/year subscription, would that differentiate them enough to win your business?

By all means, we should continue to work on the challenges of improving the security experience and reducing the risk of using computers.  More accurately, though, we should be reducing how much security users must experience at all, while still improving the security of their information and transactions.

Cures versus Treatment

A relevant tale of medical survival over at The Reality-Based Community:

Three years ago a 39-year-old American man arrived at the haematology clinic of Berlin’s sprawling Charité hospital. (The venerable Charité, one of the great names in the history of medicine, used to be in East Berlin, but it’s now the brand for the merged university hospitals of the whole city.) He had both leukaemia and HIV; you wouldn’t have given much for his chances. Now he has neither. How?

The great cancer researcher and medical writer Lewis Thomas wrote this in 1983 (The Youngest Science, endnote to page 175). The context is his stint as an adviser on health policy in Lyndon Johnson’s White House in 1967:

We recognised three levels of medical technology:

(1) genuine high technology, exemplified by Salk and Sabin poliomyelitis vaccines, which simply eliminated a major disease at very low cost by providing protection against the three strains of virus known to exist;

(2) “halfway” technology, applied to the management of disease when the underlying mechanism is not understood and when medicine is obliged to do whatever it can to shore things up and postpone incapacitation and death, at whatever cost, usually very high indeed, illustrated by open-heart surgery, coronary artery bypass, and the replacement of damaged organs by transplanting new ones…

and (3) nontechnology, the kind of things doctors do when there is nothing at all to be done, as in the case of patients with advanced cancer and senile dementia. We suggested that the rising cost of health care was resulting from efforts to treat diseases of the halfway or nontechnology class, and recommended that basic research on these ailments be sponsored by NIH.

Thomas’ analysis still looks spot on to me. But his optimism has so far not proved justified: the billions poured into medical research ever since have led to many improved treatments but disappointingly few cures. The ideal state for Big Pharma is represented by the state of the art on diabetes and HIV: costly lifelong treatments. For most lethal conditions, we don’t have even that.

Information Security also seems to be stuck in the “halfway” technology mode.  We treat the symptoms by patching and deploying security products to prolong survival, but as of yet, there is no cure.

In most organizations, it’s even worse.  Lack of basic knowledge and awareness, lack of funding, and/or misplaced risk tolerance produce more nontechnology, such as Business Acceptance of Risk forms and Security Dashboards where vanity metrics provide CYA.

Even when we know what the Right Things to reduce a risk are, whether turning off the TV, eating right and getting some exercise, or removing admin rights and keeping crapware off the machines, we as a society and as companies all too rarely seem to have the will to make it happen.

To twist a line from Dean Wormer, “Fat, dumb and pwned is no way to go through life, son.”

Death-related items

I’m cleaning out my pending link list with a couple of morbidly-thematic links.

Old-but-interesting (2007 vintage) list of relative likelihoods of death compared to dying in a terrorist attack.  For example…

You are 1048 times more likely to die from a car accident than from a terrorist attack

You are 12 times more likely to die from accidental suffocation in bed than from a terrorist attack

You are nine times more likely to choke to death on your own vomit than die in a terrorist attack

You are eight times more likely to be killed by a police officer than by a terrorist

I know that Jimi Hendrix might argue that the risk of death-by-choking-on-vomit cannot be overstated, but everybody gets disproportionately worked up about something.

Of course, given that death is inevitable (in the long run, anyway), Cory Doctorow challenges us with the question of what will happen to our crypto keys when we die?

What do you-all do with your cryptokeys? Keep ’em with a lawyer and hope that attorney-client privilege will protect them? Safe-deposit box? Friends? Under the mattress? Do you worry that if your friends have your keys, they can be subpoenaed or suborned?

I seriously don’t have a good answer to this question for my personal keys.  How about the rest of you?

(corrected spelling as noted in comments)

Green Dam

Update 26 June 2009: The status of Green Dam’s optionality is still up in the air.  See, for example, this news story on PC makers’ efforts to comply, which points out that

Under the order, which was given to manufacturers in May and publicly released in early June, producers are required to pre-install Green Dam or supply it on disc with every PC sold in China from July 1.

Last week, it appeared the government backed away from requiring compulsory installation by users, but manufacturers are still being required to provide the software.

I suspect that there will be at least one more update to this post before all is said and done.

Update 17 June 2009: Green Dam is now to be optional, but installed-by-default.

There’s a great deal of discussion in China right now about the new government-mandated “Green Dam” Internet filtering software that must be installed on all PCs in the People’s Republic of China:

Every PC in China could be at risk of being taken over by malicious hackers because of flaws in compulsory government software.

The potential faults were brought to light by Chinese computer experts who said the flaw could lead to a “large-scale disaster”.

The Chinese government has mandated that all computers in the country must have the screening software installed.

It is intended to filter out offensive material from the net.

I was in a taxi in Beijing a couple of days ago and the driver was listening to a call-in talk radio show whose topic was the software and its flaws and weaknesses.  My post, however, had to wait until I returned states-side, due to this ‘blog being blocked on all three of the connections from which I tried to access it while I was in China.

The consensus about this software among the locals that I spoke to is that it will be widely ignored, except in places like primary schools and some government offices.

There is so much to say about this, however, that I almost don’t know where to begin.  First, there is the issue of externalities.  The beneficiaries of this software are the government censors.  The cost, however, will be borne by those whose machines are rendered less stable, less secure, and less useful (due to the censoring).  This is the opposite of the theoretical goal of regulation–the transfer of externalities back onto their creators, not the other way around.

The results here may be even more toxic than observers currently realize, however.  By demanding compliance even when it does direct harm to those who must comply, the government undermines the loyalty of the citizenry and its own credibility.  It may only be one straw on the camel of Chinese citizens’ discontent, but eventually, there will be a straw that breaks the camel’s back.  This software has re-energized the domestic debate over the role of government censorship and whether their goal is to keep the populace safe or merely in-line.

Similarly, there is a lesson here for security and risk managers: policies must also be perceived as benefiting those they govern.  Corporations whose policies are too obviously unfair or which demonstrate contempt for employees produce similar disloyalty.  While it may not be immediately obvious in the current job market–people generally won’t quit in protest if they can’t find another job–that only makes the effect worse.  A grumbling workforce is an unproductive workforce.

Yes, we must achieve our goals, in my case protecting information, and the combination of reduced budgets and nervous employees makes it that much harder to achieve results.  But in times like these, we also need to tread more lightly than ever since the resisters of policy–those employees who are more likely to be a risk–are more likely to stay with us and undermine it from within.

So, as ever, when we are dealing with security, the mantra remains, “People, process and technology–in that order.” Any attempt to attack the problem otherwise frequently produces unintended–and often unwanted–consequences.


I don’t know what my employer’s corporate stance is going to be, but we have a significant white-collar presence in China, so we will probably be unable to ignore the problem.

When asked, I will argue that we already perform this filtering on our corporate proxy servers, but that does not change the fact that the government has created a huge externality for its population and for companies operating there, as part of a futile attempt to prevent Chinese citizens from viewing porn or dissident political commentary–not necessarily in that order, IMHO.

The Art of Living Dangerously

I haven’t had a chance to read it, but I’ll probably pick up “Absinthe and Flamethrowers: Projects and Ruminations on the Art of Living Dangerously” at some point, if only because the author’s writing on the relationship between risk and happiness says something I’ve always suspected: that risk takers are happier than risk avoiders:

Psychologists can assess and numerically describe a person’s risk-taking proclivity. Risk-taking behavior can be summarized as a single number from one to 100. A one is a house-bound agoraphobe and a 100 is a heroin junkie with a death wish. The distribution of risk-taking proclivity is described by a normal, bell-shaped curve. Not surprisingly, most people cluster around the mean score, as the graph shows.

But here’s the cool thing. I found that moderate, rational, risk takers, that is, those with scores between the mean and one standard deviation to the right are the people who are most satisfied with their lives. I call that area “the golden third” because it’s roughly 1/3 of the population. Studies (and there are several) show that people who take just a bit more risks than average, that is, those who live their lives in the golden third, tend to do better than average. They tend to be more satisfied with their lives and more fulfilled. To me, that’s a stunning conclusion.

Pirates, Inc.

I found this short documentary about piracy around the Strait of Malacca–an interesting view of the reality of pirate life as a last refuge of the unemployed fisherman–to be an interesting counterpoint to the NPR story, “Behind the Business Plan of Pirates, Inc.,” which provides an altogether different view of the economics of Somali piracy.

But the issues of criminality and the potential for violence aside, a closer look at the “business model” of piracy reveals that the plan makes economic sense.

A piracy operation begins, as with any other start-up business, with venture capital.

J. Peter Pham at James Madison University says piracy financiers are usually ethnic Somali businessmen who live outside the country and who typically call a relative in Somalia and suggest they launch a piracy business. The investor will offer $250,000 or more in seed money, while the relative goes shopping.

“You’ll need some speedboats; you’ll need some weapons; you also need some intelligence because you can’t troll the Indian Ocean, a million square miles, looking for merchant vessels,” says Pham, adding that the pirates also need food for the voyage — “a caterer.”

Yes, a caterer.

“Think of it as everything you would need to go into the cruise ship business,” Pham says. “Everything that you would need to run a cruise ship line, short of the entertainment, you need to run a piracy operation.”

The article goes on to describe all of the other ways in which modern-day piracy is like pretty much any other business–everything from timesheets to charts of accounts to contracts and professional negotiators/lawyers.

These two stories, in turn, highlight something that is consistently overlooked in discussions of “what to do about Criminal Enterprise X”: the fundamental economic drivers of crime, whether it is a physical and near-universally-agreed crime such as piracy on the high seas, or something much more abstract and disputed, such as electronic fraud or software, movie and music “piracy” on the Internet.
