Credit Scores and Deceptive Advertising

Frank Pasquale follows a Joe Nocera article on credit scores with a great roundup of issues that the credit system imposes on American citizens, including arbitrariness, discriminatory effects and self-fulfilling prophecies. His article is worth a look even if you think you understand credit scores.

I’d like to add one more danger of credit scores: deceptive advertising. The way it works is that a bank advertises a great rate for those with “perfect credit.” What it doesn’t advertise is what the curve of credit scores versus rates looks like. There are two issues here. The first is that the market is inefficient, as figuring out what actual rates are often involves talking to a human, and usually disclosing enough personal information to make a fraudster drool. Inefficient markets favor the side with more information (the lender) and lead to less trade than more transparent markets.
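To make the hidden curve concrete, here is a minimal sketch with purely hypothetical numbers (no real lender’s schedule is public, which is the point): a tiered rate table of the sort a bank might use internally while advertising only the top tier’s rate.

```python
# Hypothetical rate schedule -- every number here is invented for illustration.
# The advertised "as low as" rate applies only to the top tier.
RATE_BY_SCORE = [
    (760, 850, 0.0399),   # the headline "perfect credit" rate
    (700, 759, 0.0524),
    (660, 699, 0.0649),
    (620, 659, 0.0799),
    (300, 619, 0.1099),
]

def quoted_rate(score: int) -> float:
    """Return the (hypothetical) rate a borrower with this score is quoted."""
    for low, high, rate in RATE_BY_SCORE:
        if low <= score <= high:
            return rate
    raise ValueError(f"score {score} out of range")

if __name__ == "__main__":
    print(quoted_rate(780))  # 0.0399 -- the rate in the ad
    print(quoted_rate(650))  # 0.0799 -- a rate the ad never mentions
```

The point of the sketch is simply that the ad discloses one point on the curve, while the shape of the rest of the curve stays private to the lender.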

The second issue is that everyone is misled by the headline rate. I’ve looked for data on what fraction of Americans are listed as having “perfect credit,” or data on the distribution of interest rates people are really paying, and I’ve been unable to find either. For publicly traded companies, it’s sometimes possible to reverse engineer some of this, but not very much.

Hacker Hide and Seek

Core Security’s Ariel Waissbein has been building security games for a while now. They were kind enough to send a copy of their “Exploit” game after I released Elevation of Privilege. [Update: I had confused Ariel Futoransky and Ariel Waissbein, because Waissbein wrote the blog post. Sorry!] At Defcon, he and his colleagues will be running a more capture-the-flag sort of game, titled “Hide and seek the backdoor:”

For starters, a backdoor is said to be a piece of code intentionally added to a program to grant remote control of the program — or the host that runs it – to its author, that at the same time remains difficult to detect by anybody else.

But this last aspect of the definition actually limits its usefulness, as it implies that the validity of the backdoor’s existence is contingent upon the victim’s failure to detect it. It does not provide any clue at all into how to create or detect a backdoor successfully.

A few years ago, the CoreTex team did an internal experiment at Core and designed the Backdoor Hiding Game, which mimics the old game Dictionary. In this new game, the game master provides a description of the functionalities of a program, together with the setting where it runs, and the players must then develop programs that fulfill these functionalities and have a backdoor. The game master then mixes all these programs with one that he developed and has no backdoors, and gives these to the players. Then, the players must audit all the programs and pick the benign one.

First, I think this is great, and I look forward to seeing it. I do have some questions. What elements of the game can we evaluate and how? A general question we can ask is “Is the game for fun or to advance the state of the art?” (Both are ok and sometimes it’s unclear until knowledge emerges from the chaos of experimentation.) His blog states “We discovered many new hiding techniques,” which is awesome. Games that are fun and advance the state of the art are very hard to create. It’s a seriously cool achievement.

My next question is, how close is the game to the reality of secure software development? How can we transfer knowledge from one to the other? The rules seem to drive backdoors into most of the code: if every player’s backdoor works, (n-1) of the n programs contain one, so with eight players, seven of the eight programs are backdoored. That’s unlike reality, where the incidence of backdoors is far lower. I’m assuming that the code will all be custom, and thus short enough to create and audit in a game, which also leads to a higher concentration of backdoors per line of code. That different concentration will reward different techniques from those that would scale to a million lines of code.

More generally, do we know how to evaluate hiding techniques? Do hackers playing a game create the same sort of backdoors as disgruntled employees or industrial spies? Because of this contest and the Underhanded C Contests, we have two corpora of backdoored code. However, I’m not aware of any corpus of backdoors deployed in the wild against which we could compare them.
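To make the genre concrete, here is a toy sketch of my own (not taken from Core’s game or the Underhanded C corpus): a login check that reads like ordinary defensive code but quietly accepts an input only its author knows to try.

```python
# Toy illustration of a hidden backdoor -- invented for this post, not taken
# from any real corpus. The function reads as "reject unknown users, then
# compare passwords," but Python's operator precedence makes the expression
# (known and match) OR (unknown), so any username absent from the table is
# accepted with any password at all.
def authenticate(username: str, password: str, users: dict) -> bool:
    expected = users.get(username)
    return expected is not None and password == expected or expected is None

if __name__ == "__main__":
    users = {"alice": "s3cret"}
    print(authenticate("alice", "wrong", users))      # False -- looks correct
    print(authenticate("nobody", "anything", users))  # True -- the backdoor
```

There is no suspicious string or network call for an auditor to grep for; the flaw hides in operator precedence, which is exactly the kind of hiding technique a game like this is meant to surface.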

So anyway, I look forward to seeing this game at Defcon, and in the future, more serious games for information security.

Previously, I’ve blogged about the Underhanded C contest here and here.

SOUPS Keynote & Slides

This week, the annual Symposium on Usable Privacy and Security (SOUPS) is being held on the Microsoft campus. I delivered a keynote, entitled “Engineers Are People Too:”

In “Engineers Are People, Too” Adam Shostack will address an often invisible link in the chain between research on usable security and privacy and delivering that usability: the engineer. All too often, engineers are assumed to have infinite time and skills for usability testing and iteration. They have time to read papers, adapt research ideas to the specifics of their product, and still ship cool new features. This talk will bring together lessons from enabling Microsoft’s thousands of engineers to threat model effectively, share some new approaches to engineering security usability, and propose new directions for research.

A fair number of people have asked for the slides, and they’re here: Engineers Are People Too.

Survey Results

First, thanks to everyone who took the unscientific, perhaps poorly worded survey. I appreciate you taking time to help out.  I especially appreciate the feedback from the person who took the time to write in:

“Learn the proper definition of “Control Systems” as in, Distributed Control Systems or Industrial Control systems. These are the places that need real security, not some bullshit enterprise network.”

You, sir or madam, are chock full of rock and roll.  Thanks for cheering me up.

Next, the results were:

Daily = 6
A few times a month = 2
A few times a quarter = 1
Less than a few times a quarter = 10
Never = 43

and the chart looks something like this:
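A minimal matplotlib sketch (mine, not whatever produced the original image) that turns the counts above into a bar chart:

```python
import matplotlib.pyplot as plt

# Survey counts from the list above.
labels = ["Daily", "A few times\na month", "A few times\na quarter",
          "Less than a few\ntimes a quarter", "Never"]
counts = [6, 2, 1, 10, 43]

plt.figure(figsize=(7, 4))
plt.bar(labels, counts)
plt.ylabel("Responses")
plt.title("How often does GRC analysis impact actual OpSec?")
plt.tight_layout()
plt.show()
```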

UPDATE: Jeff Lowder asked me to clarify this a bit. I’ll start by re-iterating that this was not really a proper survey, but akin to asking a handful of friends (the survey’s existence was announced here, on Twitter, and to a couple of security-centric mailing lists). As such, don’t get all bent out of shape about it.

I was interested in the question “how often does GRC analysis impact actual OpSec?” and decided that frequency of interaction would be a pretty good bellwether. The question (and the results, with proper caveats) was part of the presentation Allison Miller and I gave at Black Hat. More on that presentation in a while, btw.

A Blizzard of Real Privacy Stories

Over the last week, there’s been a set of entertaining stories around Blizzard’s World of Warcraft games and forums. First, “World of Warcraft maker to end anonymous forum logins,” in a bid to make the forums less vitriolic:

Mr Brand said that one Blizzard employee posted his real name on the forums, saying that there was no risk to users, and the experiment went drastically wrong. “Within five minutes, users had got hold of his telephone number, home address, photographs of him and a ton of other information,” said Mr Brand.

The customers apparently really liked their privacy, and “Blizzard backs off real-name forum mandate.” Which, you’d think, would end the uproar. But no. This morning, “Gamers Who Complained About Blizzard’s Forum Privacy See Email Addresses Leaked” by the Entertainment Software Rating Board. Interestingly, the ESRB Online Privacy Policy is one of the few that does not start “your privacy is important to us.” Who knew that line was important?

The key lesson is that your customers think about identity differently than you do, and trying to add it to a system is fraught with risk. (Don’t even get me started on the jargon “identity provider.”)