MySpace sells for $35 Million, Facebook to follow

So MySpace sold for $35 million, which is nice for a startup, and pretty poor for a company on which Rupert Murdoch spent a billion dollars.

I think this is the way of centralized social network software. The best of them learn from their predecessors, but inevitably end up overcrowded. Social spaces change. You don’t hang out at the same bar you hung out at in college, and you won’t use the same social networks. Specialized networks like LinkedIn will likely fare better, as long as they stay focused on a core mission.

Ezra Klein says “killer app of Google+ is the ability to start your social network over w/benefit of years of Facebook experience.” I hate to say it, but that doesn’t strike me as a killer app the way Lotus 1-2-3 was.

Phil Windley says “just realized G+ is using asymmetric follow.” I think this is right and important. “Friend” relationships are rarely perfect mirrors of each other, and the asymmetric follow pattern in software is closer to the human patterns of friendship, respect and fandom.
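
To make the distinction concrete, here’s a minimal sketch (mine, not Google’s or anyone’s actual implementation) of the two relationship models: a symmetric “friend” is really a pair of directed edges, while a follow is a single directed edge that needs no reciprocation.

```python
# A sketch (not Google's implementation) contrasting the two models.
# A symmetric "friend" only exists when both directed edges do;
# an asymmetric "follow" is a single directed edge, which maps more
# naturally onto fandom and one-way respect.

class SocialGraph:
    def __init__(self):
        self.following = {}  # user -> set of users they follow

    def follow(self, follower, followee):
        self.following.setdefault(follower, set()).add(followee)

    def follows(self, a, b):
        return b in self.following.get(a, set())

    def is_mutual(self, a, b):
        # A symmetric "friendship" is just two follows, one in each direction.
        return self.follows(a, b) and self.follows(b, a)

g = SocialGraph()
g.follow("alice", "bob")             # Alice is a fan of Bob
print(g.is_mutual("alice", "bob"))   # False: Bob hasn't followed back
g.follow("bob", "alice")
print(g.is_mutual("alice", "bob"))   # True: now it's a mutual relationship
```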

I suspect that Google has gone further, and consciously built on those patterns with friend, family, and acquaintance circles. That’s cool; it’s an obvious outgrowth of Flickr’s default circles of friends and family, and it adds the ability to make new circles easily.

So what does this mean for you?

First, it’s time to start thinking about leaving Facebook. Get your social network back in email where it belongs. Start trying to get your data out of Facebook’s databases before everything about you sells for pennies on the dollar.

If you’re a product manager for one of these things, you’re building on the happy dopamine releases we all get when we get positive social feedback. (That’s why Facebook only has a “Like” button.) You need to realize that the dopamine-release cycle requires bigger and bigger hits of wuffie over time. And the grimaces and hesitations add up. People remember the negatives for a long time. So the bad graph builds, and over time the happy graph drops away, and with it your eyeballs, minutes, options and stock options.

So finally, enjoy it while you can, Zuck.

Breach Harm: Should Arizona be required to notify?

Over at the Office of Inadequate Security, Pogo was writing about the Lulzsec hacking of the Arizona State Police. Her article is “A breach that crosses the line?”

I’ve been blogging for years about the dangers of breaches. I am concerned about dissidents who might be jailed or killed for their political views, abortion doctors whose lives are endangered from fringe elements, women who have tried to escape abusive spouses, porn actors whose families may be harassed by the publication of their names and addresses, confidential informants and law enforcement officers, and immigrants whose personal information was illegally revealed to law enforcement and to media by the actions of Utah state employees. All of those people have been put at risk of physical harm as a result of data breaches.

To date, what we know to have been taken from Arizona’s (apparently) insufficiently secured systems is the names and addresses of people who have good reason to think they’re in danger from the release of that information.

I want to talk about four major risks here: the risk of harm, the risk of attributing all of that risk to Lulzsec, the risk of cover-up, and the risk of believing our analyses are complete.

The first risk, the risk of harm, Pogo covers fairly well. I have a cousin who works in a correctional facility. Their house, their phone, their cable: all of these are listed in the wife’s name, and I understand the fear of knowing that a real criminal thinks you’re at fault and knows where your family lives. I bring this up because it’s my family too, and that matters because I’m about to discuss the apportionment of blame, and I want to be clear that I’m doing so with some skin in the game.

The second risk is the risk of attributing all of the responsibility to Lulzsec. Some of the fault here is that of the State of Arizona Department of Public Safety (AZDPS). AZDPS made a decision to collect information, and it had a responsibility to protect it. AZDPS also made a decision to store that information in electronic form, a decision to store that electronic information in an internet-accessible fashion, and decisions about computer security which, in hindsight, are likely being reconsidered. However elite the ninjas of Lulzsec may or may not have been, however many laser-eyed sharks they might have employed, if the information had been stored only on paper in a locked room in Arizona, it would have been far more secure. And if Lulzsec could break in, others have potentially already broken in and stolen the data for purposes far more dangerous than embarrassing AZDPS. AZDPS is not unique in this set of choices. The organization reaps lots of benefits from putting the data online, and many of those benefits, such as speed and efficiency, are probably shared with employees, customers or citizens. All that said, Lulzsec did increase the risk by making the data widely available to anyone. (They also marginally decreased the risk by making people aware it’s out there, but the net risk is still increased.)

The third risk is the risk of cover-up. AZDPS is one of many organizations that collect information today. Like most of those organizations, AZDPS makes some investments in security to protect the data. I suspect that they make more investments than many others, since they know about the sensitivity of the data and the many motivated attackers. Interestingly, their policy states that “Security methods and measures have been integrated into the design, implementation and day-to-day practices of the entire Azdps.gov web portal” (AZDPS Privacy Policy as of January 4, 2010, via the Wayback Machine), which strikes me as a mature statement compared to the common “we follow industry-leading best practices in buying a firewall.” Most organizations that are hacked are not hacked by Lulzsec, and so may choose to cover up. AZDPS should investigate what went wrong, and share their analysis so others can learn from them.

The final risk is the risk of believing our analysis is complete. Much like I pointed out in “How the Epsilon Breach Hurts Consumers,” it’s easy to arrive at an analysis which misses important elements because the investigators have a defined scope. They are more likely to talk to those close to the system, and thus will be influenced by those people’s perspectives and orientation. When information about breaches is shared, different perspectives can emerge from a chaotic discussion. This is a perspective deeply influenced by Hayek. Unlike markets, information security lacks a pricing mechanism to help us bring all of the perspectives into a single sharp focus. It’s hard to add security and see what people will pay for it, and we lack good information about the inputs that led to breaches or other outcomes. Without that information, it’s hard to know what security is cost-effective, or appropriate in light of the duties that an information collector takes on by collecting data.

So to bring this together around those risks, the people whose data was exposed (first risk) were exposed in part because most organizations never issue a good report on what went wrong (the third risk) and so the choices made in collecting and storing data are made in an information vacuum (the second risk).

And so the Arizona DPS should take their public safety mission seriously. They should perform a deep investigation of what went wrong, and they should share it with the citizens of Arizona and with people around the world. If they do so, and their counterparts do so, we’ll all be able to learn from each other’s mistakes, and we’ll all be able to, in that hated phrase, “do more with less.”

That’s how public entities that operate with data about citizens should operate, and, in my personal opinion, how they ought to be required to operate.

Goodbye, Rinderpest, we're probably better off without you

On Tuesday in a ceremony in Rome, the United Nations is officially declaring that for only the second time in history, a disease has been wiped off the face of the earth.

The disease is rinderpest.

Everyone has heard of smallpox. Very few have heard of the runner-up.

That’s because rinderpest is an epizootic, an animal disease. The name means “cattle plague” in German, and it is a relative of the measles virus that infects cloven-hoofed beasts, including cattle, buffaloes, large antelopes and deer, pigs and warthogs, even giraffes and wildebeests. The most virulent strains killed 95 percent of the herds they attacked.

But rinderpest is hardly irrelevant to humans. It has been blamed for speeding the fall of the Roman Empire, aiding the conquests of Genghis Khan and hindering those of Charlemagne, opening the way for the French and Russian Revolutions, and subjugating East Africa to colonization.

(“Rinderpest, Scourge of Cattle, Is Vanquished,” New York Times)

The full article is fascinating, and worth reading.

Sex, Lies & Cybercrime Surveys: Getting to Action

My colleagues Dinei Florencio and Cormac Herley have a new paper out, “Sex, Lies and Cyber-crime Surveys.”

Our assessment of the quality of cyber-crime surveys is harsh: they are so compromised and biased that no faith whatever can be placed in their findings. We are not alone in this judgement. Most research teams who have looked at the survey data on cyber-crime have reached similarly negative conclusions.

In the book, Andrew and I wrote “today’s security surveys have too many flaws to be useful as sources of evidence.” Dinei and Cormac were kind enough to cite that, saving me the trouble of looking it up.

I wanted to try here to carve out, perhaps, a small exception. I think of surveys as coming in two main types: surveys of things people know, and surveys of what they think. Both have the potential to be useful (although read the paper for a long list of ways in which they can be problematic).

So there are surveys of things people know. For example, what’s your budget, or how many people do you employ? There are people in an organization who know those things, and, starved as we are for knowledge, perhaps those answers would be useful to have. So maybe a survey makes sense.

But how many people Microsoft employs in security probably doesn’t matter to you. And the average of how many people Boeing, State Farm, Microsoft, Archer Daniels Midland, and Johnson & Johnson employ in security is even less useful. (Neighbors on the Fortune 500 list.) So even in the space that we might want to defend surveys, they’re not that useful.
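
To illustrate with invented numbers (these are not the real headcounts of any particular company), the average across such different organizations describes none of them:

```python
# Invented headcounts -- not real data about any of these companies.
security_headcount = {
    "aerospace firm": 900,
    "insurer": 300,
    "software company": 1200,
    "agribusiness": 60,
    "consumer goods firm": 250,
}

mean = sum(security_headcount.values()) / len(security_headcount)
print(f"mean security headcount: {mean:.0f}")
# 542 -- a number that is close to none of the actual values,
# and actionable for none of the organizations surveyed.
```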

So our desire for surveys is really evidence of how starved we are for data about outcomes and data about efficacy. We’re like the drunk looking for keys under the lamppost, not because we think the keys are there, but because there’s at least a little light.

So the next time someone shows you a survey, don’t even bother asking them what action they expect you to take, what decision they expect you to alter, or why you should accept what it says as an argument for that choice.

Rather, ask to see the section titled “How we overcame the issues that Dinei and Cormac talked about.” It’ll save everyone a bunch of time.

Communicating with Executives for more than Lulz

On Friday, I ranted a bit about “Are Lulz our best practice?” The biggest pushback I heard was that management doesn’t listen, or doesn’t make decisions in the best interests of the company. I think there’s a lot going on there, and want to unpack it.

First, a quick model of getting executives to do what you want. I’ll grossly oversimplify it to three ordered parts:

  1. You need a goal. Some decision you think is in the best interests of the organization, and reasons you think that’s the case.
  2. You need a way to communicate about the goal and the supporting facts and arguments.
  3. You need management who will make decisions in the best interests of the organization.

The essence of my argument on Friday is that 1 & 2 are often missing or under-supported. Either the decisions are too expensive given the normal (not outlying) costs of a breach, or the communication is not convincing.

I don’t dispute that there are executives who make decisions in a way that’s intended to enrich themselves at the expense of shareholders, that many organizations do a poor job setting incentives for their executives, or that there are foolish executives who make bad decisions. But that’s a flaw in step 3, and to worry about it, we have to first succeed at 1 & 2. If you work for an organization with bad executives, you can (essentially) either get used to it or quit. For everyone in information security, bad executives are the folks above you, and you’re unlikely to change them. (This is an intentional over-simplification to let me get to the real point. Don’t get all tied in knots, k’thanks.)

Let me expand on insufficient facts, the best interests of the organization and insufficient communication.

Sufficient facts mean that you have the data you need to convince an impartial, or even a somewhat partial, world that there’s a risk tradeoff worth making: that if you invest in A over B, the expected cost to the organization will fall. And if B is an investment in raising revenue, then the losses that A prevents need to be likely enough, and large enough, that they outweigh the revenue you forgo by choosing A over B. Insufficient facts is a description of what happens because we keep most security problems secret. It happens in several ways, prominent amongst them that we can’t really do a good job of calculating probabilities or losses, and that we have distorted views of those probabilities and losses.
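
Here’s a minimal sketch of that tradeoff with made-up numbers; none of these figures come from a real organization, and the point is only that the comparison stands or falls on probability and loss estimates we mostly don’t have:

```python
# Made-up numbers for the A-versus-B tradeoff described above. The structure
# is simple; the hard part in practice is that secrecy leaves us without
# good estimates for the probabilities and losses.

def expected_loss(probability, impact):
    return probability * impact

p_loss_without_A = 0.05        # annual odds of the loss event with no investment
p_loss_with_A = 0.02           # annual odds after investing in security control A
loss_if_event = 2_000_000      # estimated cost if the event happens
cost_of_A = 150_000            # cost of the security investment
expected_gain_from_B = 100_000 # expected revenue from the alternative project B

risk_reduction = (
    expected_loss(p_loss_without_A, loss_if_event)
    - expected_loss(p_loss_with_A, loss_if_event)
)
net_case_for_A = risk_reduction - cost_of_A - expected_gain_from_B

print(f"expected loss avoided by A: ${risk_reduction:,.0f}")   # $60,000
print(f"net case for A over B:      ${net_case_for_A:,.0f}")   # negative: B wins
```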

Now, one commenter, “hrbrmstr”, said: “I can’t tell you how much a certain security executive may have tried to communicate the real threat actor profile (including likelihood & frequency of threat action)…” And I’ll say, I’m really curious how anyone is calculating frequency of threat action. What’s the numerator and what’s the denominator in the calculation? I ask not because it’s impossible to answer (although it may be quite hard to do in a blog comment) but because the “right” values to use are subject to discussion and interpretation. Is it all companies in a given country? All companies in a sector? All attacks? Including port-scans? Do you have solid reasons to believe something is really in the best interests of the organization? Do those reasons stand up to cross-examination? (Incidentally, this is a short form of an argument that we make in chapter 4 of The New School of Information Security, the book which inspired this blog.)
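
As a toy illustration (all counts hypothetical), the same number of observed incidents yields wildly different “frequencies” depending on which denominator you pick, and the numerator is just as contestable:

```python
# Hypothetical counts only. The same numerator gives very different
# "frequencies of threat action" depending on the denominator chosen,
# and the numerator (do port-scans count?) is just as debatable.

incidents_observed = 40  # hypothetical incidents of this type in a year

denominators = {
    "all companies in the country": 6_000_000,
    "all companies in the sector": 25_000,
    "companies of similar size and profile": 500,
}

for population, size in denominators.items():
    rate = incidents_observed / size
    print(f"{population}: {rate:.4%} annual frequency")
```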

I’m not saying whether hrbrmstr has the right facts or not. I’m saying that it’s important to have them, and to be able to communicate about why they’re the right facts. That communication must include listening to objections that they’re not the right ones, and addressing those objections. (Again, assuming a certain level of competence in management. See above about accept or quit.)

Shifting to insufficient communication, this is what I meant by the lulzy statement “We’re being out-communicated by people who can’t spell.” Communication is a two-way street. It involves (amongst many other things) formulating arguments that are designed to be understood, and actively listening to objections and questions raised.

Another commenter, “Hmmm” said, “I’ve seen instances where a breach occurred, the cause was identified, a workable solution proposed and OK’d… and months or years later a simple configuration change to fix the issue is still not on the implementation schedule.”

There are two ways I can interpret this. The first is that “Hmmm’s” idea of simple isn’t really simple (insofar as it breaks something else). Perhaps fixing the breach is as cheap and easy as fixing the configuration, but there are other, higher-impact things on the configuration management to-do list. I don’t know how long that implementation schedule is, nor how long he’s been waiting. The second is that his management went to clown school, not MBA school. I have no way to tell.

What I do know is that often the security professionals I’ve worked with don’t engage in active listening. They believe their path is the right one, and when issues like competing activities in configuration management are brought up, they dismiss the issue and the person who raised it. And you might be right to dismiss it. But does doing so help you achieve your goal?

Feel free to call me a management apologist, if that’s easier than learning how to get stuff done in your organization.

Are Lulz our best practice?

Over at Risky.biz, Patrick Gray has an entertaining and thought-provoking article, “Why we secretly love LulzSec”:

LulzSec is running around pummelling some of the world’s most powerful organisations into the ground… for laughs! For lulz! For shits and giggles! Surely that tells you what you need to know about computer security: there isn’t any.

And I have to admit, I’m taking a certain amount of pleasure in watching LulzSec. Whoever’s doing it is actually entertaining, when they’re not breaking the law. And even sometimes when they are. But at those times, they’re hurting folks, so it’s a little harder to chortle along.

Now Patrick’s argument is in the close, and I don’t want to ruin it, but I will:

So why do we like LulzSec?

“I told you so.”

That’s why.

The essence of this argument is that we in security have been telling management for a long time that things are broken, and we’ve been ignored. We poor, selfless martyrs. If only we’d been given the budget, we would have implemented a COBIT ISO27001 best practices program of making users leap through flaming hoops before they got their job done, and none of this would ever have happened. We here in the business of defending our organizations would love to have been effective, except we weren’t, and now we’re mother-freaking cheering a bunch of kids who can’t even spell LOL? Really? I told you so? Is that the best that we as a community will do?

Apparently.

We’re being out-communicated by folks who can’t spell.

Why are we being out-communicated? Because we expect management to learn to understand us, rather than framing problems in terms that matter to them. We come in talking about 0days, whale pharts, cross-site request jacking and a whole alphabet soup of things whose impact to the business is so crystal-clear obvious that it goes without saying.

And why are we being out-communicated? Because every time there’s a breach, we cover it up. We claim it wasn’t so bad. Or maybe that the poor, hapless American citizen will get tired of hearing about the breaches. And so we’re left with the Lulz crowd breaking and entering for shits and giggles to demonstrate that there are challenges in making things secure.

I don’t mean to sound like a broken record, but maybe we should start talking openly about breaches instead. Maybe then, we’d get somewhere without needing to see Sony, PBS, and InfraGard attacked. Heck, maybe if we talked about breaches, one or more of those organizations would have learned from the pain of others.

Nah.

Let’s just wait for “the world’s leaders in high-quality entertainment at your expense” to let us say I told you so.

It sure is easier than admitting our communications were sub-par.

[Thanks for the many good comments! I’ve written a follow-up post on the topic of communication, “Communicating with Executives for more than Lulz.”]

How the Epsilon Breach Hurts Consumers

Yesterday, Epsilon and Sony testified before Congress about their recent security troubles. There was a predictable hue and cry that the Epsilon breach didn’t really hurt anyone, and that there was no reason for them to have to disclose it. Much of that came from otherwise respectable security experts. Before I go on, let me give kudos to Epsilon for coming clean, because, in fact, the breach does hurt me. I want to explain both how it hurts me, and how covering it up would compound that harm. To understand, let me explain part of…

How I protect myself against phishing

I do a variety of things to protect myself from phishing attacks, including bookmarking my banking web sites, and setting up special email addresses that are only given to a single business. For example, Capital One (an Epsilon customer) might think that my email is cap1-814406d6fa52c5317aa@example.com. Now, any email that comes to that address has some special properties. I know that either it came from the expected sender, or there’s been a breach of confidentiality at the sender, the intervening network, or on my systems. In that sense, these addresses are honeytokens. In many instances, the entities I’m working with use opportunistic TLS for email, and so I can be confident that it’s not passive sniffing of the intervening network. Now, since I have technology that makes it easy to see if an email went to the right address, I can ignore most emails claiming to be from financial institutions. I save a great deal of time and energy that way. But for the emails that come into the special mailboxes, I can also save time by going with the assumption that they’re ok, because this defense tends to work.
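
Here’s a minimal sketch of that scheme; the derivation below is one illustrative way to generate hard-to-guess tags, not a description of my actual setup, and the secret and domain are placeholders.

```python
# A sketch of per-sender addresses as honeytokens. The derivation below is
# one illustrative way to generate hard-to-guess tags; it is not my actual
# setup, and the secret and domain are placeholders.

import hmac
import hashlib

SECRET = b"a-long-random-secret-only-I-know"  # placeholder
DOMAIN = "example.com"

def address_for(institution: str) -> str:
    """Derive a stable, hard-to-guess mailbox tag for one institution."""
    tag = hmac.new(SECRET, institution.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{institution}-{tag}@{DOMAIN}"

def expected_mailbox(institution: str, to_address: str) -> bool:
    """Mail claiming to be from `institution` should arrive at its own tag."""
    return to_address == address_for(institution)

cap1 = address_for("cap1")                 # give this address only to that bank
print(cap1)
print(expected_mailbox("cap1", cap1))      # True: arrived where expected
print(expected_mailbox("cap1", "cap1-deadbeef@example.com"))  # False: wrong tag
```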

How the breach hurts me

Since Epsilon was good enough to bring up the breach, and Capital One was good enough to contact their customers, I’m aware that my defenses are less strong than they otherwise would be. I have to expend more energy than I otherwise would have reading URLs in messages in that folder. If the breach had been concealed, then I would be naively vulnerable. I would be vulnerable because respectable experts hadn’t thought about this scenario, and had naively decided that I didn’t need to know about the incident.

The Limits of Expertise

This is one example of the limits of experts’ ability to understand the impact of breaches on consumers. There are doubtless others, and we should be willing to question the limits of our expertise to fully understand the impacts of breaches on everyone. We should also question our expertise when deciding what’s best for others.

Breach Fatigue?

Now, some people will argue that there’s “breach fatigue”, and that that means we should select for others which incidents they’ll hear about and which they won’t. While I agree that there’s breach fatigue, that’s a weak argument in a free society. People should be able to learn about incidents which may have an effect on them, so that they (and I) can make good risk management decisions. We don’t argue against telling people that there’s lead paint on Chinese toys even though much of the damage will already have been done by the paint that’s flaked off. We don’t argue against telling the public about stock trades by insiders even though only a few experts look at it. We as a free society encourage the free flow of information, knowing that it will be put to a plethora of uses.

This is just one of the many reasons why I support broad breach notification.


ThreatPost goes New School

In “It’s Time to Start Sharing Attack Details,” Dennis Fisher says:

With not even half of the year gone, 2011 is becoming perhaps the ugliest year on record for major attacks, breaches and incidents. Lockheed Martin, one of the larger suppliers of technology and weapons systems to the federal government, has become the latest high-profile target of a serious attack, and while such incidents are bad news indeed for the victims, they may serve a vital purpose in forcing companies to disclose more data about breaches and attacks.

I’m glad to see that the data sharing message is spreading, and I look forward to seeing RSA and Lockheed releasing VERIS or CAPEC-coded descriptions of what happened.
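
For illustration, here’s roughly what such a structured description might look like; the field names below follow the spirit of VERIS’s actor/action/asset/attribute framing but are illustrative, not a validated instance of the schema.

```python
# A rough, illustrative sketch of a structured incident description in the
# spirit of VERIS's actor/action/asset/attribute framing. Field names and
# values here are illustrative, not a validated instance of the VERIS schema.

incident = {
    "actor":     {"external": {"variety": ["activist"], "motive": ["ideology"]}},
    "action":    {"hacking": {"variety": ["SQLi"], "vector": ["web application"]}},
    "asset":     {"affected": ["web application", "database server"]},
    "attribute": {"confidentiality": {"data_disclosed": ["credentials", "email addresses"]}},
    "timeline":  {"year": 2011},
    "discovery": "attacker disclosure",
}

# Shared in a structured form like this, incidents from different victims
# can be aggregated and compared, which is the point of sharing.
for section, detail in incident.items():
    print(f"{section}: {detail}")
```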