Taxpayers Stuck With Tab, but not in Seattle

In an article with absolutely no relevance for Seattle, the New York Times reports “With No Vote, Taxpayers Stuck With Tab on Bonds.” In another story to which Seattle residents should pay no attention, the city of Stockton is voting to declare bankruptcy after risking taxpayer money on things like a … sports arena.

Of course, in Seattle, blah blah it’ll be so profitable, that it’ll make us a world class city while unlocking a stream of buzzwords and nonsense.

No, really. That seems to be the level of public discourse right now. The taxpayers of the region are being asked to pony up as much as 400 million bucks to help a hedge fund manager offload risk. That strikes me as doubly unwise. First, there are lots of better ways we could allocate a possible $400 million of spending. Second, when making a deal with a hedge fund manager to take on risk, you should look for the sucker in the deal. It’s unlikely to be the hedge fund.

Will People Ever Pay for Privacy, Part XVI

Every now and then, a headline helps us see the answer to the question “Will people ever pay for privacy?”

[Screenshot: a New York Times headline, “33 Acres of Privacy for $65 Million,” above a photo of a sprawling house on a huge estate]

Quoth the Paper of record:

The seclusion may be the biggest selling point of the estate belonging to Robert Hurst, a former executive at Goldman Sachs, which was just listed by Debbie Loeffler of the Corcoran Group for $65 million.

There’s more in the article.

A flame about flame

CNET ran a truly ridiculous article last week titled
“Flame can sabotage computers by deleting files, says Symantec”. And if that’s not goofy enough, the post opens with

The virus can not only steal data but disrupt computers by removing critical files, says a Symantec researcher.

ZOMG! A virus that deletes files! Now that is cutting edge technology! It’s shit articles like this that reify the belief that the security industry in general, and the AV industry in particular, is filled with people who are completely out of touch with the rest of the world.

“These guys have the capability to delete everything on the computer,” Thakur said, according to Reuters. “This is not something that is theoretical. It is absolutely there.”

ProTip to Symantec and Reuters: viruses have been doing this since at least the 80s. Are you really so desperate for yet another story that this is the sort of thing you feel is worthy of a press release and a news article? How about saving that time and effort and instead focusing on making a product that works better?

Breach Notification in France

Over at the Proskauer blog, Cecile Martin writes “Is data breach notification compulsory under French law?”

On May 28th, the Commission nationale de l’informatique et des libertés (“CNIL”), the French authority responsible for data privacy, published guidance on breach notification law affecting electronic communications service providers. The guidance was issued with reference to European Directive 2002/58/EC, the e-Privacy Directive, which imposes specific breach notification requirements on electronic communication service providers.

In France, all data breaches that affect electronic communication service providers need to be reported [to CNIL], regardless of the severity. Once there is a data breach, service providers must immediately send written notification to CNIL, stating the following…

This creates a fascinating data set at CNIL. I hope that they’ll operate with a spirit of transparency, and produce in depth analysis of the causes of breaches and the efficacy of the defensive measures that companies employ.

Active Defense: Show me the Money!

Over the last few days, there have been a lot of folks in my twitter feed talking about “active defense.” Since I can’t compress this into 140 characters, I wanted to comment quickly: show me the money. And if you can’t show me the money, show me the data.

First, I’m unsure what’s actually meant by active defense. Do the folks arguing have a rough consensus on what’s in and what’s out? If not, a definition (or more than one) would be useful, just so others can follow the argument.

So anyway, my questions:

  1. Do organizations that engage in Active Defense suffer fewer incidents than those who don’t?
  2. Do organizations that engage in Active Defense see smaller cost-per-incident when using it than when not? (or in comparison to other orgs?)
  3. How much does an Active Defense program cost?
  4. Is Active Defense a lower-cost way to achieve the outcomes in 1 and 2 than the alternatives?
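Those questions boil down to an expected-cost comparison. Here is a minimal sketch, with entirely made-up figures, since the point of the questions is that nobody has published the real ones:

```python
def expected_annual_cost(incidents_per_year, cost_per_incident,
                         program_cost=0.0):
    # Hypothetical model: expected loss from incidents plus the
    # cost of running the program. All numbers below are invented
    # purely to show the shape of the comparison.
    return incidents_per_year * cost_per_incident + program_cost

# Without an Active Defense program (made-up figures):
baseline = expected_annual_cost(10, 50_000)          # 500000.0
# With one, assuming it halves incidents and costs 200k/year:
with_ad = expected_annual_cost(5, 50_000, 200_000)   # 450000.0
print(baseline, with_ad)
```

The sketch only pays off if someone can supply defensible inputs, which is exactly what I’m asking advocates to show.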

I’m sure some of the folks advocating active defense in this age of SEC-mandated incident disclosure can point to incidents, impacts and outcomes.

I look forward to learning more about this important subject.

Age and Perversity in Computer Security

I’ve observed a phenomenon in computer security: when you want something to be easy, it’s hard, and when you want the same thing to be hard, it’s easy. For example, hard drives fail seemingly at random, and it’s hard to recover their data. Yet when you want to destroy the data, that turns out to be surprisingly hard too.

I call this my law of perversity in computer security.

Today, Kashmir Hill brings a great example in “So which is it?”

[Screenshot: two contradictory headlines about privacy online]

Contradiction much? When it comes to the state of online privacy, the media tend to send mixed messages, but this is one of the more extreme examples I’ve seen.

It’s just perverse: identifying people online is hard when someone wants to rely on it to protect kids, but easy (for marketing firms) when we’d prefer to remain private.

Future of Privacy Seeks Input

The Future of Privacy Forum (FPF) is an interesting mix of folks trying to help shape, well, the future of privacy. They have a blend of academic and industry support, and a fair amount of influence. They’re inviting authors with an interest in privacy issues to submit papers to be considered for FPF’s third edition of Privacy Papers for Policy Makers.

The selected papers will be distributed to policy makers in Congress, federal agencies and data protection authorities internationally.

The goals, from FPF’s call for papers:

• To highlight important research and analytical work on a variety of privacy topics for policy makers
• Specifically, to showcase papers that analyze current and emerging privacy issues and either propose achievable short-term solutions, or propose new means of analysis that could lead to solutions.

For more info, see FPF’s call for papers.

In the Spirit of Feynman

Did you notice exactly how much of my post on Cloudflare was confirmation bias?

Here, let me walk you through it.

In our continuing series of disclosure doesn’t hurt,

Continuing series are always dangerous, doubly so on blogs.

I wanted to point out Cloudflare’s “Post Mortem: Today’s Attack; Apparent Google Apps/Gmail Vulnerability; and How to Protect Yourself.”

See, I even own up to the bias here: “I wanted to point out.” Not “here’s my analysis,” not “here’s a list of facts that we can gather,” but “here’s what I want to do.”

So some takeaway actions:

Why these actions? Just because I’m an expert who’s been arrested by stormtroopers? That’s not a reason to listen to someone. (On the other hand, you should apply great skepticism to anyone who hasn’t been arrested by stormtroopers. Not because that arrest matters, but because you’ll end up a lot more skeptical, which is a good idea in general.)

Your takeaway from this post should not be to unsubscribe, but rather to apply the spirit of skepticism, to ask why, and to read a little bit more critically. Those techniques serve us well in every field we apply them to. We’ve tested them over and over, and found that they move fields forward. We can’t blindly expect the same in infosec. But we can reasonably think that a more scientific approach to our problems, including disclosing them and discussing them, will move us forward.

Thanks to Phil Venables for giving me a perfect Richard Feynman essay on which to talk about my own confirmation bias.

I invite you to look for such biases in your own work, and to talk about it. Admitting mistakes helps us learn from them.

Mozilla's Vegan BBQ

The fine folks at Mozilla have announced that they’ll be hosting a BBQ in Dallas to thank all their supporters. And the cool thing about that BBQ is it’s gonna be vegan by default. You know, vegan. No animal products. It’s good for you. It’s the right default. They’ll have dead cow burgers, but you’ll have to find the special line.

Obviously, I’m just kidding. Mozilla isn’t hosting a vegan BBQ in Dallas, but their choice of default for the “Do Not Track” (DNT) setting is the browsing-privacy equivalent.

Poll after poll shows that people around the world prefer privacy, in the same sort of way they prefer cow burgers. This preference is stable, extends back decades, and shows up in nearly every poll. So why is Mozilla defaulting to not setting DNT?
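For a sense of how small the contested setting is, here’s a sketch (the function name and hostname are illustrative) of what Do Not Track amounts to on the wire: a single HTTP request header.

```python
def build_request(host, path, dnt=True):
    # Build a raw HTTP/1.1 GET request. The entire Do Not Track
    # mechanism, as sent by the browser, is the single "DNT: 1"
    # header below; honoring it is up to the receiving server.
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
    ]
    if dnt:
        lines.append("DNT: 1")
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_request("example.com", "/"))
```

That one header, on or off by default, is what the whole fight is about.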

Meanwhile, [some participants in] the W3C [working group] are suggesting that the best we can possibly do is that whenever you install a new browser, it goes through an Eliza-like process of interviewing you about weird technical settings, rather than having a great first-run experience.

Now it’s true, some people are ok with a tradeoff between what advertisers want (to trade content for ads) and what they want (privacy). Some advertisers go so far as to claim that there would be no content without ads, and they are, simply, flatly wrong. There is, and will continue to be, content like this, which I hope you’re enjoying. I’ll draw to your attention that this blog is ad-free. We write because we have ideas we want to share. I’m sure that with fewer ads, we’d see less Paris Hilton ‘content’. But more importantly, the advertising industry is good at spreading messages. If they need DNT “off”, perhaps they could spread the message of why that’s a good thing for people, and, as is their wont and charter, convince people to make that change.

But the simple truth, known to the ad industry, the W3C and to Mozilla, is that most people prefer not to be tracked, in the same way most people prefer beef burgers. The “please let us track you” people have a hard message to spread, which is why they prefer to fight in relative obscurity over defaults.

Some additional background links: “Ad industry whines while privacy wonks waffle,” “Could the W3C stop IE 10’s Do Not Track plans?”

I should be clear that my distaste at the idea of a vegan BBQ is mine. Even if my employer and I both prefer beef burgers, my opinions are mine, theirs are theirs, and I didn’t cook this blog up with them.

[Update: Clarified that I didn’t mean to imply the decision was that of the W3C as a whole.]

Feynman on Cargo Cult Science

On Twitter, Phil Venables said “More new school thinking from the Feynman archives. Listen to this while thinking of InfoSec.”

During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas–which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact, that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked–or very little of it did.

But even today I meet lots of people who sooner or later get me into a conversation about UFOs, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

It’s excellent advice. Take a listen, and think how it applies to infosec.