Author: Chandler


The Electronic Frontier Foundation has published a report on the State of HTTPS Security that promises to be the first in a series and is well worth reading on its own.

The TL;DR version:  HTTPS adoption is growing rapidly, but the current system, especially the Certificate Authorities, has much room for improvement before it actually delivers the level of security that HTTPS implies.

Also, if you’re on Firefox and don’t already use it, EFF’s HTTPS Everywhere Plug-In is now officially 1.0.

Some random cloudy thinking

Thanks to the announcement of Apple’s iCloud, I’ve been forced to answer several inquiries about The Cloud this week.  Now, I’m coming out of hiding to subject all of you to some of it…

The thing that you must never forget about The Cloud is that once information moves to The Cloud, you’ve inherently ceded control of that information to a third party and are at their mercy to protect it appropriately–usually trusting them to do so in an opaque manner.

What does that mean?  Well, start with the favored argument for The Cloud, the mighty “cost savings.”  The Cloud is not cheaper because the providers have figured out some cost-savings magic that’s not available to your IT department.  It’s cheaper because their risk tolerances are not aligned to yours, so they accept risks that you would not, simply because the mitigation is expensive.

Argument #2 is that it’s faster to turn up capability in the cloud–also a self-deception.  For anything but the most trivial application, the setup, configuration, and roll-out are much more time-consuming than the infrastructure build.  Even when avoiding the infrastructure build produces non-trivial time savings, those savings are instead consumed by contract negotiations, internal build-vs-rent politics and (hopefully) risk assessments.

Finally, The Cloud introduces a new set of risks inherent in having your information in places you don’t control.  This morning, for example, Bruce Schneier again mentioned the ongoing attempts by the FBI to convince companies like Microsoft/Skype, Facebook and Twitter to provide backdoor access to unencrypted traffic streams from within their own applications.  These risks are even more exaggerated in products where you’re not the customer, but rather the product being sold (e.g. Facebook, Twitter, Skype, etc.).  There, the customer (i.e. the Person Giving Them Money) is an advertiser or the FBI, et al.  Your privacy interests are not (at least in the eyes of Facebook, et al.) Facebook’s problem.

For those of you that like metaphors: in the physical world, I don’t (usually) choose to ride a motorcycle without full safety gear (helmet, jacket, pants, gloves, boots, brain).  I do, however, drive a car with only a minimum of safety gear (seatbelt, brain) because the risk profile is different.  In the Information Security world, for the same reason, I don’t usually advocate putting information in the cloud whose loss or disclosure would impact our ability to ship products or maintain competitive advantage (unless required by law, a Problem I Have).

That’s not to say that I’m opposed to the cloud–I’m actually a fan of it where it makes sense.  That means that I use the cloud where the utility exceeds the risk.  Cost is rarely a factor, to be honest.  But just like any other high-risk activities I engage in, I think it’s important to make honest, informed choices about the risks I’m accepting and then act accordingly.

A Few Data Points

First, for those who might have missed it, Google has released Google Refine, a free tool for cleaning dirty data sets.  It allows you to pull in disparate data, then organize and clean it for consistency.

Next, some interesting thoughts on how “anonymized” data sets aren’t, and some thoughts on the implications of this from a risk perspective.  None of this is groundbreaking, but it’s nice to see some sane thinking about two facts that aren’t going away, no matter how much people might wish otherwise:  that data will continue to be accumulated and that it will be shared with varying levels of consideration for the risks of doing so.
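For a toy illustration of why “anonymized” often isn’t, consider the classic re-identification trick: join a name-stripped release against a public record on quasi-identifiers (ZIP code, birth date, sex).  All names and records below are invented for the sketch:

```python
# "Anonymized" release: names removed, quasi-identifiers left in.
medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "dob": "1962-01-02", "sex": "M", "diagnosis": "asthma"},
]

# Public record (e.g. a voter roll): names PLUS the same quasi-identifiers.
voter_roll = [
    {"name": "A. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "B. Jones", "zip": "02141", "dob": "1970-03-15", "sex": "M"},
]

def reidentify(released, public, keys=("zip", "dob", "sex")):
    """Join the two data sets on the quasi-identifier columns,
    attaching names back onto the 'anonymized' records."""
    index = {tuple(r[k] for k in keys): r["name"] for r in public}
    return [
        {**rec, "name": index[tuple(rec[k] for k in keys)]}
        for rec in released
        if tuple(rec[k] for k in keys) in index
    ]

matches = reidentify(medical, voter_roll)
# One record re-identified: the first diagnosis is now attached to a name.
```

No cryptography gets broken here; the release simply shared more columns than it should have, and someone else shared the rest.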

Finally, yet another real-world example of risk homeostasis at work:  People who take vitamins make poorer health decisions in other areas.  Based on the number of times I’ve been asked questions along the lines of, “I don’t need to worry about x because I’ve {patched|installed anti-virus|switched to Apple|etc.}, right?” I’d say this still holds true for computing, too.

Now if you’ll excuse me, I have to go clean my anonymized data set so I can share it far and wide, which is OK since I’m going to encrypt it before I send it, right?


Measurement Priorities

Seth Godin asks an excellent question:

Is something important because you measure it, or is it measured because it’s important?

I find that we tend to measure what we can, rather than working toward being able to measure what we should, in large part because some variation of this question is not asked.

I’m going to pick on malware fighting as a case-in-point.  Is there lots of nasty malware out there that can really destroy your infrastructure?  Absolutely, and as a result, IT and IT Security teams put tremendous effort into detecting and cleaning malware infections.

But how much of that malware actually impacts business, either by affecting the availability of the IT environment or producing a material incident?  If your business is like most, then the answer is hardly ever*.

So why does the security industry spend so much money and time (another form of money) on malware?  Because we can.  Never mind that the stuff you should truly be worried about if you’re talking about protecting The Business (as opposed to The Infrastructure) is the APT/Custom Malware/Targeted Threat stuff, which is invisible on the anti-virus console.

Because they can.


* While that could be changing thanks to innovations like Stuxnet, who honestly thinks that messing up your business is worth burning three 0-days?  Really?  Get over yourself.

In the meantime, you can test this argument in your own environment.  Compare the number of pieces of malware you’ve detected and cleaned (versus prevented) with the number that significantly impacted more than the infected person’s machine.

We’ve had one in the past year that might meet the disruption test, versus multiple malware cleanup tickets per week (not daily, but it tends to be spiky, so the average is greater than one per day.  Still…).  It took out a single user who had managed to break his Anti-Virus, either because he didn’t like when the full scan ran or because it kept stopping him from installing the trojaned, pirated software he’d downloaded–I never quite got a clear answer on this.  The infection jammed the mail queue with outbound spam, causing a degradation (but not disruption) of outbound email for a few hours.

Be celebratory, be very celebratory

A reminder for those of you who haven’t read or watched “V for Vendetta” one time too many, it’s Guy Fawkes Day today:

The plan was to blow up the House of Lords during the State Opening of Parliament on 5 November 1605…

…Fawkes, who had 10 years of military experience fighting in the Spanish Netherlands in suppression of the Dutch Revolt, was given charge of the explosives.

The plot was revealed to the authorities in an anonymous letter sent to William Parker, 4th Baron Monteagle, on 26 October 1605. During a search of the House of Lords at about midnight on 4 November 1605, Fawkes was discovered guarding 36 barrels of gunpowder – enough to reduce the House of Lords to rubble – and arrested.

Guy Fawkes Day is a celebratory event in the UK with fireworks and bonfires.  It’s also when some of my ex-pat friends stock up on fireworks to ensure they can be suitably obnoxious on the 4th of July, but that’s another story…

So why is it that in England, a failed terror plot has become an excuse to have a party, whereas in the U.S., a failed or thwarted terror plot is  an excuse to strip away Civil Liberties?

Don't fight the zeitgeist, CRISC Edition

Some guy recently posted a strangely self-defeating link/troll/flame in an attempt to (I think) argue with Alex and/or myself regarding the relevance or lack thereof of ISACA’s CRISC certification.  Now given that I think he might have been doing it to drive traffic to his CRISC training site, I won’t show him any link love (although I’m guessing he’ll show up in comments and save me the effort).  Still, he called my Dad (“Mr. Howell”) out by name, which is a bit cheeky seeing as how my Dad left the mortal coil some time ago, so I’ll respond on Dear ol’ Dad’s behalf.

Now the funny thing about that is that I had pretty much forgotten all about CRISC, even though we’ve had a lot of fun with it here at the New School and made what I thought were some very good points about the current lack of maturity in Risk Management and why the last thing we need is another process-centric certification passing itself off as expertise.

I went back and re-read the original articles, and I think that they are still spot-on, so I decided that I would instead take another look at CRISC-The-Popularity-Contest and see who has turned out to be right in terms of CRISC’s relevance now that it’s been nine months almost to the day since ISACA announced it.

Quick, dear readers, to the GoogleCave!

Hmm…CRISC isn’t doing so well in the Long Run.  That’s a zero (0) for the big yellow crisc.

Of course, in the Long Run, we’re all dead, so maybe I should focus on a shorter time frame. Also, I see that either Crisco’s marketing team only works in the fall or most people only bake around the holidays.  If you had asked me, I would not have predicted that Crisco had a strong seasonality to it, so take whatever I say here with a bit of shortening.

Let’s try again, this time limiting ourselves to the past 12 months.

Nope…still nothing, although the decline of the CISSP seems to have flattened out a bit.  Also, we can definitely now see the spike in Crisco searches correlating to Thanksgiving and Christmas.  Looks like people don’t bake for Halloween (Too bad.  Pumpkin bread is yummy) and probably don’t bake well for Thanksgiving and Christmas if they have to google about Crisco.

Oh, well.  Sorry, CRISC.

Now, if you’ll excuse me, I have a cake to bake.

P.S.  Yes, I’m aware my screenshots overflow the right margin.  No, I’m not going to fix it.

CRISC? C-Whatever

Alex’s posts on CRISC are, according to Google, more authoritative than the CRISC site itself:

Not that it matters.  CRISC is proving itself irrelevant by failing to make anyone care.  By way of comparison, I googled a few other certifications for the audit and security world, then threw in the Certified Public Accountant (CPA) for good measure.

Needless to say, CPA crushed the audit and security certs with ~30,700,000 Google hits.   CISM & CISA had 15,400,000 and 15,000,000, respectively.  The CISSP showed a not-disrespectable 9,390,000.

Then we got into what I will kindly call the “add-on” certs, even though they are frequently intended to be extensions or specialist certifications.  I chose the ISSAP and ISSMP, the post-CISSP Architecture & Security Management certifications from (ISC)².  ISSAP had 181,000 hits, ISSMP had only 69,000 hits, making it the only certification I checked that fared worse than CRISC.

Now that the data is out of the way, I can get to the real question.

Does no one care about CRISC because no one cares about yet-another-super-specialized-certification?  And/Or does no one care about CRISC because no one cares about risk assessment?

Well, given that googling “Risk Assessment” (in quotes) got me 12,400,000 hits, I’m going to go with yes on the first question and no on the second.

Now, combining Alex’s CRISC-O post with something Nick Selby said in a conversation he and I had a while back, “You can’t manage a risk you don’t understand,” then all a Risk Assessment Certification can even potentially do is imply that the holder knows how to follow a process–which I would argue is the least intellectually challenging and valuable part of any knowledge work activity.

Personally, I care a great deal about Risk Assessment, both as an interesting intellectual problem and also as a tool for solving real-world problems, even if I generally lack the time to do it right.  I certainly don’t have time to get certified as a Risk Assessor, nor do I feel the need.  Given my opinion that certifications are just a signalling mechanism in the hiring process, that should come as a surprise to no one.

Life without Certificate Authorities

Since it seems like I spent all of last week pronouncing that ZOMG!  SSL and Certificate Authorities is Teh Doomed!, I guess that this week I should consider the alternatives.  Fortunately, thanks to the Tor Project Blog, we learn what life is like without CAs:

Browse to a secure website. You should get the intentionally scary “This Connection is Untrusted” certificate error page. However, you should expect this error, as there are no more CAs to validate against. Click “I Understand the Risks”. Click “Add Exception”. Firefox should retrieve the certificate. Click “View”. This is where it gets interesting.

How do you validate the certificate? It depends on the other end. For sites I worry about, like my bank or favorite shopping stores, I call support and ask for the SSL fingerprint and serial number. Sometimes the support person even knows what I’m talking about. I suspect they just open their browser, click on the lock icon and read me the information. Generally, it takes some work to get the information. Further, I’ll compare the cert received through Tor and through non-Tor ssh tunnels on disparate hosts. However, you only have to do this checking once per cert. Once you have it, Firefox stores it as an exception and, if the cert doesn’t change between visits, doesn’t interrupt you with the cert error page.

Even this brings a few caveats:

Does the list of certs in my browser open me up to unique fingerprinting in some way? Would I notice if a Packet Forensics device was used? Unless someone screwed up, I doubt it. And a seldom asked question is, have I ever caught ssl certs being faked or changed by a man-in-the-middle? Yes I have.

And there’s the rub.  Even without using the CA as a proxy for trust, a suitably privileged attacker could still MITM that traffic stream.  So even going it alone is not a panacea; it arguably only reduces the risk of a successful non-government attacker (i.e. a fraudster) who breaches the CA’s verification processes.
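Incidentally, the manual check described in the quote amounts to hand-rolled certificate pinning: fetch the certificate, compute its fingerprint, and compare against a value obtained out of band.  A minimal Python sketch of the idea (the function names here are mine, not from the Tor post):

```python
import hashlib
import ssl

def fingerprint_from_der(der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der).hexdigest()

def fetch_fingerprint(host: str, port: int = 443) -> str:
    """Fetch a server's certificate and compute its fingerprint.
    get_server_certificate doesn't itself validate the CA chain,
    which is the point: we're replacing CA trust with a pin."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint_from_der(ssl.PEM_cert_to_DER_cert(pem))

def pin_matches(observed: str, pinned: str) -> bool:
    """Compare fingerprints, ignoring the colons and case that a
    support desk would typically read out over the phone."""
    def norm(s: str) -> str:
        return s.replace(":", "").lower()
    return norm(observed) == norm(pinned)
```

As in the quote, you’d call fetch_fingerprint over two disparate paths (say, directly and through an ssh tunnel on another host) and compare both against the out-of-band value; any mismatch is a red flag.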

Right now, though, I think that for most people, CA’s suffer the same shortfalls which Churchill famously ascribed to Democracy:  “The worst form of online identity verification except for all those others that have been tried.”

Perhaps, as the challenges of PKI alternatives like the Web of Trust demonstrate, Trust is an inherently un-scalable concept online.  If that is the case, how do we align and partition risk appropriately and, even more importantly, how do we do it in a manner that the average Internet user will get right, even if they don’t comprehend it?

Updated to correct a typo and clarify another