Why we need strong oversight & transparency

The ACLU has a new report, Policing Free Speech: Police Surveillance and Obstruction of First Amendment-Protected Activity (.pdf), which surveys news accounts and studies of questionable snooping and arrests in 33 states and the District of Columbia over the past decade.

The survey provides an outline of, and links to, dozens of examples of Cold War-era snooping in the modern age.

“Our review of these practices has found that Americans have been put under surveillance or harassed by the police just for deciding to organize, march, protest, espouse unusual viewpoints and engage in normal, innocuous behaviors such as writing notes or taking photographs in public,” Michael German, an ACLU attorney and former Federal Bureau of Investigation agent, said in a statement.

Via Wired. Unfortunately, as Declan McCullagh reports, “Police push to continue warrantless cell tracking,” just one of a host of surveillance technologies we have yet to grapple with.

For example, it seems Foursquare had an interesting failure of threat modeling, where they failed to grok the information disclosure aspects of some of their pages. See “White Hat Uses Foursquare Privacy Hole to Capture 875K Check-Ins.” To the extent that surveillance is opt-in, it is far less worrisome than when it’s built into the infrastructure or forced on consumers via contract revisions.

Thinking about Cloud Security & Vulnerability Research: Three True Outcomes

When opining on security in “the cloud,” we as an industry speak very much in terms of real and imagined threat actions. And that’s a good thing: trying to anticipate security issues is a natural, prudent task. In her blog article, “Risk is not a Synonym for ‘Lack of Security’,” Lori McVittie brings up an example of one of these imagined cloud security issues: a “jailbreak” of the hypervisor.

And that made me think a bit, because essentially what we want to discuss, what we want to know, is: what will happen not if, but when, vulnerabilities in the hypervisor, the cloud API, or some other new bit of cloud computing technology are discovered and begin to be exploited?

Now in baseball, sabermetricians (those who do big-time baseball analysis) break the game down into three true outcomes: the walk, the home run, and the strikeout. They are called the three true outcomes because they supposedly are the only events that do not involve the defensive team (other than the pitcher and catcher). Players like Mark McGwire, Adam Dunn, or, if you’re old-school, Rob Deer are described as three true outcome players because their statistics over-represent those outcomes vs. other probable in-play events (like a double, or reaching via error, or what have you). Rumor has it that Verizon’s Wade “swing for the fences” Baker was a three true outcome player in college.

In vulnerability research and discovery, I see three true outcomes as well. Obviously the reality isn’t that an outcome is always one of the three; just as in baseball, there are plenty of Ichiro Suzuki or David Eckstein-type players whose play does not lend itself to a propensity towards the three true baseball outcomes. But for the purposes of discussing cloud risk, I see that any new category of vulnerability can be said to be:

1.)  Fundamentally unfixable (RFID, for example).  If this level of vulnerability is discovered, cloud computing might devolve back into just hosting or SaaS circa 2001 until something is re-engineered.

2.)  Addressed, but lingering due to some fundamental architecture problem that, for reasons of backwards compatibility, cannot be significantly “fixed,” and instead requires patch after patch on a constant basis.

3.)  Addressed and re-engineered so that the probability of similar, future events is dramatically reduced.

Now, about a year ago, I said that for the CISO, the move to the cloud was the act of gracefully giving up control. The good news is that the CISO really has no ability to affect which of the three outcomes we will eventually see, so there is little point agonizing over it. But how cloud security issues are addressed long-term – via technology, via information disclosure (actually, the entropy of information disclosure), and via contract language (how technology and entropy are addressed as part of the SLA) – these are the aspects of control the CISO must seek to impact. That impact is both contractual and, on her part, procedural: planning for one of the three outcomes to eventually happen.

The technology and information disclosure aspects of that contract with the cloud provider simply must focus on the ability to detect and respond quickly. Chances are your cloud farm won’t be among the first compromised when an eventual exploit occurs. But especially if what you’ve moved to the cloud has value to the broader threat community, that doesn’t mean (as Lori says) you simply accept the risk. Developing contingency plans based on the three true outcomes can help ensure that the CISO can at least cope with the cloud transition.
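As a throwaway illustration of what contingency plans keyed to the three outcomes might look like, here is a minimal sketch; the outcome labels and responses are my own hypothetical shorthand, not anything from a standard or from Lori’s post:

```python
# Hypothetical sketch: mapping the three true outcomes of a new cloud
# vulnerability class to pre-agreed contingency responses. The plans
# below are invented placeholders, not prescriptions.
from enum import Enum

class Outcome(Enum):
    UNFIXABLE = "fundamentally unfixable"
    LINGERING = "patched forever, never really fixed"
    REENGINEERED = "re-engineered, recurrence unlikely"

contingency = {
    Outcome.UNFIXABLE: "exit plan: repatriate workloads from the provider",
    Outcome.LINGERING: "SLA terms for patch cadence and disclosure entropy",
    Outcome.REENGINEERED: "verify the fix, then resume normal operations",
}

for outcome, plan in contingency.items():
    print(f"{outcome.value}: {plan}")
```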

RiskIT – Does ISACA Suffer From Dunning-Kruger?

Just to pile on a bit….

You ever hear someone say something, and all of a sudden you realize that you’ve been trying to say exactly that, in exactly that manner, but hadn’t been so succinct or elegant about it? That someone much smarter than you had already thought about the subject a whole lot, and there are actually formal studies and definitions and so forth, and you feel dumb because there’s no way you could have actually googled for the subject in that way, but there it is on Wikipedia in black and white? Happens to me all the time.

I was reading a piece in the NYT this morning on the Dunning-Kruger effect, and had a little bit of synchronicity when I realized that my entire problem with certifying people about risk and controls has to do with exactly this subject. My issue with ISACA and CRISC is that I know there is so much that I don’t know, and indeed that we, the infosec industry, do not know. So in my mind, if we wanted to rationally and ethically “certify” someone as a domain expert about risk and controls, about the only thing we could do is test that they are aware of their (and our) limitations. To do otherwise seems rather irrational to me.

TAKE ALEX’S “APPARENTLY OK” RISK PROFESSIONAL EXAM:

A dozen years or so ago, I was PM for a firewall product that had to go through certification. The certification involved several different tests, but at the time the key to certification was simply that your firewall would not pass packets when set in default-deny state. And it cost a lot to certify. This upset me to no end, mainly because I’d rather have spent the $50k certification cost on building a new GUI.

Knowing my frustration, one of the engineering team printed out Marcus Ranum’s “Apparently OK” firewall certification, taped it to one of our boxes, and congratulated me on getting certified.

With that spirit in mind, and with all apologies to Marcus, let me present Alex Hutton’s Apparently OK Risk Professional Certification Exam! Because frankly, the problem isn’t with “using risk management” – no, I’m still a very big proponent of that. The problem is that risk analysis is steeped in critical thinking, and not identifying uncertainty is, well, less than professional in my opinion.

THE ALEX HUTTON “APPARENTLY OK” RISK PROFESSIONAL EXAM

Dear Prospective Risk Professional,

Congratulations on deciding to enter the exciting field of Information Risk Management!  Your journey will be confusing, frustrating, and, if you get off on performing Sisyphean tasks, rewarding.

To achieve a state of “APPARENTLY OK” we ask that you take the following exam.  The exam is one question, and you have three (3) minutes to answer it.

For each of the risk assessment methodologies inventoried by ENISA (http://rm-inv.enisa.europa.eu/rm_ra_methods.html), and for RiskIT, please tell us how the methodology is fundamentally incapable of delivering the results claimed.

BONUS:  Do so in a total of two sentences.

There are multiple right answers.

Good Luck!

CRISC? C-Whatever

Alex’s posts on CRISC are, according to Google, more authoritative than the CRISC site itself.

Not that it matters.  CRISC is proving itself irrelevant by failing to make anyone care.  By way of comparison, I googled a few other certifications for the audit and security world, then threw in the Certified Public Accountant (CPA) for good measure.

Needless to say, CPA crushed the audit and security certs with roughly 30,700,000 Google hits. CISM and CISA had 15,400,000 and 15,000,000, respectively. The CISSP showed a not-disrespectable 9,390,000.

Then we got into what I will kindly call the “add-on” certs, even though they are frequently intended to be extensions or specialist certifications. I chose the ISSAP and ISSMP, the post-CISSP architecture and security management certifications from ISC². ISSAP had 181,000 hits; ISSMP had only 69,000, making it the only certification I checked that fared worse than CRISC.

Now that the data is out of the way, I can get to the real question.

Does no one care about CRISC because no one cares about yet another super-specialized certification? And/or does no one care about CRISC because no one cares about risk assessment?

Well, given that googling “Risk Assessment” (in quotes) got me 12,400,000 hits, I’m going to go with yes on the first question and no on the second.

Now, combining Alex’s CRISC-O post with something Nick Selby said in a conversation he and I had a while back (“You can’t manage a risk you don’t understand”), all a risk assessment certification can even potentially do is imply that the holder knows how to follow a process, which I would argue is the least intellectually challenging and least valuable part of any knowledge work activity.

Personally, I care a great deal about Risk Assessment, both as an interesting intellectual problem and also as a tool for solving real-world problems, even if I generally lack the time to do it right.  I certainly don’t have time to get certified as a Risk Assessor, nor do I feel the need.  Given my opinion that certifications are just a signalling mechanism in the hiring process, that should come as a surprise to no one.

CRISC-O

PREFACE:  You might interpret this blog post as being negative about risk management, dear readers. Don’t. This isn’t a diatribe against IRM, only an argument for why “certification” around information risk is a really, really silly idea.
Apparently, my blog post about why I don’t like the idea of CRISC has long-term stickiness. Just today, Philip writes in the comments:
Let’s be PROACTIVE instead of critical. I would love to hear about what CAN be a better job practice and skill set that is needed. I am working in both the commercial world and the Department of Defense, and develop programs for training and coaching the skills, from MBA to IT audit and all of technical security, for our Certification of Information Assurance Workforce, and I conduct all the CISM/CISA training and review courses for ISACA in both commercial and military environments. I have worked on risk management for years, at ERM as well as IT security/risk, and a common theme in all of this is RISK MANAGEMENT. When I discuss the value of IT with MBA students, or discuss CMMI with MIS students or development houses, or discuss ITIL/COBIT, or discuss with business managers what will keep them from reaching their goals and objectives, it is ALL risk management, put into a different taxonomy that that particular audience can understand.
I have not been impressed with the current risk management certifications that are available. I did participate in ISACA’s job task analysis (which is a VERY positive thing about how ISACA keeps their certifications aligned to practice). It is also not perfect, but I think it is a start. If we contribute instead of just complain, it can get better, or we can create something better. What can be better?
So, Alex, I welcome a personal dialog with you or others on what and how we can do it better. I can host a web conference and invite all who want to participate (up to 100 attendee capacity).
I’ll take you up on that offer, Philip. Unfortunately, it’s going to be a very short Webex, because the answer is simple: “you can’t do risk certification better, because you shouldn’t be doing it in the first place.”
That was kind of the point of my blog posts.
Just to be clear:
In IT I’m sort of seeing two types of certifications:
  1. Process-based certifications (I can admin a Check Point firewall, or Active Directory, or whatnot)
  2. Domain-knowledge-based certifications (CISA, CISM)
The problems with a risk management certification are legion. But to highlight a few in the context of certifying individuals:
A).  Information Risk Management is not an “applied” practice of two domains. CISM, CISA, and similar certs are mainly “you know how to do X – now apply it to InfoSec.” IRM, done with more than a casual hand wave towards following a process because you have to, is much more complex than these, requiring more than just mashing up, say, “management” and “security,” or “auditing” and “security.”
(In fact, I’d argue that IRM shouldn’t be part of an MIS course load; rather, it should be its own track, with heavier influences from probability theory, history of science, complexity theory, economics, and epidemiology than from, say, engineering, computer science, or MIS.)
B).  IRM is not a “process.” Now, obviously, certain risk management standards are a process. In my opinion, most risk management standards are nothing BUT a re-iteration of a Plan/Do/Check/Act process. And just to be clear, I have no problem if you want to go get certified in FAIR or OCTAVE or Blahdity-Blah – I’m all for that. That shows that you’ve studied a document and can regurgitate its contents, presumably on demand, and within the specific subjective perspective of those who taught you.
And similarly, if ISACA wants to “certify” that someone can take their RiskIT document and be a domain expert at it, groovy. Just don’t call that person “Certified in Risk and Information Systems Control™,” because they’re not. They’re “Certified in our expanded P/D/C/A cycle that is yet another myopic way to list a bajillion risk scenarios in a manner you can’t possibly address before the Sun exhausts its supply of hydrogen.”™
RE-ITERATING THE POINT
Look, as my challenge to quantify the impact of the risk reduction from a COBIT program suggests, IRM is more than these standards.
And I gotta be clear here: you’ve hit a pet peeve of mine, the whole “Let’s be PROACTIVE” thing. First, criticism and disproof are part of the natural evolution of ideas; to act like they aren’t is kinda bogus. And like I said above, you’re assuming that there is something we should be doing about individual certification instead of CRISC – but THERE ISN’T AN ALTERNATIVE, AND THERE SHOULDN’T BE. You’re saying “let’s verify people can ride their unicorns properly into Chernobyl” and assuming I’m saying, you know, “maybe we shouldn’t ride unicorns.” I’m not. I’m saying “we shouldn’t go to Chernobyl, regardless of the means of transportation.”
And in terms of what we CAN do, well, in my eyes that’s SOIRA. Now don’t get me wrong: as best as I understood Jay’s vision, it’s not a specific destination, it’s just a destination that isn’t Chernobyl. I don’t know where it is going yet, Phil, but I’m optimistic that Kevin, Jay, John, and Chris are pretty capable of figuring it out, and of doing so out of passion, not because they want to sell more memberships, course materials, or certifications. Either way, I’m just along for the ride, interested in driving when others get tired and playing a few mix tapes along the way.

Between an Apple and a Hard Place

So the news is all over the web about Apple changing their privacy policy. For example, Consumerist says “Apple Knows Where Your Phone Is And Is Telling People”:

Apple updated its privacy policy today, with an important, and dare we say creepy, new paragraph about location information. If you agree to the changes (which you must do in order to download anything via the iTunes store), you agree to let Apple collect, store, and share “precise location data, including the real-time geographic location of your Apple computer or device.”

Apple says that the data is “collected anonymously in a form that does not personally identify you,” but for some reason we don’t find this very comforting at all. There appears to be no way to opt out of this data collection without giving up the ability to download apps.

Now, speaking as someone who was about to buy a new iPhone (once the servers stopped crashing), what worries me is that the new terms are going to be in the new license for new versions of iTunes and iPhones.

Today, it’s pretty easy to not click OK. But next week or next month, when Apple ships a security update, they’re going to require customers to make a choice: privacy or security. Apple doesn’t ship patches for the previous rev of anything but their OS. iTunes problem? Click OK to give up your privacy, or don’t, and give up your security.

Not a happy choice, being stuck between an Apple and a hard place.

Bleh, Disclosure

Lurene Grenier has a post up on the Google/Microsoft vulnerability disclosure topic. I commented on the Sourcefire blog (I couldn’t get the password reminder from ZDNet, and frankly I’m kind of surprised I already had an account, so I didn’t post there), but thought it was worth discussing my comments here a bit, because I think we can see a difference between evidence-based risk management, or New School Security, and expert opinion. I’m not trying to rip on Lurene here, far from it. But disclosure is such a crazy topic for our industry that I think we should look to back up all the logical assertions we make.

For example, Lurene says:

“when a vulnerability becomes public it is no longer as useful for serious attackers”

I have to ask: do we have a data set to support this claim? What Lurene is saying makes sense, right? Bad guys like to use special toys, for various reasons, not the least of which is our inability to prevent or detect them. But to really test this hypothesis, we’d need a rational scale for describing threat capability, a frequency of particular vulnerability/exploit use across that population, and then a comparison of that frequency against a data set describing known 0day use in data breaches.
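As a sketch of what such a test might look like once we had that data (everything below, tiers and counts alike, is an invented placeholder rather than real breach data):

```python
# Hypothetical sketch: does 0day use rise with attacker capability?
# Every record below is invented; a real test needs a shared corpus.
from collections import Counter

# Each record: (threat_capability_tier, exploit_was_0day_at_time_of_use).
# The tier is an assumed ordinal scale, 1 = opportunistic .. 4 = elite.
incidents = [
    (4, True), (4, True), (4, False),
    (3, True), (2, False), (1, False), (1, False),
]

totals = Counter()
zero_days = Counter()
for tier, was_0day in incidents:
    totals[tier] += 1
    if was_0day:
        zero_days[tier] += 1

for tier in sorted(totals):
    rate = zero_days[tier] / totals[tier]
    print(f"capability tier {tier}: {rate:.0%} of observed exploits were 0day")
```

If Lurene’s claim holds, the 0day rate should climb with the capability tier; without a public incident corpus, though, the exercise stays hypothetical.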

Again, from the Sourcefire blog:

“The companies with high-value data that are regularly attacked are able to proactively protect themselves.”

My 7th grade science teacher would toss this out on its ear as a hypothesis. And I’m not just picking on Lurene here; we (the security industry) do this all the time, making statements without enough definition around loaded terms like “high-value data,” “regularly attacked,” and “proactive protection.” As I say in my comments, my experience (small sample size warning) is that there isn’t necessarily a correlation between “high-value data” (where high-value data is financial, medical, trade secret, or government/defense data) and the ability or willingness to create “proactive protection.”

Even more frustrating is the claim that “these companies patch within some time frame.” Not faulting Lurene here, just really lamenting that there isn’t a public data store where I could look up “some time frame” and compare it against data on any “uptick” in attacks.

YOU DOWN WITH APT? (YEAH YOU KNOW ME)

“The loss due to a 20+ company exploit spree such as “Aurora” is significantly greater than the monetary loss due to low-end compromises which can be cleaned with off the shelf anti-virus tools.”

Reviewing the data sets I have at my disposal, I’m seeing:

1.) I don’t have a good estimate for hard (or soft) costs for “Aurora”, though I suppose I would accept a “high” qualitative label.

and

2.)  I don’t see data supporting the claim that breaches of significant value are predominately caused by tools that cannot be “cleaned with off the shelf anti-virus tools.” Rather, I’m seeing data that supports the notion that for a significant portion of data breaches, the effort to prevent them could have been classified as “simple and cheap” (source: VZ DBIR).

Finally, I’ll add my own editorial point here, just so Lurene can rip me back 🙂

I think I would have difficulty asserting that we should *only* care about “large corporations, government, and military targets with the goals of industrial espionage and military superiority.” Off the top of my head, I can think of hundreds of millions of records exposed by data breaches that came from organizations we might say are in the “SMB market.”
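To see why the aggregate matters, consider a toy frequency-times-magnitude calculation; every number below is invented purely for illustration, not drawn from the DBIR or any other data set:

```python
# Toy comparison of aggregate annual loss: one rare, expensive spree
# vs. many cheap, common compromises. All numbers are invented.
breach_classes = {
    "Aurora-style spree": {"events_per_year": 1, "loss_per_event": 50_000_000},
    "low-end compromises": {"events_per_year": 5_000, "loss_per_event": 25_000},
}

for name, b in breach_classes.items():
    total = b["events_per_year"] * b["loss_per_event"]
    print(f"{name}: ~${total:,} aggregate annual loss")
```

With these made-up numbers, the low-end class dominates in aggregate even though each individual event is trivial, which is exactly why hand-waving about “significantly greater” losses needs data behind it.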

BOTTOM LINE: Defending the faith might be a lot easier if there were data to support the defense.

Measuring The Speed of Light Using Your Microwave

Using a dish full of marshmallows.  We’re doing this with my oldest kids, and while I was reading up on it, I had to laugh out loud at the following:

…now you have what you need to measure the speed of light. You just need to know a very fundamental equation of physics:

Speed of a Wave (c) = Frequency (f) x Wavelength (L)

The distance between the melted sections of the marshmallow is in fact L/2, because there are two nodes for each wave (see animation). So if you have measured 6 cm and your oven operates at 2450 MHz, then your measured speed of light is (0.12 x 2,450,000,000) 294,000,000 metres per second.

The agreed value of the speed of light through a vacuum is 299,792,458 metres per second. See how accurately you can measure it? What could you do to make the experiment better, and thus get a closer answer?

IMHO, we need more published security metrics (and risk analytics) that don’t worry about those few million meters per second, and focus rather on the cleverness of using marshmallows and microwaves.
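For anyone who wants to check the kitchen arithmetic, here’s a minimal sketch using the 6 cm node spacing and 2450 MHz magnetron frequency from the quoted write-up:

```python
# Speed of light from melted-marshmallow spacing in a microwave oven.
node_spacing_m = 0.06            # measured distance between melted spots (6 cm)
frequency_hz = 2_450_000_000     # typical oven magnetron frequency (2450 MHz)

wavelength_m = 2 * node_spacing_m  # melted spots sit half a wavelength apart
c_measured = frequency_hz * wavelength_m
c_accepted = 299_792_458

print(f"measured c = {c_measured:,.0f} m/s")   # 294,000,000 m/s
print(f"accepted c = {c_accepted:,} m/s")
print(f"error      = {abs(c_measured - c_accepted) / c_accepted:.1%}")  # ~1.9%
```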

Alex on Science and Risk Management

Alex Hutton has an excellent post on his work blog:

Jim Tiller of British Telecom has published a blog post called “Risk Appetite, Counting Security Calories Won’t Help”. I’d like to discuss Jim’s blog post because I think it shows a difference in perspectives between our organizations. I’d also like to counter a few of the assertions he makes because I find these to be misunderstandings that are common in our industry.

“Anyone who knows me or has subjected themselves to my writings knows I have some uneasiness with today’s role of risk. It’s not the process, but more of how there is so much focus on risk as if it were a science – but it’s not. Not even close.”

Let me begin my rebuttal by first arguing that risk management, at its basis, is at least “scientific work.” What I mean by that is elegantly summed up by Eliezer Yudkowsky on the Less Wrong blog. To use Eliezer’s words, I’ll offer that scientific work is “the reporting of the likelihood ratios for any popular hypotheses.”
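For readers who haven’t run across the term: the likelihood ratio measures how strongly a piece of evidence favors one hypothesis over another. A textbook formulation (my illustration, not anything from Alex’s post):

```latex
% Likelihood ratio of evidence E for hypothesis H_1 over hypothesis H_2:
\Lambda = \frac{P(E \mid H_1)}{P(E \mid H_2)}
% Invented example: an alert that fires 90% of the time during a real
% intrusion but 30% of the time otherwise gives \Lambda = 0.9/0.3 = 3,
% i.e. the evidence favors the intrusion hypothesis three to one.
```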

You should go read “Risk Appetite: Counting Risk Calories is All You Can Do“.