Tag: Cloud

Some random cloudy thinking

Thanks to the announcement of Apple’s iCloud, I’ve been forced to answer several inquiries about The Cloud this week.  Now, I’m coming out of hiding to subject all of you to some of it…

The thing that you must never forget about The Cloud is that once information moves to The Cloud, you’ve inherently ceded control of that information to a third party.  You are at their mercy to protect it appropriately, and you’re usually trusting them to do so in an opaque manner.

What does that mean?  Well, start with the favored argument for The Cloud, the mighty “cost savings.”  The Cloud is not cheaper because the providers have figured out some cost-savings magic that’s unavailable to your IT department.  It’s cheaper because their risk tolerances are not aligned to yours: they accept risks that you would not, merely because the mitigation is expensive.

Argument #2 is that it’s faster to turn up capability in the cloud–also a self-deception.  For anything but the most trivial application, the setup, configuration, and roll-out are much more time-consuming than the infrastructure build.  Even when avoiding the infrastructure build produces non-trivial time savings, those savings are instead consumed by contract negotiations, internal build-vs-rent politics, and (hopefully) risk assessments.

Finally, The Cloud introduces a new set of risks inherent in having your information in places you don’t control.  This morning, for example, Bruce Schneier again mentioned the ongoing attempts by the FBI to convince companies like Microsoft/Skype, Facebook, and Twitter to provide backdoor access to unencrypted traffic streams from within their own applications.  These risks are even more pronounced in products where you’re not the customer, but rather the product being sold (e.g. Facebook, Twitter, Skype, etc.).  There, the customer (i.e. the Person Giving Them Money) is an advertiser or the FBI, et al.  Your privacy interests are not (at least in the eyes of Facebook, et al.) Facebook’s problem.

For those of you that like metaphors: in the physical world, I don’t (usually) choose to ride a motorcycle without full safety gear (helmet, jacket, pants, gloves, boots, brain).  I do, however, drive a car with only a minimum of safety gear (seatbelt, brain) because the risk profile is different.  In the Information Security world, I don’t usually advocate putting information in the cloud whose loss or disclosure would impact our ability to ship products or maintain competitive advantage (unless required by law, a Problem I Have), for the same reason.

That’s not to say that I’m opposed to the cloud–I’m actually a fan of it where it makes sense.  That means that I use the cloud where the utility exceeds the risk.  Cost is rarely a factor, to be honest.  But just like any other high-risk activities I engage in, I think it’s important to make honest, informed choices about the risks I’m accepting and then act accordingly.

Dear CloudTards: “Securing” The Cloud isn’t the problem…

@GeorgeResse pointed out this article http://www.infoworld.com/d/cloud-computing/five-facts-every-cloud-computing-pro-should-know-174 from @DavidLinthicum today.  And from a Cloud advocate point of view, I like four of the assertions.  But his point about Cloud Security is off:

“While many are pushing back on cloud computing due to security concerns, cloud computing is, in fact, as safe as or better than most on-premises systems. You must design your system with security, as well as data and application requirements in mind, then support those requirements with the right technology. You can do that in both public or private clouds, as well as traditional systems.”

In a sense, David is right: the ability to develop a relatively secure computing architecture in a cloud environment may, in theory, be reasonably similar to “traditional” computing.  But there are two things I hate about this paragraph.  First, it seems to reflect the naive notion that systems are deployed secure and stay that way until a vulnerability happens.  Second, it completely misses the issue facing security management.  The problems facing management re: The Cloud have nothing to do with the ability to architect “secure.”  They have to do with the ability to manage risk.

A Primer About Information Security and Risk Management

Security, at its fundamental core, is not a problem of poor network architecture or poor software development practices.  Security is a problem of behaviors: those having to do with the interrelation of systems and people.  Managing risk is related, but very different in its nature.  Information risk management is a problem of information quality and decision making around those behaviors.  Information risk management requires:

  • Knowledge about the asset landscape – The data we do have from studies of data breaches and successful IT operations strongly correlates visibility (even the degree of visibility) and variability in the asset landscape with success and failure in IT and IT security.
  • Knowledge about the threat landscape – The types, frequency, strength, capability, and adaptability of the threat agents are among the bits of information required to know and understand risk.
  • Knowledge about the controls landscape – Control information describes the ability to resist threats: not just the technical feasibility of resistance, but also the operational (human skills/resources) contributions to that ability to resist.
  • Knowledge about the impact landscape – Impact information comes from pressures within the organization (things like response costs, downtime, and productivity losses) and from outside the organization (compliance fines, legal judgments, the consequences of IP loss, brand damage…).

In addition, there’s knowledge we synthesize when we consider one landscape in the context of another (vulnerability might be said to be a state we identify when we consider threat, asset, and control landscape information together; risk is what we understand when we consider the information we have from all four).  In Venn diagram terms, it’s where the circles overlap.
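If it helps to see that synthesis in code, here’s a minimal sketch.  The landscape names come from the list above; the 0-to-1 scores, weights, and arithmetic are entirely hypothetical, just to show how vulnerability falls out of three landscapes and risk out of all four:

    # A toy model only: the landscape names are from the list above, but the
    # scoring and the multiplication are made up, not a real methodology.
    from dataclasses import dataclass

    @dataclass
    class Landscapes:
        asset: float    # how visible/well-inventoried the assets are (0..1)
        threat: float   # frequency/capability of threat agents (0..1)
        control: float  # ability to resist threats (0..1)
        impact: float   # expected cost of a loss event, normalized (0..1)

    def vulnerability(l: Landscapes) -> float:
        """Synthesized from the threat, asset, and control landscapes."""
        return l.threat * l.asset * (1.0 - l.control)

    def risk(l: Landscapes) -> float:
        """Requires all four landscapes: exposure weighted by impact."""
        return vulnerability(l) * l.impact

    # Example with invented numbers; real inputs would come from breach data,
    # asset inventories, control assessments, and impact studies.
    print(risk(Landscapes(asset=0.7, threat=0.5, control=0.6, impact=0.8)))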

I’m sorry if this is basic for many of you readers out there, but I thought this content was necessary – because it’s obvious cloud architect types are obsessing over the ability to build a similar technical environment without understanding the basics of managing risk.

What Really Bugs Security Managers About Cloud Computing

So the issue with moving to “the cloud” for a CISO has to do with two basic things:

  1. Information quality
  2. Responsibility (data ownership in CISSP terms).

For information quality, we are concerned with:

  • A – The ability to get reasonably similar information about the technical behavior of our systems, and
  • B – The ability to get human behavior information from both the threat and the controls landscapes.

For Responsibility, we are concerned with:

  • If the information is bad news, who is responsible for what actions?
  • Given threat execution (when the bad news isn’t just an attack, it’s a compromise), where will the buck ultimately stop?

For that last bullet, PCI is sort of establishing a “case law” for us already.  The lesson to take away from the experiences of others is this:  Following the suggestions of CSA documents and Cloud Audit information (excellent, necessary, and as useful as that documentation is/will be) isn’t going to be enough to manage risk in the cloud with the same quality as “traditional” architectures for many people.  And it looks like you will be left as “custodians” of the data regardless of who is paying the W-2 of the guy at the SIEM console.  More colloquially, “Crap will continue to run downhill, you’ve just diverted it a ways upstream.”

Takeaways

The objections to cloud adoption from an information risk management standpoint have nothing to do with the ability to engineer “secure.”  They have to do with the ability to manage risk.  There’s a significant difference there that this sort of advocacy continues to gloss over.  Of course, given how nascent information security and information risk management are as disciplines, if you can transfer full responsibility to a cloud provider who is stupid enough to believe things like “we can secure your systems better than you can,” I say go for it!

Thinking about Cloud Security & Vulnerability Research: Three True Outcomes

When opining on security in “the cloud” we, as an industry, speak very much in terms of real and imagined threat actions.  And that’s a good thing: trying to anticipate security issues is a natural, prudent task.  In Lori McVittie’s blog article, “Risk is not a Synonym for ‘Lack of Security’,” she brings up an example of one of these imagined cloud security issues, a “jailbreak” of the hypervisor.

And that made me think a bit, because essentially what we want to discuss, what we want to know, is this: what will happen not if, but when vulnerabilities in the hypervisor, the cloud API, or some other new technology bit of cloud computing are actually discovered and/or begin to be exploited?

Now in baseball, sabermetricians (those who do big-time baseball analysis) break the game down into three true outcomes: walk, home run, strikeout.  They are called the three true outcomes because they supposedly are the only events that do not involve the defensive team (other than the pitcher and catcher).  Players like Mark McGwire, Adam Dunn, or, if you’re old-school, Rob Deer are described as three true outcome players because their statistics over-represent those outcomes vs. other probable in-play events (like a double or a reach via error or what have you).  Rumor has it that Verizon’s Wade “swing for the fences” Baker was a three true outcome player in college.

In vulnerability research and discovery, I see three true outcomes as well.  Obviously the reality isn’t that an outcome is always one of the three, just as in baseball there are plenty of Ichiro Suzuki or David Eckstein-type players whose play does not lend itself to the three true baseball outcomes.  But for the purposes of discussing cloud risk, any new category of vulnerability can be said to be:

1.)  Fundamentally unfixable (RFID, for example).  If this level of vulnerability is discovered, cloud computing might devolve back into just hosting or SaaS circa 2001 until something is re-engineered.

2.)  Addressed, but continuing to linger due to some fundamental architecture problem that, for reasons of backwards compatibility, cannot be significantly “fixed,” and instead requires patch after patch on a constant basis.

3.)  Addressed and re-engineered so that the probability of similar, future events is dramatically reduced.

Now, about a year ago, I said that for the CISO the move to the cloud was the act of gracefully giving up control.  The good news, in a sense, is that the CISO has no ability to affect which of the three outcomes we’ll see in the future.  But how cloud security issues are addressed long-term, via technology, information disclosure (actually, the entropy of information disclosure), and contract language (how technology and entropy are addressed as part of the SLA): these are the aspects of control the CISO must seek to impact.  Impact that is both contractual and, on her part, procedural, expecting one of the three outcomes to eventually happen.

The technology and information disclosure aspects of that contract with the cloud provider simply must focus on the ability to detect and respond quickly.  Chances are your cloud farm won’t be among the first compromised when an eventual exploit occurs.  But especially if the value of what you’ve moved to the cloud has significance to the broader threat community, that doesn’t mean (as Lori says) you simply accept the risk.  Developing contingency plans based on the three true outcomes can help ensure that the CISO can at least cope with the cloud transition.
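As a sketch of what “contingency plans based on the three true outcomes” might look like in practice, here’s a toy example.  The three outcome categories are from above; the plan text is purely illustrative, not a recommendation:

    # The outcome categories come from the post; the responses are invented,
    # just to show the shape of pre-agreed, outcome-driven contingency plans.
    from enum import Enum

    class VulnOutcome(Enum):
        UNFIXABLE = "fundamentally unfixable"
        LINGERING = "patched over and over, never re-engineered"
        REENGINEERED = "re-engineered; recurrence unlikely"

    CONTINGENCY = {
        VulnOutcome.UNFIXABLE: "Plan an exit back to hosting/in-house until re-engineering happens.",
        VulnOutcome.LINGERING: "Write patch cadence, disclosure, detection, and response into the SLA.",
        VulnOutcome.REENGINEERED: "Verify the fix contractually, then resume normal monitoring.",
    }

    for outcome in VulnOutcome:
        print(f"{outcome.value}: {CONTINGENCY[outcome]}")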

Time To Patch, Patch Significance, & Types of Cloud Computing

Recently, a quote from Qualys CTO Wolfgang Kandek struck me as kind of weird while I was reading Chris Hoff yet again pushing our hot buttons on cloud definitions and the concepts of information security survivability.  Wolfgang says (and IIRC, this was presented at Jericho in SF a couple of weeks ago, too):

In five years, the average time taken by companies to patch vulnerabilities had decreased by only one day, from 60 days to 59 days, at a time when the number of flaws and the speed at which they are being exploited has accelerated from weeks to, in some cases, days. During the same period, the number of IP scanned on an anonymous basis by the company from its customer base had increased from 3 million to a statistically significant 80 million, with the number of vulnerabilities uncovered rocketing from 3 million to 680 million. Of the latter, 72 million were rated by Qualys as being of ‘critical’ severity.

The part that struck me is the claim that the speed of exploitation has accelerated from weeks to days.  It’s not clear if Wolfgang means the speed at which exploit code arises compared to vulnerability disclosure (related to what Qualys calls “half-life”; George Hulme reminds us that “Qualys half-life is the number of days after which the number of vulnerable systems are reduced by 50%”), or the speed at which those exploits are used against Qualys client networks in a significant manner.  The latter would be information of enormous value if Qualys were to release a study on it.  The former, while informative, needs to be compared to what we actually know about the causes of incidents.  So, after reading the Qualys presentation and figuring they’re talking about the former, let’s get it on, shall we?
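To make that half-life definition concrete, here’s a quick back-of-the-envelope sketch.  The 30-day half-life below is a made-up number, used only for illustration; the 59-day average time-to-patch comes from the quote above:

    # Half-life per the Hulme definition quoted above: the number of days
    # after which the count of vulnerable systems drops by 50%.
    def vulnerable_fraction(days_since_disclosure: float, half_life_days: float) -> float:
        """Fraction of systems still vulnerable after a given number of days."""
        return 0.5 ** (days_since_disclosure / half_life_days)

    # With a hypothetical 30-day half-life, a 59-day average time-to-patch
    # leaves roughly a quarter of systems still exposed.
    print(f"{vulnerable_fraction(59, 30):.0%}")  # ~26%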

Sample-size warnings and the regular disclaimers apply, but recent data suggests that where the initial breach “vector” was an exploit against a patchable vulnerability (a very small share of the aggregate breach data available), the patch had been around for six months to over one year.

So given that information, let’s discuss what Chris and Wolfgang are discussing, concentrating on Hoff’s bullets four and five:

4.  While SaaS providers who do “own the entire stack” are in a better position through consolidated multi-tenancy to transfer the responsibility of patching “their” infrastructure and application(s) on your behalf, it doesn’t really mean they do it any better on an application-by-application basis.  If a SaaS provider only has 1-2 apps to manage (with lots of customers) versus an enterprise with hundreds (and lots of customers), the “quality” measurements as they relate to management of defects (from any perspective) would likely look better were you the competent SaaS vendor mentioned in this article.  You can see my point here.

One would hope, along with Qualys, that SaaS providers would be quicker to patch, as we might deduce that SaaS vulnerabilities are of more significance than an unpatched system within an admittedly porous perimeter.  In SaaS you have one centrally located and relatively more exposed code base used by many customers.  If we use a little FAIR as a model: FAIR says that the probability of threat action is driven, at least in part, by the perceived value to the threat agent.  So consider the relative value of a SaaS installation vs. one corporate installation serving the same computing purpose (say, Xero accounting SaaS vs. an “internal” accounting package).  Thinking of the situation in this manner, we may deduce that there is a greater need to “patch more quickly” for a SaaS provider.  But this is a risk issue we can only speculate about; we have to be probabilistic, due to the lack of similar models for this distribution of computing resources (i.e., The Cloud Future ™).
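A rough, FAIR-flavored sketch of that deduction (toy numbers throughout, and a simplification of the actual FAIR taxonomy): if perceived value scales threat event frequency, and loss event frequency is roughly threat event frequency times vulnerability, the shared SaaS code base draws proportionally more loss events even when the code and controls are identical:

    # Simplified FAIR-style arithmetic: LEF ~ TEF x vulnerability, with TEF
    # scaled by the target's perceived value.  All numbers are invented.
    def loss_event_frequency(base_tef: float, perceived_value: float,
                             vulnerability: float) -> float:
        """Expected loss events/year for a target of a given perceived value."""
        return base_tef * perceived_value * vulnerability

    VULN = 0.2  # identical code base and controls, so identical vulnerability
    one_internal_install = loss_event_frequency(10, perceived_value=1.0, vulnerability=VULN)
    shared_saas_install = loss_event_frequency(10, perceived_value=50.0, vulnerability=VULN)
    print(one_internal_install, shared_saas_install)  # 2.0 vs 100.0 events/year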

Of course this assertion, Patch More Quickly is Best, is complicated by existing data suggesting that the vast majority of data breaches are due not to unpatched, known vulnerabilities, but rather to the use of default credentials, previously unknown SQL injection problems, or misconfigured ACLs.

Regarding Hoff’s bullet five, where he says that the cloud is not just SaaS but also Platform as a Service (PaaS) and Infrastructure as a Service (IaaS): we could apply the same line of thinking and hazard that the PaaS and IaaS aspects of “The Cloud Future ™”, deductively, should suffer the same probable frequency of loss events unless/until we cover the absolute basics of risk issue mitigation, asset and event visibility, and asset variability reduction.  In fact, we might speculate that because we’re giving up some control over visibility and variability management, probable Loss Event Frequency should be greater than current state, not smaller, as Qualys would have us hope.

SaaS, PaaS, or IaaS – I just can’t see things getting better until we are able to address the basic, fundamental problems of Information Security.  Rarely, if ever, does adding more complexity result in greater systematic simplicity.
