Pulling A Stiennon: In The Cloud, The DMZ Is Dead

Calling something in the cloud a DMZ is just weird. Realistically, everything is a DMZ. After all, you are sharing data center space with all of your provider's other customers, and if your provider is using virtualization, you are sharing hardware with them too. As such, each and every network segment you have is (or should be) isolated, allowing only a very small set of ports, protocols, IPs, etc. So in a very real sense, in a public cloud every network segment is a DMZ. And when everything is a DMZ, calling anything a DMZ becomes pointless.

It's better to call the segments by their function, e.g. web, app server, db, cache, mq, or whatever it is that the services in that security group are doing. This has the advantage of being easier to understand and closer to self-documenting, and it doesn't imply a level of security that doesn't exist, the way a term like DMZ does. Naming segments by their purpose also points the security practitioner toward the right mindset about what types of traffic should or shouldn't be allowed. All in all, a very Jericho Forum kind of mentality.
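
To make that concrete, here is a minimal, hypothetical sketch in Python (plain data, not any cloud provider's actual API; the tier names, ports, and address range are all invented for illustration) of function-named segments, each with a default-deny posture and a very small allow-list:

# Hypothetical function-named segments, each with an explicit, minimal allow-list.
# Sources are either another tier's name or a CIDR block; everything else is denied.
ALLOWED_INGRESS = {
    "web":   [("0.0.0.0/0", "tcp", 443)],   # only TLS from the internet
    "app":   [("web", "tcp", 8080)],        # only the web tier may call the app tier
    "db":    [("app", "tcp", 5432)],        # only the app tier may reach the database
    "cache": [("app", "tcp", 6379)],
    "mq":    [("app", "tcp", 5672)],
}

def is_allowed(dst_tier, src, proto, port):
    """Default deny: a flow is allowed only if explicitly listed for the destination tier."""
    return (src, proto, port) in ALLOWED_INGRESS.get(dst_tier, [])

# The segment names document intent: web may reach app on 8080,
# but nothing lets "web" talk to "db" directly.
assert is_allowed("app", "web", "tcp", 8080)
assert not is_allowed("db", "web", "tcp", 5432)

The point is the naming, not the code: "db" tells you what should and shouldn't be talking to it in a way "DMZ-3" never will.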

[ETA: I had completely forgotten that Hoff covered this same issue in his Commode Computing talk last year. In particular see http://pic.twitter.com/wrx7F17R]

Continuous Deployment and Security

From an operations and security perspective, continuous deployment is either the best idea since sliced bread or the worst idea since organic spray pancakes in a can. It's all a matter of execution. Continuous deployment is the logical extension of the Agile development methodology. Adam recently linked to a study showing that a 25% increase in features led to a 200% increase in code complexity, so by making change sets smaller we dramatically decrease the complexity of each release. This translates to a much lower chance of failure. Smaller change sets also mean that rolling back in the case of a failure state is much easier. Finally, smaller change sets make it easier to identify what broke the unit and integration tests, and they are far easier to code review, which increases the chances of catching serious issues prior to deployment. All of this points to building systems that are more stable, more reliable, have less downtime, and are easier to secure. This assumes, of course, that you are doing continuous deployment well.

In order for continuous deployment (and DevOps in general) to be successful, there needs to be consistent process and automation. There are lots of other factors as well, such as qualified developers, proper monitoring, and the right deployment tools, but those are for another discussion.

Consistent processes are essential if you are to guarantee that the deployment happens the same way every time. To put it bluntly, when it comes to operations and security, variation is evil. Look to Gene Kim's research (Visible Ops, Visible Ops Security) or more traditional manufacturing methodologies like Six Sigma for a deep dive into why variation is so very, very bad. The short version, though, is that in manufacturing, variation means products you can't sell. In IT, variation means downtime, performance issues, and security issues. At the most basic level, if you are making changes and also making changes to how you make the changes, you create a situation that is much harder to troubleshoot. This translates to longer incident response times and longer times to recovery, which nobody wants, especially in an online business.

The easiest way to keep the deployment process consistent is to remove the human element as much as possible. In other words, automate as much of it as possible. This has the added advantage of freeing the humans to review errors and identify potential issues faster. It doesn't matter which automation mechanism you use as long as it's stable and supports your operating environment well. Ideally, it will either be the same system already being used by the operations and applications teams (e.g. Chef, Puppet, CFEngine) or one that can be integrated with those systems (e.g. Hudson/Jenkins).
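
As a rough illustration of the idea (a hypothetical Python sketch, not Chef, Puppet, or Jenkins themselves; every function name and path here is invented), the goal is that a deploy is the same fixed sequence of steps, in the same order, every single time, with the humans reading the logs rather than typing the commands:

import logging
import sys

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("deploy")

def fetch_artifact(version):
    log.info("fetching build artifact %s", version)
    return "/tmp/myapp-%s.tar.gz" % version      # placeholder path

def apply_configuration(artifact):
    log.info("applying configuration management for %s", artifact)
    # in a real pipeline this step would invoke Chef/Puppet/CFEngine or similar

def verify_health():
    log.info("running post-deploy health checks")
    return True                                  # placeholder check

def deploy(version):
    # Identical steps, identical order, every deploy; no ad-hoc human variation.
    artifact = fetch_artifact(version)
    apply_configuration(artifact)
    if not verify_health():
        log.error("health check failed for %s, rolling back", version)
        return 1
    log.info("deployed %s successfully", version)
    return 0

if __name__ == "__main__":
    sys.exit(deploy(sys.argv[1] if len(sys.argv) > 1 else "1.0.0"))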

With good check-in/build release messages, you even get automated logging for your change management systems and updates to your configuration management database (CMDB).
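
For example (again a hypothetical sketch; the field names, and printing rather than posting to a real CMDB, are mine), the same pipeline can turn the commit and build metadata it already has into a change record:

import json
from datetime import datetime, timezone

def change_record(commit_sha, message, build_id, environment):
    # Build a change-management/CMDB entry from metadata the pipeline already has.
    return json.dumps({
        "change_id": "%s-%s" % (environment, build_id),
        "commit": commit_sha,
        "summary": message.splitlines()[0],      # first line of the commit message
        "environment": environment,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    })

# In practice the pipeline would send this to the change-management system of record.
print(change_record("a1b2c3d", "Fix session timeout handling", "build-842", "production"))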

Some random cloudy thinking

Thanks to the announcement of Apple’s iCloud, I’ve been forced to answer several inquiries about The Cloud this week.  Now, I’m coming out of hiding to subject all of you to some of it…

The thing that you must never forget about The Cloud is that once information moves to The Cloud, you’ve inherently ceded control of that information to a third party and are at their mercy to protect it appropriately–usually trusting them to do so in an opaque manner.

What does that mean?  Well, start with the favored argument for The Cloud, the mighty “cost savings.”  The Cloud is not cheaper because the providers have figured out some cost savings magic that’s not available to your IT department.  It’s cheaper because their risk tolerances are not aligned to yours, so they accept risks that you would not merely because the mitigation is expensive.

Argument #2 is that it's faster to turn up capability in the cloud–also a self-deception.  For anything but the most trivial application, the setup, configuration, and roll-out are much more time-consuming than the infrastructure build.  Even when avoiding the infrastructure build produces non-trivial time savings, those savings are instead consumed by contract negotiations, internal build-vs-rent politics, and (hopefully) risk assessments.

Finally, The Cloud introduces a new set of risks inherent in having your information in places you don't control.  This morning, for example, Bruce Schneier again mentioned the ongoing attempts by the FBI to convince companies like Microsoft/Skype, Facebook and Twitter to provide backdoor access to unencrypted traffic streams from within their own applications.  These risks are even more pronounced in products where you're not the customer, but rather the product being sold (e.g. Facebook, Twitter, Skype, etc.).  There, the customer (i.e. the Person Giving Them Money) is an advertiser or the FBI, et al.  Your privacy interests are not (at least in the eyes of Facebook, et al.) Facebook's problem.

For those of you that like metaphors, in the physical world, I don’t (usually) choose to ride a motorcycle without full safety gear (helmet, jacket, pants, gloves, boots, brain).  I do, however, drive a car with only a minimum of safety gear (seatbelt, brain) because the risk profile is different.  In the Information Security world, I don’t usually advocate putting information whose loss or disclosure would impact our ability to ship products or maintain competitive advantage in the cloud (unless required by law, a Problem I Have) for the same reason.

That’s not to say that I’m opposed to the cloud–I’m actually a fan of it where it makes sense.  That means that I use the cloud where the utility exceeds the risk.  Cost is rarely a factor, to be honest.  But just like any other high-risk activities I engage in, I think it’s important to make honest, informed choices about the risks I’m accepting and then act accordingly.

Cloudiots on Parade

UPDATE: Should have known Chris Hoff would have been all over this already. From the Twitter Conversation I missed last night:

Chris, I award you an honorary NewSchool diploma for that one.
——————————————————————————-

From: Amazon Says Cloud Beats Data Center Security, where Steve Riley says, "in no uncertain terms: it's more secure there than in your data center."  Groovy.  I'm ready to listen.  Steve's proof?

AWS is working on an Internet protocol security (IPsec) tunnel connection between EC2 and a customer’s data center to allow a direct, management network to EC2 virtual machines.

Well, bad guys might as well give up their Metasploit now, huh?  Pack it in, fellas, Amazon's got IPsec tunnels!

Any virtual machine generating communications traffic is forced to route the traffic off the host server and onto the data center’s physical network, where it can be inspected. A virtual machine’s attempt to communicate with another virtual machine on the same server is refused. “We prohibit instance-to-instance communication,” another security measure, Riley said.

"Inspection." "Refused." "Prohibit instance-to-instance communication."  These are all relatively soothing words to some, and granted, it's kind of *all* we can do – but to outright say "cloud is more secure," that's a pretty big claim.  And one that needs to be substantiated by, oh, what's the word I'm looking for… data?  Or even a logical model would be interesting, really.

Sorry Steve, I’m NewSchool, I can’t just take your word for it.

The problem is that our current ability to inspect rarely prevents any significant threat, and it is very difficult to operate efficiently as a detective control.  Refusing/prohibiting non-specified intra-VM communication is great.  Happy to hear about it.  And I'm thrilled that there's never, ever been any vulnerability in any of the associated code, and that it's the bestest-estest ever and will never ever have any other vulnerabilities in it.

Look, I’m not saying that using the cloud can’t meet your risk tolerance.  I’m cool with cloud computing.  I’m not saying “run away from the cloud ahhhhhhhh” or any such nonsense.

What I am saying is that, from what we know about software and network security, I find it hard to believe that adding (non-security) computing functions and complexity makes things *more* secure than an otherwise identical environment *without* that extra computing.

Information Security is not “there’s a weak girder in a bridge so architect a solution to reinforce the bridge”.  But unfortunately I have this sinking feeling that as long as the “cloud security” discussion is dominated by IT architects with half a security clue presenting these sorts of engineering solutions with that sort of mindset, we’re just going to have to live with them missing the point.

Illogical Cloud Positivism

Last we learned, Peter Coffee was Director of Platform Research for salesforce.com.  He also blogs on their corporate weblog, CloudBlog, a blog that promises “Insights on the Future of Cloud Computing”.
He has a post up from last week called "Private Clouds, Flat Earths, and Unicorns" in which he tries to "bust some myths" about cloud security. Now, corporate blogs are almost always light on content, but at least most of the time they are banal, emasculated vagaries that can be neither too stupid nor particularly insightful.  Well, not this one.
This blog post is so dangerously filled with complete and utter cowpie that I could not but respond here – it is fundamentally contrary to the concerns James Arlen and I wrote for the IRM chapter of the CSA document.
It is a completely Cloud-tarded article.
WHAT?  A SAAS PROVIDER RAILING AGAINST PRIVATE CLOUDS?!?!  I CAN’T BELIEVE IT!!!
First, Peter asserts that when IT managers are surveyed and state a strong preference for "private clouds," that preference is a desire, not a choice, and he subsequently ridicules IT managers for having that desire.  His claim is that:
“To have physical possession of the data, you also have to own and maintain the data storage hardware and software.”
This is a dangerous assertion for a cloud provider to make.  The semantic game he's playing here is around the word "possession."  I'll agree that it is impossible to have physical possession without ownership of the infrastructure.  Fine.  But this is NOT what security and responsibility (both ethical and legal responsibility) in cloud computing are about.  Cloud Risk Management is not about possession, it is about custodianship.  If I put credit card numbers in your (his) cloud, I am still, in the eyes of class action lawsuits at least, their custodian.  Looking at all forms of loss associated with a data breach, there is no pure, 100% transfer of risk.  And *this* is the problem IT managers face.
COFFEE’S NAIVE IMPACT MODELING
The desire for security operations (more below) stems solely from the fact that risk is not completely transferred.  What do I mean?  If my bank puts my banking data in Salesforce.com and there's a breach, yes – I'm going to be mad at Salesforce.  But I'm also going to pull my money (and its earning potential) out of my bank, because some Cloudtard over there decided that this was a good idea.
In other words (use ISO 27005, FAIR, VERIS, whatever), Primary Loss Factors are only half the impact model.  And maybe the cloud provider owns some of that loss, maybe they don't.  Secondary Loss Factors are still borne solely by those who use the cloud provider.
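A toy sketch of that point (loosely in the spirit of FAIR-style loss modeling; the function and all of the numbers are invented for illustration): even if the provider contractually absorbed every dollar of primary loss, the secondary losses still land on you.

def expected_annual_loss(frequency, primary_loss, secondary_loss, primary_transferred=0.0):
    """frequency: expected breaches per year; primary_transferred: share of primary loss the provider absorbs (0..1)."""
    retained_primary = primary_loss * (1.0 - primary_transferred)
    return frequency * (retained_primary + secondary_loss)

# Even with 100% of primary loss transferred to the cloud provider,
# the secondary losses (customer churn, fines, reputation) remain yours:
print(expected_annual_loss(frequency=0.2, primary_loss=500_000,
                           secondary_loss=2_000_000, primary_transferred=1.0))   # 400000.0
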
BEING GOOD AT OPSEC IS A CRITICAL COMPONENT IN RISK MANAGEMENT
“If you have operational control of the infrastructure you have to employ and supervise a “team of experts” who spend most of their time waiting to respond to critical but rare events.”
One has to wonder how often Peter talks to his own OpSec guys.  I imagine (sincerely hope) that the CIRT team at Salesforce absolutely cringed when they read this.  Indeed, my sympathies go out to you: your Supreme Galactic Director Of Platform just called you a mostly useless operating expense, and you may have a little internal selling to do.
That aside, anyone who has heard me speak over the last year knows the stress I put on understanding your OpSec capabilities as a component of managing risk.  I won't rehash those presentations (they're online – use The Google), but understanding both the cloud provider's capabilities and your ability to interact with them – that is the crux of managing risk in the cloud.
ILLOGICAL CLOUD POSITIVISM
“Security in cloud services can be constructed, maintained and operated at levels that are far beyond what’s cost-effective for almost any individual company or organization.”
The irony of this quote, of course, is that his linked supporting evidence is SalesForce’s statement of ISO 27001 compliance.  ’nuff said.
But in regard to the actual argument, the answer is: "maybe."  If InfoSec is the study of the collision of multiple complex systems, and the cloud provider has a much more complex system to manage than you do on your own, then the answer must be "maybe."  To support such a statement, we would need models that tell us something about cost-to-secure per dollar.  I haven't seen much research in this area, and Coffee certainly doesn't link to any.
COMPLIANCE ISN’T A “ROLES” AUDIT
“The discipline and clarity of service invocations in true cloud environments can greatly aid the control of access, and the auditability of actions, that are dauntingly expensive and complex to achieve in traditional IT settings.”
I honestly have no idea what this means.  I take it that he’s saying that proof of access control and audit evidence is easier to gain just by casually tossing an email in the direction of your SaaS buddy.  But in terms of regulatory compliance (PCI DSS, HIPAA, OTHER ALPHABET SOUP) my experience is vastly different.
HEY, TECHNOLOGY IS TECHNOLOGY, RIGHT?
“Customization and integration of cloud services are neither intrinsically better nor inherently worse than the capabilities of an on-premise stack.”
This, of course, when viewed in the context of the top two points Coffee makes, has contradictory implications for the purposes of risk management.  Let me sum up what Peter says to me when I read this article:
“The cloud is easy to use. The cloud is easier to secure.  The cloud is friendly to auditors.” and   “The cloud is neither better nor worse than securing data on site.”
I’M STILL OK WITH THE CLOUD
With my years of SMB F.I. audit/PenTest experience, I don't disagree that cloud computing may make sense from a risk management standpoint.  I've been appalled at times by the one guy in the middle-of-nowhere Midwest who's in charge of $200 million in credit union/pension fund assets.  The other thing to remember is that, in many ways, these SMB FI folks have been offloading financial machinations to partners for years – they are already "cloud users" by some informal, loose definition.  But it's an awfully cavalier attitude Mr. Coffee takes here with us, making these assertions, especially with no real supporting evidence.  Welcome to the NewSchool, Peter Coffee.  In God We Trust; everyone else brings data.

Thinking about Cloud Security & Vulnerability Research: Three True Outcomes

When opining on security in "the cloud," we, as an industry, speak very much in terms of real and imagined threat actions.  And that's a good thing: trying to anticipate security issues is a natural, prudent task. In Lori McVittie's blog article, "Risk is not a Synonym for 'Lack of Security'," she brings up an example of one of these imagined cloud security issues, a "jailbreak" of the hypervisor.

And that made me think a bit, because essentially what we want to discuss, what we want to know, is: "what will happen not if, but when, vulnerabilities in the hypervisor, or the cloud API, or some new technology bit of cloud computing are actually discovered and/or begin to be exploited?"

Now in baseball, sabermetricians (those who do big-time baseball analysis) break the game down into three true outcomes – walk, home run, strikeout.  They are called the three true outcomes because they supposedly are the only events that do not involve the defensive team (other than the pitcher and catcher).  Players like Mark McGwire, Adam Dunn, or, if you're old-school, Rob Deer are described as three true outcome players because their statistics over-represent those outcomes vs. other probable in-play events (like a double, or reaching via error, or what have you).  Rumor has it that Verizon's Wade "swing for the fences" Baker was a three true outcome player in college.

In vulnerability research and discovery, I see three true outcomes as well.  Obviously the reality isn't that an outcome is always one of the three, just as in baseball there are plenty of Ichiro Suzuki or David Eckstein-type players whose play does not lend itself to a propensity towards the three true baseball outcomes.  But for the purposes of discussing cloud risk, I see that any new category of vulnerability can be said to be:

1.)  Fundamentally unfixable (RFID, for example).  If this level of vulnerability is discovered, cloud computing might devolve back into just hosting or SaaS circa 2001 until something is re-engineered.

2.)  Addressed but continues to linger due to some fundamental architecture problem that, for reasons of backwards compatibility, cannot be significantly “fixed”, but rather involves patches over and over again on a constant basis.

3.)  Addressed and re-engineered so that the probability of similar, future events is dramatically reduced.

Now, about a year ago, I said that for the CISO, the move to the cloud was the act of gracefully giving up control. The good news is that the CISO really has no ability to affect which of the three outcomes we will see in the future.  But how cloud security issues are addressed long-term, via technology, information disclosure (actually, the entropy of information disclosure), and contract language (how technology and entropy are addressed as part of the SLA) – these are the aspects of control the CISO must seek to impact.  That impact is both contractual and, on her part, procedural: expecting one of the three outcomes to eventually happen.

The technology and information disclosure aspects of that contract with the cloud provider, well, they simply must focus on the ability to detect and respond quickly.  Chances are your cloud farm won’t be among the first compromised when an eventual exploit occurs.  But, especially if the value of what you’ve moved to the cloud has significance to the broader threat community, that doesn’t mean (as Lori says) you simply accept the risk. Developing contingency plans based on the three true outcomes can help assure that the CISO can at least cope with the cloud transition.
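
One way to make that contingency planning concrete (a hypothetical sketch; the outcome names map to the three categories above, and the response actions are invented examples): decide ahead of time what you will do under each of the three outcomes, so the decision isn't being invented mid-incident.

# Hypothetical mapping of the three true outcomes to pre-agreed contingency postures.
CONTINGENCY_PLANS = {
    "fundamentally_unfixable": [
        "invoke contract exit/termination clauses",
        "repatriate or re-host the affected workloads",
    ],
    "lingering_architectural_flaw": [
        "negotiate disclosure and patch-notification terms into the SLA",
        "increase monitoring and shorten detection/response targets",
    ],
    "addressed_and_reengineered": [
        "verify the provider's fix, then resume normal operations",
    ],
}

def plan_for(outcome):
    """Return the pre-agreed actions for a given outcome classification."""
    return CONTINGENCY_PLANS.get(outcome, ["escalate: unclassified outcome"])

print(plan_for("lingering_architectural_flaw"))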