Among those who understand that software is, almost without exception, full of security holes, there are at least three major orientations. I’ve recently seen three articles, all of which I wanted to talk about, but before I do I should explain how I’m using the word orientation, and the connotations it carries.
As used by John Boyd, orientation is the interaction of cultural traditions, genetic heritage, new information, previous experience, and analysis and synthesis, all of which filter new information as decisions are being made. Understanding the orientation of a person or organization is a powerful way to predict how they will respond to new circumstances. Orientation is often presented as part of the Observe, Orient, Decide, Act (OODA) loop. The OODA loop is often treated as a tactical one, but Boyd applied it at every level, from a knife fight to grand strategy. I am using orientation in that broad sense here, and will assign grossly oversimplified labels to three such orientations.
I realized after writing this that all three of the people I’m quoting here are vastly smarter than I perhaps imply. My goal is not to attack any of them, but to contrast some of the background that informs their approaches. To draw out this contrast, I quote a little unfairly.
After the break, a bit of inside baseball on security orientations.
- Government: Brian Snow presented a paper entitled “We Need Assurance!” at an ACSAC conference. He opens:
When will we be secure? Nobody knows for sure – but it cannot happen before commercial security products and services possess not only enough functionality to satisfy customers’ stated needs, but also sufficient assurance of quality, reliability, safety, and appropriateness for use.
This is a stunning set of claims. Snow is asserting that commercial security products aren’t good enough to be used. But clearly, products are good enough to be used: commercial organizations choose to devote their scarce resources to them rather than to other things. So there is a very important sense in which today’s products are perfectly appropriate for use. Snow then makes a set of comments about emerging threats which he believes (and I agree) will demonstrate that today’s products are insufficient. Behind those threats is, generally, organized crime, using techniques such as phishing, malware, and rootkits to carry out identity theft and other forms of fraud. Our defenses are struggling to keep up.
The approach makes lots of sense from a historical perspective, and also from the perspective of agencies whose core mission is security. They wish the commercial world would slow down and stop building so many features, which keep distracting those agencies from security. This is slightly unfair to Snow, who also says:
Many vendors tell me that users are not willing to pay for assurance in commercial security products; I would remind you that Toyota and Honda penetrated U.S. markets in the 70’s by differentiating themselves from other brands by improving reliability and quality!
Gunnar Peterson has some good thoughts on the Snow paper in “The Road to Assurance.” One final comment: All the assurance in the world won’t fix liability transfers.
- The hacker orientation is deeply focused on the tools and techniques of exploration and exploitation at the micro and macro levels. What matters is the technical details. That’s where the truth of any security claim is proven or disproven. The orientation is shown by a comment like:
Coming back to audit “random” closed source code after having worked on MS binaries is a bit like auditing a “random” open-source project after having spent time on well-audited bits of OpenSSH. You’re surprised that things can be so easy.
(From Halvar Flake, “Microsoft Is Moving GUI Code.”) I don’t want to peg Halvar as a pure example of the hacker orientation. The sentence preceding my quote includes the phrase “operate under market conditions and thus can’t pump a few billion into security.” But his willingness to do deep audits, and his inability to come out and either praise Microsoft or admit that any product’s security is actually any good, reveal a deep, deep orientation.
Incidentally, I’ve been meaning to mention that Halvar is blogging at “ADD / XOR / ROL,” and I’ve added him to the blogroll, thanks to the Matasano folks. As you may know, I keep my blogroll very short, and urge you to read all of them.
- I think the most interesting view of security is coming from the economics camp. It starts from the assumption that people are behaving rationally, if only we can understand their motivations. This approach is taken by Jeremy Epstein in what I think of as his “13 reasons” talk. I’ve seen him present it in a few ways, most recently in “SOA Security.” If you find the intro slow, skim down to the numbered list, and read carefully from there through the end of the article.
Having spent some time pondering this question – why so few people ask whether products are secure – I’ve actually taken the liberty of assembling a list of 13 potential responses.
- People assume the vendor takes care of it. When buying a new car, I don’t ask about the engineering processes used in the design; I assume Ford or Toyota knows more about how to design cars than I do. Why should the purchaser of software be responsible for asking how secure it is?
- They don’t know that they should ask. Some IT organizations (even in large companies) lack a dedicated internal security staff; instead, security is one aspect of everyone’s job. No one person has enough background to know what to ask, or how to make sense of the answers.
So there you have it: three interesting views of software security, all with interesting nuance, and all worth reading. Each is a great example of an important orientation which you can use to ~~stereotype and annoy~~ better understand your colleagues.