Best autoresponse message

As Brad Feld says, this is the best auto-responder in a long time:

I am currently out of the office on vacation.

I know I’m supposed to say that I’ll have limited access to email and won’t be able to respond until I return — but that’s not true. My BlackBerry will be with me and I can respond if I need to. And I recognize that I’ll probably need to interrupt my vacation from time to time to deal with something urgent.

That said, I promised my wife that I am going to try to disconnect, get away and enjoy our vacation as much as possible. So, I’m going to experiment with something new. I’m going to leave the decision in your hands:

If your email truly is urgent and you need a response while I’m on vacation, please resend it to interruptyourvacation@example.com and I’ll try to respond to it promptly.

If you think someone else at First Round Capital might be able to help you, feel free to email my assistant, Fiona (fiona@firstround.com) and she’ll try to point you in the right direction.

Otherwise, I’ll respond when I return…

Warm regards,
Josh

It avoids any lies, and drives responsibility and choice onto the sender. You can learn a lot about senders this way. It’s probably better than many background checks.

15 Years of Software Security: Looking Back and Looking Forward

Fifteen years ago, I posted a copy of “Source Code Review Guidelines” to the web. I’d created them for a large bank, because at the time, there was no single document on writing or reviewing for security that was broadly available. (This was about four years before Michael Howard and Dave LeBlanc published Writing Secure Code or Gary McGraw and John Viega published Building Secure Software.)

So I assembled what we knew, and shared it to get feedback and help others. In looking back, the document describes what we can now recognize as an early approach to security development lifecycles, covering design, development, testing and deployment. It even contains a link to the first paper on fuzzing!

Over the past fifteen years, I’ve been involved in software security as a consultant, as the CTO of Reflective, a startup that delivered software security as a service, and as a member of Microsoft’s Security Development Lifecycle team where I focused on improving the way people threat model. I’m now working on usable security and how we integrate it into large-scale software development.

So after 15 years, I wanted to look forward a little at what we’ve learned and deployed, and what the next 15 years might bring. I should be clear that (as always) these are my personal opinions, not those of my employer.

Looking Back

Filling the Buffer for Fun and Profit
I released my guidelines 4 days before Phrack 49 came out with a short article called “Smashing The Stack For Fun And Profit.” Stack smashing wasn’t new. It had been described clearly in 1972 by James P. Anderson in the “Computer Security Technology Planning Study,” and publicly and dramatically demonstrated by the 1988 Morris Worm’s exploitation of fingerd. But Aleph1’s article made the technique accessible and understandable. The last 15 years have been dominated by important bugs which share the characteristics of being easily demonstrated as “undesired functionality” and being relatively easy to fix, as nothing should really depend on them.
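For readers who never watched this play out in code, here’s a minimal sketch of the bug class in C; the function names and the 64-byte buffer are invented for illustration, not taken from any real codebase:

    #include <stdio.h>
    #include <string.h>

    /* The classic stack smash: strcpy() does no bounds checking, so any
     * input longer than 63 bytes spills past buf into adjacent stack
     * memory, including the saved return address. */
    void greet_unsafe(const char *name)
    {
        char buf[64];
        strcpy(buf, name);               /* unbounded copy into a fixed buffer */
        printf("hello, %s\n", buf);
    }

    /* A bounded copy removes the overflow entirely. */
    void greet_safer(const char *name)
    {
        char buf[64];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';     /* strncpy may not NUL-terminate */
        printf("hello, %s\n", buf);
    }

Nothing legitimate depends on the overflow, which is part of why these bugs, once demonstrated, are comparatively easy to fix.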

The vuln and patch cycle
As a side effect of easily demonstrated memory corruption, we became accustomed to a cycle of proof-of-concept, sometimes a media cycle and usually a vendor response that fixed the issue. Early on, vendors ignored the bug reports or threatened vulnerability finders (who sometimes sounded like they were trying to blackmail vendors) and so we developed a culture of full disclosure, where researchers just made their discoveries available to the public. Some vendors set up processes for accepting security bug reports, with a few now offering money for such vulnerabilities, and we have a range of ways to describe various approaches to disclosure. Along the way, we created the CVE to help us talk about these vulnerabilities.

In some recent work, we discovered that the phrase “CVE-style vulnerability” was a clear descriptor that cut through a lot of discussion about what was meant by “vulnerability.” The need for terms to describe types of disclosure and vulnerabilities is an interesting window into how often we talk about them.

The industrialization of security
One effect of memory corruption vulnerabilities was that it was easy to see that unspecified functionality was a bug. Those bugs were things that developers would fix. There’s a longstanding, formalist perspective that “A program that has not been specified cannot be incorrect; it can only be surprising.” (“Proving a Computer System Secure“) That “formalist” perspective held us back from fixing a great many security issues. Sometimes the right behavior was hard to specify in advance. Good specifications are always tremendously expensive (although that’s sometimes still cheaper than not having them). When we started calling those things bugs, we started to fix them. And when we started to fix bugs, we got people interested in practical ways to reduce the number of those bugs. We had to organize our approaches, and discover which ones worked. Microsoft started sharing lots of its experience before I joined up, and that’s helped a great many organizations get started doing software security “at scale.”

Another aspect of the industrialization of security is the massive growth of security conferences. There are, again, many types. There are hacker cons, there are vulnerability researcher cons, and there are industry events like RSA. There’s also a rise in academic conferences. All of these (except BlackHat-style conferences) existed in 1996, but their growth has been spectacular.

Looking forward in software security

Memory corruption
The first thing that I expect will change is our focus on memory corruption vulnerabilities. We’re getting better at finding these early in development, even with weakly typed languages, and better at building platforms with randomization built in to make the remainder harder to exploit. We’ll see a resurgence of command injection, design flaws, and a set of things that I’m starting to think about as feature abuse. That includes things like Autorun, JavaScript in PDFs (and heck, maybe JavaScript in web pages), and also things like spam.
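To make the command injection point concrete, here’s a minimal, hypothetical sketch in C; the file-listing scenario and function names are mine, not from any particular product:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Command injection: the filename is pasted into a shell command
     * line, so an input like "x; rm -rf ~" runs a second command of the
     * sender's choosing. */
    void list_file_unsafe(const char *filename)
    {
        char cmd[256];
        snprintf(cmd, sizeof(cmd), "ls -l %s", filename);
        system(cmd);                     /* the shell interprets metacharacters */
    }

    /* Passing an argument vector directly avoids the shell, so shell
     * metacharacters in the filename are just bytes in one argument. */
    void list_file_safer(const char *filename)
    {
        if (fork() == 0) {
            execlp("ls", "ls", "-l", filename, (char *)NULL);
            _exit(127);                  /* only reached if exec fails */
        }
    }

Unlike a buffer overflow, nothing here corrupts memory; the program does exactly what it was told to do, which is part of why this class will outlive our memory-corruption defenses.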

Human factors
Human factors in security will become even more obviously important, as more and more decisions will be required of the person because the computer just doesn’t know. Making good decisions is hard, and most of the people we’ll ask to make decisions are not experts and reasonably prefer to just get their job done. We’re starting to see patterns like the “gold bars” and advice like “NEAT.” I expect we’ll learn a lot about how attacks work and how to build defenses, and coalesce around a set of reasonable expectations of someone using a computer. Those expectations will be slimmer than security experts would prefer, but good science and data will help make reasonable expectations clear.

Embedded systems
As software gets embedded in everything, so will flaws. Embedded systems will come with embedded flaws. The problems will hit not just Apple or Android, but cars, centrifuges, medical devices, and everything with code in it. Which will be a good approximation of everything. One thing we’ve seen is that applying modern vulnerability finding techniques to software released without any security testing is like kicking puppies. They’re just not ready for it. Actually, that’s a little unfair. It’s more like throwing puppies into a tank full of laser-equipped sharks. Most things will not have update mechanisms for a while, and when they do, updates will increasingly be a battleground.

Patch Trouble
Apple already forces you to upgrade to the “latest and greatest,” and agree to the new EULA, before you get a security update. DRM schemes will block access to content if you haven’t updated. The pressure to accept updates will be intense. Consumer protection issues will start to come up, and things like the Etisalat update for BlackBerry will become more common. These combined updates will affect people’s willingness to accept updates and to close windows of software vulnerability.

EULAs
EULA wars will heat up as bad guys get users to click through contracts forbidding them from removing the software. Those bad guys will include actual malware distributors, Middle Eastern telecoms companies, and a lot of organizations that fall into a grey area.

Privacy
The interplay between privacy and security will get a lot more complex and nuanced as our public discourse gets less so. Our software will increasingly be able to extract all sorts of data, but also to act on our behalf in all sorts of ways. Compromised software will scribble racist comments on your Facebook wall, and companies like Social Intelligence will store those comments for you to justify ever after.

Careers
Careers in software security will become increasingly diverse. It’s already possible to specialize in fuzzing, in static or dynamic analysis, in threat modeling, in security testing, training, etc. We’ll see lots more emerge over the next fifteen years.

Things we won’t see

We won’t see substantially better languages make the problem go away. We may move it around, and we may eliminate some of it, but PHP is the new C because it’s easy to write quick and dirty code. We’ll have cool new languages with nifty features, running on top of resilient platforms. Clever attackers will continue to find ways to make things behave unexpectedly.

Nor will we see a lack of interesting controversies.

(Not) looking into the abyss

There are a couple of issues I’m not touching at all. They include cloud, because I don’t know what I want to say, and cyberwar, because I know what I don’t want to say. I expect both to be interesting.

Your thoughts?

So, that’s what I think I’ve seen, and that’s what I think I see coming. Did I miss important stories along the way? Are there important trends that will matter that I missed?

[Editor’s note: Updated to clarify the kicking puppy analogy.]

Change.

I’ve left Verizon.  A lot of folks have come up to me and asked, so I thought I’d indulge in a rather self-important blog-post and explain something:

It wasn’t about Verizon, but about the opportunity I’ve taken.

Wade, Chris, Hylender, Marc, Joe, Dave, Dr. Tippett & all the rest – they were all really, really great to me.  I love the RISK team.  Loved my job.  But the opportunity came up to do something, well, awesome.  So I left a great group and joined a financial institution as Director of Operational Risk.  If you ever get offered a chance to work for Wade at Verizon Terremark, I HIGHLY recommend it.

I’m looking forward to the change, and I’m looking forward to re-aligning my priorities.  Now that BlackHat & Defcon are over, now that Metricon is behind me, I’ve already started to scale back on SIRA management, reduced my role in the CSDM, and am in a refocusing phase.  I’ve missed blogging, and as I charge forward in “the real world,” I hope to share some scars and successes with you in this medium (data-driven, of course).


Nymwars: Thoughts on Google+

There’s something important happening around Google+. It’s the start of a rebellion against the idea of “government authorized names.” (A lot of folks foolishly let the other side frame this as “real names,” but a real name is a name someone actually calls you.)

Let’s start with “Why Facebook and Google’s Concept of ‘Real Names’ Is Revolutionary” by Alexis Madrigal. He explains why the idea is not only not natural, but revolutionary. Then move on to “Why it Matters: Google+ and Diversity, part 2” by Jon Pincus. From there, see danah boyd explain that “‘Real Names’ Policies Are an Abuse of Power.” One natural reaction is “If you don’t like it, don’t use it. It’s that simple.” ORLY? As Alice Marwick explains, it’s really not that simple. That’s why people like Skud are continuing to fight, as shown in “Skud vs. Google+, round two.”

What’s the outcome? Egypt, Yemen and Saudi Arabia require real names. Meanwhile, South Korea is abandoning its “real name” internet policy.

So how do we get there? Identity Woman suggested that we have a “‘Million’ Persona March on Google,” but her account has now been suspended. Skud posted “Nymwars strategy.”

This is important stuff for how we shape the future of the internet, and how the future of the internet shapes our lives. Even if you only use one name, you should get involved. Get involved by understanding why names matter, and get involved by calling people what they want to be called, not what Google wants to call them.

Securosis goes New School

The fine folks at Securosis are starting a blog series on “Fact-based Network Security: Metrics and the Pursuit of Prioritization” in a couple of weeks.  Sounds pretty New School to me!  I suggest that you all check it out and participate in the dialog.  Should be interesting and thought provoking.

[Edit — fixed my misspelling of the company name.  D’oh!]