Survey for How to Measure Anything In Cybersecurity Risk

This is a survey from Doug Hubbard, author of How To Measure Anything. He is
currently writing a new book with Richard Seiersen (GM of Cyber Security at
GE Healthcare) titled How to Measure Anything in Cybersecurity Risk. As part of
the research for the book, they are asking for your assistance as an
information security professional by taking the following survey, which is
estimated to take 10 to 25 minutes to complete. In return for participating,
you will be invited to a webinar offering a sneak peek at the book’s content,
and you will receive a summary report of the results. The survey also includes
requests for feedback on the survey itself.

https://www.surveymonkey.com/r/VMHDQ7N

A Mini-Review of "The Practice of Network Security Monitoring"

Recently the kind folks at No Starch Press sent me a review copy of Rich Bejtlich’s newest book, The Practice of Network Security Monitoring, and I can’t recommend it enough. It is well worth reading from a theory perspective, but where it really shines is in digging into the nuts and bolts of building an NSM program from the ground up. He has essentially built a full end-to-end tutorial on a broad variety of tools (especially open source ones) that will help with every aspect of the program, from collection to analysis to reporting.

As someone who used to own security monitoring and incident response for various organizations, I found the book a great refresher on the whys and wherefores of building an NSM program, and it was really interesting to see how much the tools have evolved over the last 10 years or so since I was in the trenches with the bits and bytes. Regardless of your level of experience, this is a great resource and will be a valuable reference work for years to come. Go read it…

Active Defense: Show me the Money!

Over the last few days, a lot of folks in my Twitter feed have been talking about “active defense.” Since I can’t compress this into 140 characters, I wanted to comment quickly: show me the money. And if you can’t show me the money, show me the data.

First, I’m unsure what’s actually meant by active defense. Do the folks arguing have a rough consensus on what’s in and what’s out? If not, a rough definition (or more than one) would be useful, just so others can follow the argument.

So anyway, my questions:

  1. Do organizations that engage in Active Defense suffer fewer incidents than those who don’t?
  2. Do organizations that engage in Active Defense see a smaller cost per incident when using it than when not (or in comparison to other orgs)?
  3. How much does an Active Defense program cost?
  4. Is it the lowest-cost way to achieve the outcomes in questions 1 and 2, compared to the alternatives?

I’m sure some of the folks advocating active defense in this age of SEC-mandated incident disclosure can point to incidents, impacts and outcomes.

I look forward to learning more about this important subject.

Oracle's 78 Patches This Quarter, Whatever…

There’s been a lot of noise of late because Oracle just released their latest round of patches, 78 in total. There’s no doubt that that is a lot of patches. But in and of itself, the number of patches is a terrible metric for how secure a product is. This is even more the case for companies that bundle the patches for all of their product lines into a single release. Most of the chatter I’ve seen implies that all 78 are for the main Oracle database, but if you read their announcement, you’ll see the breakdown is as follows:

Oracle Database Server – 2 patches
Oracle Fusion Middleware – 11 patches
Oracle E-Business Suite – 3 patches
Oracle Supply Chain Products Suite – 1 patch
Oracle PeopleSoft – 6 patches
Oracle JD Edwards – 8 patches
Oracle Sun Products – 17 patches
Oracle Virtualization – 3 patches
Oracle MySQL – 27 patches

Fully 60% of the above patches are for open source products. So which is more secure: open source or closed source? Or let’s compare Oracle DB versus MySQL: 2 patches versus 27?
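For the curious, here is a minimal sketch of where that 60% figure comes from. Note that which product lines count as “open source” is my assumption for illustration (I’m treating the Sun, Virtualization, and MySQL lines as the open-source-derived ones); reasonable people could classify them differently.

```python
# Patch counts from Oracle's quarterly announcement, as listed above.
patches = {
    "Oracle Database Server": 2,
    "Oracle Fusion Middleware": 11,
    "Oracle E-Business Suite": 3,
    "Oracle Supply Chain Products Suite": 1,
    "Oracle PeopleSoft": 6,
    "Oracle JD Edwards": 8,
    "Oracle Sun Products": 17,
    "Oracle Virtualization": 3,
    "Oracle MySQL": 27,
}

# Assumption: treat these lines as the open-source-derived products.
oss = {"Oracle Sun Products", "Oracle Virtualization", "Oracle MySQL"}

total = sum(patches.values())
oss_total = sum(v for k, v in patches.items() if k in oss)
print(f"{oss_total}/{total} = {oss_total / total:.0%}")  # 47/78 = 60%
```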

What do these numbers tell you? Absolutely nothing. Even with something like CVSS you still can’t tell which product is more secure. The whole thing is a load of malarkey. The product that is and will remain most secure is the one that your organization can manage and maintain most easily.

Seattle in the Snow

(Image from The Oatmeal.)

It’s widely understood that Seattle needs a better way to measure snowfall. However, what’s lacking is a solid proposal for how to measure snowfall around here. And so I have a proposal.

We should create a new unit of measurement: the Nickels, named after Greg Nickels, who lost the mayorship of Seattle because he couldn’t manage the snow.

Now, there are a couple of ways we could define the Nickels. It could be:

  • The amount of snow needed to cost a Mayor 10 points of approval rating
  • The amount of snow needed to cause a bus to slide down Olive Way and teeter over the highway
  • 2 millimeters
  • Enough snow to reduce the coefficient of city road friction by 1%.

I’m not sure any of these are really right, so please suggest other ways we could define a Nickels in the comments.

Lean Startups & the New School

On Friday, I watched Eric Ries talk about his new Lean Startup book, and wanted to talk about how it might relate to security.

Ries conceives of startups as businesses operating under conditions of high uncertainty, which includes things you might not think of as startups. In fact, he thinks that startups are everywhere, even inside of large businesses. You can agree or not, but suspend skepticism for a moment. He also says that startups are really about management and good decision making under conditions of high uncertainty.

He tells the story of IMVU, a startup he founded to make 3D avatars as a plugin for instant messenger systems. He walked through a bunch of the decisions they’d made and why, and then said every single thing he’d said was wrong. He said the key was to learn the lessons faster and focus in on the right thing: in that case, they could have saved six months by just putting up a download page and seeing if anyone wanted to download the client. They wouldn’t even have needed a 404 page, because no one ever clicked the download button.

The key lesson he takes from that is to look for ways to learn faster, and to focus on pivoting towards good business choices. Ries defines a pivot as one turn through a cycle of “build, measure, learn:”

Learn, build, measure cycle

Ries jokes about how we talk about “learning a lot” when we fail. But we usually fail to structure our activities so that we’ll learn useful things. And so under conditions of high uncertainty, we should do things that we think will succeed, but if they don’t, we can learn from them. And we should do them as quickly as possible, so if we learn we’re not successful, we can try something else. We can pivot.

I want to focus on how that might apply to information security. In security, we have lots of ideas, and we’ve built lots of things. We start to hit a wall when we get to measurement. How much of what we built changed things? (I’m jumping to the assumption that someone wanted what you built enough to deploy it. That’s a risky assumption, and one Ries pushes against with good reason.) When we get to measuring, we want data on how much your widget changed things. And that’s hard. The threat environment changes over time. Maybe all the APTs were on vacation last week. Maybe all your protestors were off Occupying Wall Street. Maybe you deployed the technology in a week when someone dropped 34 0days on your SCADA system. There are a lot of external factors that can be hard to see, and so the data can be thin.

That thin data is something that can be addressed. When doctors study new drugs, there’s likely going to be variation in how people eat, how they exercise, how well they sleep, and all sorts of things. So they study lots of people, and can learn by comparing one group to another group. The bigger the study, the less likely that some strange property of the participants is changing the outcome.
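The statistical point above can be seen in a toy simulation. This is not from the original post: the “true effect” of 0.5 and the noise level of 2 are made-up numbers chosen only to illustrate how per-participant variation swamps a small real effect in small studies, and washes out in large ones.

```python
import random
import statistics

random.seed(1)

def trial(n):
    """Simulate one study comparing a treated and a control group of size n.

    Each participant's outcome is noisy (stdev 2); the treatment adds a
    small true effect of 0.5. Returns the observed difference in means.
    """
    control = [random.gauss(0.0, 2.0) for _ in range(n)]
    treated = [random.gauss(0.5, 2.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

for n in (10, 100, 1000):
    # Repeat the study many times to see how much the estimate wobbles.
    # The spread shrinks roughly as 1 / sqrt(n).
    estimates = [trial(n) for _ in range(300)]
    print(n, round(statistics.stdev(estimates), 2))
```

With ten participants per arm, the noise can easily dwarf the real effect; with a thousand, the effect stands out, which is the same reason bigger drug studies are less likely to be fooled by some strange property of the participants.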

But in information security, we keep our activities and our outcomes secret. We could tell you, but first we’d have to spout cliches. We can’t possibly tell you what brand of firewall we have; it might help attackers who don’t know how to use netcat. And we certainly can’t tell you how attackers got in; we have to wait for them to tell you on Pastebin.

And so we don’t learn. We don’t pivot. What can we do about that?

We can look at the many, many people who have announced breaches, and see that they didn’t really suffer. We can look at work like Sensepost has offered up at BlackHat, showing that our technology deployments can be discovered by participation on tech support forums.

We can look to measure our current activities, and see if we can test them or learn from them.

Or we can keep doing what we’re doing, and hope our best practices make themselves better.

Securosis goes New School

The fine folks at Securosis are starting a blog series on “Fact-based Network Security: Metrics and the Pursuit of Prioritization“, starting in a couple of weeks.  Sounds pretty New School to me!  I suggest that you all check it out and participate in the dialog.  Should be interesting and thought provoking.

[Edit — fixed my misspelling of the company name.  D’oh!]

Emergent Map: Streets of the US

This is really cool. All Streets is a map of the United States made of nothing but roads. A surprisingly accurate map of the country emerges from the chaos of our roads:

Allstreets poster

All Streets consists of 240 million individual road segments. No other features — no outlines, cities, or types of terrain — are marked, yet canyons and mountains emerge as the roads course around them, and sparser webs of road mark less populated areas. More details can be found here, with additional discussion of the previous version here.

In the discussion page, “Fry” writes:

The result is a map made of 240 million segments of road. It’s very difficult to say exactly how many individual streets are involved — since a winding road might consist of dozens or even hundreds of segments — but I’m sure there’s someone deep inside the Census Bureau who knows the exact number.

Which raises a fascinating question: is there a Platonic definition of “a road”? Is the question answerable in the sort of concrete way that I can say “there are 2 pens in my hand”? We tend to believe that things are countable, but as you try to count them at larger scales, the question of what constitutes a discrete thing grows in importance. We see this when map software tells us to “continue on Foo Street.” Most drivers don’t care about such instructions; the road is the same road, insofar as you can drive in a straight line and be on what seems the same “stretch of pavement.” All that differs is the signs (if there are signs). There’s a story that when Bostonians named Washington Street after our first President, they changed the names of all the streets that crossed Washington Street, to draw attention to the great man. Are those different streets? They are likely different segments, but I think that for someone to know the number of streets in the US requires not an ontological analysis of the nature of a street, but rather a purpose-driven one. Who needs to know how many individual streets are in the US? What would they do with that knowledge? Will they count gravel roads? What about new roads under construction, or roads in the process of being torn up? This weekend, with the “carmageddon” closing of the 405 in LA, does the 405 count as a road?

Only with these questions answered could someone answer the question of “how many streets are there?” People often steam-roller over such issues to get to answers when they need them, and that may be ok, depending on what details are flattened. Me, I’ll stick with “a great many,” since it is accurate enough for all my purposes.

So the takeaway for you? Well, there are two. First, even with the seemingly most concrete of questions, definitions matter a lot. When someone gives you big numbers to influence behavior, be sure to understand what they measured and how, and what decisions they made along the way. In information security, a great many people announce seemingly precise and often scary-sounding numbers that, on investigation, mean far different things than they seem to. (Or, more often, far less.)

And second, despite what I wrote above, it’s not the whole country that emerges. It’s the contiguous 48. Again, watch those definitions, especially for what’s not there.

Previously on Emergent Chaos: Steve Coast’s “Map of London” and “Map of Where Tourists Take Pictures.”

Sex, Lies & Cybercrime Surveys: Getting to Action

My colleagues Dinei Florencio and Cormac Herley have a new paper out, “Sex, Lies and Cyber-crime Surveys.”

Our assessment of the quality of cyber-crime surveys is harsh: they are so compromised and biased that no faith whatever can be placed in their findings. We are not alone in this judgement. Most research teams who have looked at the survey data on cyber-crime have reached similarly negative conclusions.

In the book, Andrew and I wrote “today’s security surveys have too many flaws to be useful as sources of evidence.” Dinei and Cormac were kind enough to cite that, saving me the trouble of looking it up.

I wanted to try here to carve out, perhaps, a small exception. I think of surveys as coming in two main types: surveys of things people know, and surveys of what they think. Both have the potential to be useful (although read the paper for a long list of ways in which they can be problematic.)

So there are surveys of things people know. For example, what’s your budget, or how many people do you employ? There are people in an organization who know those things and, starved as we are for knowledge, perhaps those answers would be useful. So maybe a survey makes sense.

But how many people Microsoft employs in security probably doesn’t matter to you. And the average of how many people Boeing, State Farm, Microsoft, Archer Daniels Midland, and Johnson & Johnson employ in security is even less useful. (Neighbors on the Fortune 500 list.) So even in the space that we might want to defend surveys, they’re not that useful.

So our desire for surveys is really evidence of how starved we are for data about outcomes and data about efficacy. We’re like the drunk looking for keys under the lamppost, not because we think the keys are there, but because there’s at least a little light.

So the next time someone shows you a survey, don’t even bother asking what action they expect you to take, what decision they expect you to alter, or why you should accept what it says as an acceptable argument for that choice.

Rather, ask them to see the section titled “How we overcame the issues that Dinei and Cormac talked about.” It’ll save everyone a bunch of time.