Quick Quotes For Your Morning

From Krugman (commentary is his):

“Without metrics, you’re just another guy with an opinion.” — Stephan Leschka, Hewlett Packard

When I hear words from almost anyone about how their approach is better than some other approach, I think of this quote.

And as Daniel Patrick Moynihan said:

“Every man is entitled to his own opinion, but not his own facts.”

 

Why Do Outsiders Detect Breaches?

So I haven’t had a chance to really digest the new DBIR yet, but one bit jumped out at me: “86% were discovered by a third party.” I’d like to offer up an explanatory story of why that might be, and muse a little on what it might mean for the deployment of intrusion detection technologies and processes.

One common element of third-party connections is that they tend to be constrained in various ways, including firewalls, structured database queries, and suspicious administrators looking to point fingers. They also, being on trust boundaries, may be better places to deploy and tune an IDS.

And it seems to work, given that 86% of breaches are found in these relatively constrained environments. So what’s the takeaway? Have more partners? Outsourcing is good for security? (I’m not sure if I’m being facetious here.)

It’s hard to deploy IDS effectively within a company (as suggested by the mere 14% of breaches detected internally). A big part of that is that in-company data flows get very complex very quickly. So what to do?

We could throw up our hands and give up, or we could look to see if similar conditions might exist internally at many large organizations. And I think they do. One property of big, complex systems is that they’re hard to manage. Because they’re hard to manage, groups inside a company form service level agreements with other groups to ensure that they have mutual commitments. So perhaps a good rule of thumb would be to deploy IDS near SLAs. (There’s a tie here to Gunnar Peterson’s rule to start from the overall IT budget.)

One of the points that Andrew and I made in the book is that data isn’t enough. We all benefit from different perspectives and interpretations of that data. What do you think? What should we learn from the fact that almost all breaches are currently detected by third parties?

Data-driven pen tests

So I’m listening to the “Larry, Larry, Larry” episode of the Risk Hose podcast, and Alex is talking about data-driven pen tests. I want to posit that pen tests are already empirical. Pen testers know what techniques work for them, and start with those techniques.

What we could use are data-driven pen test reports. “We tried X, which works in 78% of attempts, and it failed.”

We could also use more shared data about what tests tend to work.
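As a rough illustration of the kind of aggregation that shared data would make possible, here is a minimal sketch; the log format, technique names, and numbers are hypothetical, invented for the example rather than drawn from any real dataset:

```python
from collections import defaultdict

# Hypothetical pooled log of pen test attempts: (technique, succeeded).
# In practice these records would be shared across many engagements and testers.
attempts = [
    ("password spraying", True),
    ("password spraying", False),
    ("phishing with macro payload", True),
    ("phishing with macro payload", True),
    ("sqli on login form", False),
]

def success_rates(records):
    """Compute per-technique success rates from pooled attempt records."""
    totals = defaultdict(int)
    successes = defaultdict(int)
    for technique, succeeded in records:
        totals[technique] += 1
        if succeeded:
            successes[technique] += 1
    return {t: successes[t] / totals[t] for t in totals}

for technique, rate in sorted(success_rates(attempts).items()):
    print(f"{technique}: works in {rate:.0%} of recorded attempts")
```

With enough pooled records behind it, a report line like “we tried X, which works in 78% of attempts, and it failed” becomes a query over shared data rather than a gut feel.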

Thoughts?

Why Do You Write The Way You Do?

Hey Kids,

Reader Mark Wallace wrote in a comment to the blog yesterday, and I wanted to answer the comment in an actual blog post. So here goes:

Mark,

Thanks for reading! There’s a point where writing publicly forces me to answer a few questions that I’m not ready to make a quick decision on. I appreciate feedback like this because I really haven’t made up my mind on how to answer those questions.

First, do I leave out the references to the fundamental ideas that shape my beliefs (Complex Adaptive Systems, what Frank Knight believed, what a frequentist is), or do I leave them in for the curious?

I’ve made a conscious decision to leave them in, because there are going to be a number of people on a journey just like the one I find myself on, and they’ll need breadcrumbs: clues about what I studied and why my thought processes are the way they are. Be that to agree, disagree, or come up with a totally new perspective. So it’s not meant to be condescending or pedantic; it’s meant to inspire trips to Wikipedia and Amazon. It’s an attempt to teach those who want to learn to fish how I learned to catch fish, to abuse the old saw. Alternately, it’s to attract those with differing or stronger perspectives to educate me. I’m not a bass master, if you will, and I’m always seeking a better way to be a good fisherman.

Related, the second is: how much time do I spend crafting a perfect blog post vs. just putting stuff up?

Last year I went through a period where I hardly blogged at all – mainly because I would try to do something “perfect.” I missed blogging during this dry spell, and believe it or not, others told me they missed it too. So I decided to “just blog it.” Sometimes this means I’m too hurried with my day job to bother putting in links to Wikipedia or Amazon or whatever (let me google that for you); sometimes it means I forget to put “(CAS)” in after I type “complex adaptive system” and just refer to the acronym later on. So it’s not really smarter, just stupider.

 

Finally, at what point is blogging different from book writing? Where is the TL:DNR line?

Let’s face it, we’re in the Twitter age.  Even this post is “TL:DNR” for many.   I always feel like I’ll write until I’ve said what I can say.  If you don’t want to read it due to length and your busy schedule, that’s fine. I totally understand.

But I do feel limited in what I can write here by pressure from family, non-security commitments (like SIRA), or by work itself. So I can’t go into depth in some areas of background, and have to trust that either we’re on the same page, or, if you’re curious (see #1), you’re willing to get there.

In conclusion, Mark – I’m always up for direction and criticism, feedback and your opinion. And I sincerely thank you for taking the time and effort to send me your comment. I hope this explains, to you and others, kind of where I’m at in this.

What is Risk (again)?

The thread “What is Risk?” came up on a LinkedIn group. Thought you might enjoy my answer:

———————-

Risk != uncertainty (unless you’re a Knightian frequentist, and then you don’t believe in measurement anyway), though if you were to account for risk in an equation, the amount of uncertainty would be a factor.

Risk != “likelihood” (to a statistician or probabilist anyway). Like uncertainty, likelihood has a specific meaning.

What is risk? It’s a hypothetical construct, something we synthesize in our brains to describe the danger inherent in the various inputs we’re processing around a certain situation. Depending on the situation, it can be very difficult, or in many cases impossible, to create an “immaculate” equation for risk (such as R = T x V x I).
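For the curious, here is a minimal sketch of the textbook forms behind those two distinctions; the notation is standard statistics, not anything specific to the thread:

```latex
% Probability vs. likelihood: probability treats the parameter \theta as fixed
% and asks about the data x; likelihood fixes the observed data x and asks
% which values of \theta make it plausible.
\mathcal{L}(\theta \mid x) \;=\; P(x \mid \theta)

% The "immaculate" style of risk equation mentioned above multiplies point
% estimates of threat, vulnerability, and impact, which presumes each factor
% can be reduced to a single defensible number:
R \;=\; T \times V \times I
```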

As an example, in IT, and especially for a large enterprise, we may have a complex adaptive system with characteristics of strong emergence. We see the same thing in medicine and various fields of biology. As such, point probabilities are pretty much impossible, or require so much simulation effort as to be difficult to produce.

Also, because it is a hypothetical construct we create in our brains, risk is subject to the perspective of the observer. It is poly-centric. And because humans have a very difficult time divorcing their own risk tolerance, derived from their own internal ad hoc assessment, from the information they have and the way they’re required to use that information, the true nature of the inherent danger (the majority of the “risk”) is left unexpressed.

In my opinion, it’s important to dwell on these two pieces of information (risk may apply to a CAS with strong emergence, risk is subjective to the viewer) because they explain why the information security bureaucracies (ISACA, the ISO, NIST, most standards bodies, in fact) do us a huge disservice.

First, what our standards bodies typically do is enable us to justify our perspective by manipulating the inputs to a completely false model (jet engine x peanut butter = shiny!). This is the first significant way we give decision makers false information (or, at best, information so poor it is incapable of creating a state of knowledge).

Second, standards bodies, in the rush to provide value through “certification,” have prematurely standardized processes to do “risk management.” This is the second way we are left giving false information to decision makers. Standardization without acknowledging the nature of risk (CAS, emergence, poly-centrism) results in the analyst ignoring critical pieces of the complex system that certainly contribute (sometimes significantly) to a full understanding of the situation.

Bottom line: IT risk is something created without being understood. It is the most important concept in information security, and the most abused. Until we have data and evidence of significant quality (see evidence-based practices), we cannot derive sane models, and we cannot begin to understand the problem space.

As such, “risk” probably encompasses all of the above statements made in this thread, while in truth not resembling them at all (1).

————
1.) The thread was full of people explaining their “likelihood x impact” models. Variations on a theme, mainly.

What's the PIN, Kenneth?

There’s a story in the New York Times, “To Get In, Push Buttons, or Maybe Swipe a Magnet,” which makes interesting allusions to the meaning of fair trade in locks, implied warranties, and the need for empiricism in security:

In court filings, Kaba argued that it had “never advertised or warranted in any way that any of its access control products are impenetrable.” Locksmiths learn techniques to defeat all kinds of locks, and “thieves and others who want to defeat locks can obtain the same tools and learn the same techniques locksmiths use,” the filings said. “Indeed, any thief — even the most clumsy — can use a sledgehammer, a pry bar or bolt cutter to bypass essentially any lock.”

In a statement, Mr. Miller added that the company had “never received any confirmed report of a break-in” because of a magnetic bypass, and that it heard about the potential for magnetic mischief only in August 2010. Kaba is preparing a free kit to modify the locks and make them magnet-proof, he said.

All of which is really an excuse to share with you this picture. I have no idea if it’s a Kaba lock or not, and I’m reasonably confident that the sign is not Kaba’s fault.
[Photo: IMG 0356]