Elevation of Privilege news

I wanted to let people know that Microsoft is making the source files for the Elevation of Privilege game available. They are Adobe Illustrator and InDesign files, and are now on the EoP download site. They're 85 MB of zipped goodness, and they can be used under the same Creative Commons Attribution 3.0 US license under which we released the game.

If you’re not familiar with it, Elevation of Privilege is the easy way to get started threat modeling, and you can read about it here.

Lazy Sunday, Lazy Linking

Hey, remember when blogging was new and people would sometimes post links instead of making "the $variable Daily" out of tweets?  Well, just because I'm newschool with the security doesn't mean I can't kick it oldschool every so often.  So here are some links I thought you might enjoy; they're probably worth discussion and review even if I don't have time to blog about how I think about the topics in the context of Information Security.


First, in case you haven’t read Gunnar’s article “Reference Monitor For The Internet Of Things (.pdf)” in the latest IQT Quarterly, you really should.  Gunnar smart, Alex head hurt.


The Daily Speculations Blog has an interesting link/blog/discussion about the recent Qantas Airbus problems.  What especially caught me was this point: "Over-riding systematic considerations in favour of discretionary controls". The actual interview they link to is here.


Here's an article in FundStrategy webzine called "Defies Logic" that discusses behavioral economics, biases, market actors, and so forth. Ben Hunt (the author) seems a little down on (or at least wants to curb the enthusiasm over) behavioral economics.  I'm OK with identifying the limitations of any applied tool.


The "computer guy" part of me finds Google Chrome OS interesting.  The "security management / risk guy" in me thinks the platform has fascinating potential.  Here's the TechCrunch review of the new laptops.


Awkward Pregnancy Photos.

Doing threat intelligence right

To improve threat intelligence, the most important step isn't gathering more of it; it's addressing the flaws in how we interpret and use the intelligence we already gather. Intelligence analysts are human beings, and many of their failures follow from intuitive ways of thinking that help the mind cut through reams of confusing information but often end up misleading us.

From a great article by Robert Jervis, professor of international politics at Columbia University:

The problem isn’t usually – or at least isn’t only – too little information, but too much, most of it ambiguous, contradictory, or misleading. The blackboard is filled with dots, many of them false, and they can be connected in innumerable ways. Only with hindsight does the correct pattern leap out at us, and to fix what “broke” the last time around only guarantees you have solved yesterday’s problem.

Far more important, and useful, is to address the flaws in how we interpret and use the intelligence that we already gather. Intelligence analysts are human beings, and many of their failures follow from intuitive ways of thinking that, while allowing the human mind to cut through reams of confusing information, often end up misleading us. This isn’t a problem that occurs only with spying. It is central to how we make sense of our everyday lives, and how we reach decisions based on the imperfect information we have in our hands. And the best way to fix it is to craft policies, institutions, and analytical habits that can compensate for our very understandable flaws.


The first and most important tendency is that our minds are prone to see patterns and meaning in our world quite quickly, and then tend to ignore information that might disprove them. Premature cognitive closure, to use the phrase employed by psychologists, lies behind many intelligence failures.


Second, people pay more attention to visible information than to information generated by an absence. In a famous Arthur Conan Doyle story, it took the extraordinary skill of Sherlock Holmes to see that an important clue in the case was a dog not barking. The equivalent, in the intelligence world, is information that should be there but is not.


Third, conclusions often rest on assumptions that are not readily testable, and may even be immune to disproof.

I'll add a fourth: ignoring threat intelligence altogether, or treating it as taboo.  This may take several forms: "it's beyond our control", "we don't have good data", "it's too hard to quantify", "we aren't paid for guesswork", "we rely on vendors for that", "everybody knows what the threats are", "if we bring it up, we will get too many questions we can't answer", or other excuses.  (See Josh Corman's post on the folly of relying on security vendors for your threat intelligence.  Vendors only have an incentive to inform you about threats they can mitigate.)

If you want a good methodology for threat intelligence, look at Intel's.  It was adapted for use by the Information Technology Sector Coordinating Council in their risk assessment for critical IT industry infrastructure.

As good as it is, it could be even better with systematic methods to actively seek out contradictory information and contrary hypotheses about threats.  One simple way to do this is to create a "Mental Model Red Team" whose primary job is to disprove everything you think you know, or at least to generate and validate contrary hypotheses.  (For social and cultural reasons, you should probably rotate your staff through this team rather than keeping the team membership fixed.)  Formal methods exist, including "Analysis of Competing Hypotheses" (slides).  (I'm in the process of evaluating a tool for this called SHEBA.  I hope to have a demo ready for Mini-metricon, something like this.)  Another possible method is prediction markets, but I've never seen them used for this purpose.
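To make the Analysis of Competing Hypotheses idea concrete, here is a minimal sketch of Heuer-style ACH scoring. The hypotheses and evidence items below are made up for illustration (this is not the SHEBA tool, just the core counting idea): rate each piece of evidence against each hypothesis, then rank hypotheses by how *little* the evidence disconfirms them, since consistent evidence rarely discriminates between hypotheses.

```python
# A minimal sketch of Heuer-style "Analysis of Competing Hypotheses" scoring.
# The hypotheses and evidence items are hypothetical placeholders for illustration.

# Ratings: how each piece of evidence relates to each hypothesis.
# "C" = consistent, "I" = inconsistent, "N" = neutral / not applicable.
ratings = {
    "odd outbound traffic":   {"H1: targeted intrusion": "C", "H2: misconfiguration": "C", "H3: insider abuse": "N"},
    "no new admin accounts":  {"H1: targeted intrusion": "I", "H2: misconfiguration": "C", "H3: insider abuse": "C"},
    "change ticket that day": {"H1: targeted intrusion": "N", "H2: misconfiguration": "C", "H3: insider abuse": "I"},
}

def inconsistency_scores(ratings):
    """Count inconsistent ratings per hypothesis.

    ACH ranks hypotheses by *least disconfirmed*, not most confirmed:
    the hypothesis with the fewest inconsistencies survives best.
    """
    scores = {}
    for evidence, row in ratings.items():
        for hypothesis, mark in row.items():
            scores.setdefault(hypothesis, 0)
            if mark == "I":
                scores[hypothesis] += 1
    return scores

scores = inconsistency_scores(ratings)
for hypothesis, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{score} inconsistencies: {hypothesis}")
```

The useful discipline here is the one the article calls out: the matrix forces you to record evidence that is *inconsistent* with your favorite hypothesis, rather than just piling up confirmations of it.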

All in the Presentation

America’s Finest News Source teaches an excellent lesson on how to spin data:

Labor Dept: Available Labor Rate Increases To 10.2%

WASHINGTON—In what is being touted by the Labor Department as extremely positive news, the nation’s available labor rate has reached double digits for the first time in 26 years, bringing the total number of potentially employable Americans to an impressive 15.7 million.

Links To Interesting Stuff

I have a ton of tabs open in Firefox about stuff I thought would be some sweet newschool-esque reading for everybody out there.

1.) Threat and Risk Mapping Analysis in Sudan
Not really about measurement and progress, but a fascinating look at "physical risk management" nonetheless.


2.)  I thought Gunnar did a great job on these two posts:

Begin The Begin, Cloud Security : http://1raindrop.typepad.com/1_raindrop/2009/06/begin-the-begin-cloud-security.html

Enterprise Security Priorities : http://1raindrop.typepad.com/1_raindrop/2009/06/enterprise-security-priorities.html

3.)  Similar to Gunnar's Security Priorities is this link from CIO mag (it's pretty dry until the second page, so I linked to that one):

Valuing an IT Service : http://www.cioupdate.com/trends/article.php/11047_3821986_2/How-to-Assign-Value-to-an-IT-Service.htm

4.)  If Physics is simply the act of observing the world around us and building mathematical models to describe it, then here’s a fun little post on Love

from the NYT (SFW): http://judson.blogs.nytimes.com/2009/05/26/guest-column-loves-me-loves-me-not-do-the-math/?em

5.)  Talk about NewSchool in practice: if you're not subscribing to Chris Hayes' Risktical blog, you're missing out.  Here's something he did this week that I really liked:

The Risk Is Right : http://risktical.com/2009/05/21/the-risk-is-right/ – one word: hardcore.

6.)  Finally, I've often said that even if you hate risk analysis, you're doing it anyway, just in a bad, ad-hoc manner.  Here's something from Gelman's blog that suggests you're gonna have to eventually be "New School":

Those who don't know statistics are doomed to . . . rely on statistics anyway : http://www.stat.columbia.edu/~cook/movabletype/archives/2009/06/those_who_dont.html It's even got a Bill James mention!