Shostack + Friends Blog Archive

 

By looking for evidence first, the Brits do it right

As it happens, both the US Government and the UK government are leading “cyber security standards framework” initiatives right now.  The US is using a consensus process to “incorporate existing consensus-based standards to the fullest extent possible”, including “cybersecurity standards, guidelines, frameworks, and best practices” and “conformity assessment programs”. In contrast, the UK is asking […]

 

Indicators of Impact — Ground Truth for Breach Impact Estimation

One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly.  What if we had solid evidence to use in breach impact estimation?  This […]

 

New paper: "How Bad Is It? — A Branching Activity Model for Breach Impact Estimation"

Adam just posted a question about CEO “willingness to pay” (WTP) to avoid bad publicity regarding a breach event.  As it happens, we just submitted a paper to Workshop on the Economics of Information Security (WEIS) that proposes a breach impact estimation method that might apply to Adam’s question.  We use the WTP approach in a […]

 

Securosis goes New School

The fine folks at Securosis are starting a blog series on “Fact-based Network Security: Metrics and the Pursuit of Prioritization“, starting in a couple of weeks.  Sounds pretty New School to me!  I suggest that you all check it out and participate in the dialog.  Should be interesting and thought provoking. [Edit — fixed my […]

 

Fixes to Wysopal’s Application Security Debt Metric

In two recent blog posts (here and here), Chris Wysopal (CTO of Veracode) proposed a metric called “Application Security Debt”.  I like the general idea, but I have found some problems in his method.  In this post, I suggest corrections that will be both more credible and more accurate, at least for half of the […]

 

Is Norton Cybercrime Index just 'Security Metrics Theater'?

Symantec’s new Norton Cybercrime Index looks like it is mostly a marketing tool. They present it as though there is solid science, data, and methods behind it, but an initial analysis shows that this is probably not the case. The only way to have confidence in this is if Symantec opens up about their algorithms and data.

 

Would a CISO benefit from an MBA education?

If a CISO is expected to be an executive officer (especially for a large, complex technology- or information-centered organization), then he or she will need MBA-level knowledge and skills. An MBA is one path to acquiring them, at least if you are thoughtful and selective about the school you choose. Other paths are available, so it’s not just about the MBA credential.

Otherwise, if a CISO is essentially the Most Senior Information Security Manager, then an MBA education wouldn’t be of much value.

 

Another critique of Ponemon's method for estimating 'cost of data breach'

I have fundamental objections to Ponemon’s methods used to estimate ‘indirect costs’ due to lost customers (‘abnormal churn’) and the cost of replacing them (‘customer acquisition costs’). These include sloppy use of terminology, mixing accounting and economic costs, and omitting the most serious cost categories.

 

Dashboards are Dumb

The visual metaphor of a dashboard is a dumb idea for management-oriented information security metrics. It doesn’t fit the use cases and therefore doesn’t support effective user action based on the information. Dashboards work when the user has proportional controllers or switches corresponding to each of the ‘meters’ and can observe, in real time, the effect of using those controls on the ‘meters’. Dashboards don’t work when there is a loose or ambiguous connection between the information conveyed in the ‘meters’ and the actions that users might take. Other visual metaphors should work better.

 

Estimating spammers’ technical capabilities and pathways of innovation

I’d like some feedback on my data analysis, below, from anyone who is an expert on spam or anti-spam technologies. I’ve analyzed data from John Graham-Cumming’s “Spammers’ Compendium” to estimate the technical capabilities of spammers and the evolution path of innovations.

 

GAO report on the state of Federal Cyber Security R&D

This GAO Report is a good overall summary of the state of Federal cyber security R&D and why it’s not getting more traction. Their recommendations (p. 22) aren’t earth-shaking: “…we are recommending that the Director of the Office of Science and Technology Policy, in conjunction with the national Cybersecurity Coordinator, direct the Subcommittee on Networking and […]

 

Getting the time dimension right

If you are developing or using security metrics, it’s inevitable that you’ll have to deal with the dimension of time. “Data” tells you about the past. “Security” is a judgement about the present. “Risk” is a cost of the future, brought to the present. The way to marry these three is through social learning processes.

 

"Cyber Economic Incentives" is one of three themes at Federal Cybersecurity R&D Kickoff Event

This event will be the first discussion of these Federal cybersecurity R&D objectives and will provide insights into the priorities that are shaping the direction of Federal research activities. One of the three themes is “Cyber economic incentives — foundations for cyber security markets, to establish meaningful metrics, and to promote economically sound secure practices.”

 

A personal announcement

I will be entering the PhD program in Computational Social Science (with certificates in InfoSec and Economic Systems Design) at George Mason University, Fairfax VA, starting in the Fall of 2010.

 

'Experts' misfire in trying to shoot down Charney's 'Internet Security Tax' idea

Industry ‘experts’ misfired when they criticized Microsoft’s Scott Charney’s “Internet Security Tax” idea. Q: How many of these ‘experts’ know anything about information economics and public policy responses to negative externalities? A: Zero. Thus, they aren’t really qualified to comment. This is just one small case in the ongoing public policy discussions regarding the economics of information security, but given the reaction of the ‘experts’, this was a step backward.

 

Data void: False Positives

A Gartner blog post points out the lack of data reported by vendors or customers regarding the false positive rates of anti-spam solutions. This is part of a general problem in the security industry that is a major obstacle to rational analysis of effectiveness, cost-effectiveness, risk, and the rest.

 

Everybody complains about lack of information security research, but nobody does anything about it

A disconnect between the primary research sectors, plus a lack of appropriate funding in each, is leading to decreased technological progress, exposing a huge gap in security that is happily being exploited by cybercriminals. No one seems able to mobilize any significant research into breakthrough cyber security solutions. It’s been very frustrating to see so much talk and so little action. This post proposes one possible solution: an Information Security Pioneers Fellowship Program (ISPFP), similar to Gene Spafford’s proposal for an Information Security and Privacy Extended Grant (ISPEG) for academic researchers.

 

Measuring the unmeasurable — inspiration from baseball

The New School approach to information security promotes the idea that we can make better security decisions if we can measure the effectiveness of alternatives.  Critics argue that so much of information security is unmeasurable, especially factors that shape risk, that quantitative approaches are futile.  In my opinion, that is just a critique of our current methods […]

 

'Don't Ask, Don't Tell in Davos' — Act 3 in the Google-China affair

There is no better illustration of the institutional and social taboos surrounding data breach reporting and information security in general than the Google-Adobe-China affair. While the Big Thinkers at the World Economic Forum discussed every other idea under the sun, this one was taboo.

 

The Face of FUD

A vivid image of Fear, Uncertainty, and Doubt (FUD), from an email promotion by NetWitness.

 

Doing threat intelligence right

To improve threat intelligence, it’s most important to address the flaws in how we interpret and use the intelligence that we already gather. Intelligence analysts are human beings, and many of their failures follow from intuitive ways of thinking that, while allowing the human mind to cut through reams of confusing information, often end up misleading us.

 

Emerging threat: Social Botnets

We think of botnets as networks of computing devices slaved to some command & control system. But what about human-in-the-loop botnets, where humans are either participants or prime actors? I’m coining this label: “social botnets”. Recent example: “Health Insurers Caught Paying Facebook Gamers To Oppose Reform Bill”.

 

NEW: Verizon 2009 DBIR Supplement

The supplement provides case studies, involving anonymous Verizon clients, that detail some of the tools and methods hackers used to compromise the more than 285 million sensitive records that were breached in 90 forensic cases Verizon handled last year.

 

Mandatory web client scripts analogous to CDOs

The widespread and often mandatory use of client scripts in websites (e.g., JavaScript) is like CDOs [Collateralized Debt Obligations]. Both are designed by others with little interest in your security; they leverage your resources for their benefit, and they are opaque, complex, nearly impossible to audit, and therefore untrustworthy.

 

Time to update your threat model to include "friendly fire"

If you work in InfoSec outside of the military, you may be thinking that “offensive cyber capability” doesn’t apply to you. Don’t be so sure. I think it’s worth adding to the threat model for every organization. New “hacking gadgets” could be put in the hands of ordinary soldiers, turning them into the equivalent of “script kiddies”. But what if the potential target knows that such attacks may be coming? They could set up a deceptive defense and redirect the attack to another network.

 

Miscommunicating risks to teenagers

A lesson in miscommunication of risk from “abstinence only” sex education aimed at teenagers. The educators emphasize the failure rate of condoms, but never mention the failure rate of abstinence-only policies when implemented by teenagers.

 

Hackers treated as credible sources of information (D'oh!)

Contrary to popular belief, hackers are not credible sources of information that they themselves have stolen and leaked. Maybe they weren’t “hackers” at all. News organizations and bloggers should think more critically and do more investigation before they add to the “echo chamber effect” for such reports.

 

Can't tell the players without a program

You can’t tell the good guys from the bad guys without knowing the color of their hat, and it’s hard for non-specialists to tell. I wish there were some sort of map of the Black Hat ecosystem. Case in point: Virscan.org. It looks like a nice, simple service that scans uploaded files using multiple AV engines with the latest signatures. But it seems *much* more useful to bad guys (malware writers and distributors) than to good guys. Who does it serve?

 

CFP: 9th Workshop on the Economics of Information Security (WEIS)

The Workshop on the Economics of Information Security (WEIS) is the leading forum for interdisciplinary scholarship on information security, combining expertise from the fields of economics, social science, business, law, policy and computer science.

 

On smelly goats, unicorns, and FUD

Unicorns (of some sort) are not impossible in principle, only non-existent in recent times. As evidence, I offer Tsintaosaurus spinorhinus, a real dinosaur found in China. Though we may be comfortable with our current “smelly, ugly goat” practices, including the ethically questionable FUD tactic, they only perpetuate the problems and, at worst, are like peeing in the swimming pool.

 

Apologies to Richard Bejtlich

The previous blog post, “Just say ‘no’ to FUD”, described Richard Bejtlich’s post at Tao of Security as “FUD in other clothing”. That was over-reaching. I apologize. There was an element of FUD, but my main objection to Richard’s post was due to other reasons.

 

Just say 'no' to FUD

“Fear, uncertainty, and doubt” (FUD) is a distortion tactic to manipulate decision-makers. You may think it’s good because it can be successful in getting the outcomes you desire. But it’s unethical. FUD is also anti-data and anti-analysis. Don’t do it. It’s the opposite of what we need.

 

On the value of 'digital asset value' for security decisions

What good is it to know the economic value of a digital asset for the purposes of making information security decisions? If you can’t make better decisions with this information, then the metric doesn’t have any value. This post discusses alternative uses, especially threshold or sanity checks on security spending. For these purposes, it functions better as a “spotlight” than as a “razor”. Digital Asset Value has other uses, not least getting InfoSec people to understand Business people and their priorities, and vice versa.

 

How to Value Digital Assets (Web Sites, etc.)

If you need to do financial justification or economic analysis for information security, especially risk analysis, then you need to value digital assets to some degree of precision and accuracy. There is no universally applicable and acceptable method. This article presents a method that will assist line-of-business managers to make economically rational decisions consistent with overall enterprise goals and values.

 

Visual Complexity Web Site

VisualComplexity.com intends to be a unified resource space for anyone interested in the visualization of complex networks. While it may not contain any examples specific to information security, there may be some methods and ideas that can be adapted to InfoSec.

 

The Cost of a Near-Miss Data Breach

Near misses are very valuable signals regarding future losses. If we ignore them in our cost metrics, we might make some very poor decisions. This example shows that there is a qualitative difference between “ground truth data” (in this case, historical cash flow for data breach events) and overall security metrics, which need to reflect our estimates about the future, a.k.a. risk.

 

National Cyber Leap Year Summit reports now available

I believe these are the final deliverables: the National Cyber Leap Year Summit 2009 Co-Chairs Report (main discussion of metrics on pp. 26-28) and the National Cyber Leap Year Summit 2009 Participants’ Ideas Report (main discussion of metrics on pp. 44-46, 50-51, and 106, with related discussion on pp. 53-54). Also worth noting is […]

 

Making Sense of the SANS "Top Cyber Security Risks" Report

The SANS Top Cyber Security Risks report has received a lot of positive publicity. I applaud the effort and goals of the study and it may have some useful conclusions. We should have more of this. Unfortunately, the report has some major problems. The main conclusions may be valid but the supporting analysis is either confusing or weak. It would also be good if this study could be extended by adding data from other vendors and service providers.

 

Visualization Friday – Improving a Bad Graphic

We can learn from bad visualization examples by correcting them. This example is from the newly released SANS “Top Cyber Security Risks” report. Their first graphic has a simple message, but due to various misleading visual cues, it’s confusing. A simplified graphic works much better, but they probably don’t need a graphic at all — a bulleted list works just as well. Moral of this story: don’t simply hand your graphics to a designer with the instructions to “make this pretty”. Yes, the resulting graphic may be pretty, but it may lose its essential meaning or it might just be more confusing than enlightening. Someone has to take responsibility for picking the right visualization metaphor and structures.

 

12 Tips for Designing an InfoSec Risk Scorecard (it’s harder than it looks)

An “InfoSec risk scorecard” attempts to include all the factors that drive information security risk – threats, vulnerabilities, controls, mitigations, assets, etc. For the sake of simplicity, though, InfoSec risk scorecards don’t include any probabilistic models, causal models, or the like, so a scorecard can only roughly approximate risk under simplifying assumptions. This leaves the designer open to all sorts of problems. Here are 12 tips that can help you navigate these difficulties. It’s harder than it looks.
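To make the simplification concrete, here is a minimal sketch of the kind of weighted scorecard being described — ordinal scores combined by fixed weights, with no probabilistic or causal model behind them. The factor names and weights are illustrative assumptions, not taken from the post:

```python
# Hypothetical weighted risk scorecard: each factor gets an ordinal 1-5 score,
# and the composite is a fixed-weight sum. Factors and weights are assumptions
# for illustration only; they are exactly the kind of simplification at issue.

FACTORS = {
    "threat_level": 0.30,
    "vulnerability": 0.30,
    "control_weakness": 0.25,  # higher score = weaker controls
    "asset_value": 0.15,
}  # weights sum to 1.0

def risk_score(scores: dict) -> float:
    """Combine 1-5 ordinal factor scores into a 1-5 composite via weighted sum."""
    return sum(FACTORS[f] * scores[f] for f in FACTORS)

example = {"threat_level": 4, "vulnerability": 3,
           "control_weakness": 2, "asset_value": 5}
composite = risk_score(example)
# 0.30*4 + 0.30*3 + 0.25*2 + 0.15*5 ≈ 3.35
```

Note that the weighted sum silently assumes the factors are independent and that ordinal scores behave like interval quantities — two of the pitfalls such a scorecard designer must confront.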

 

This Friday is “Take an Academic Friend to Work Day”

We need more cross-disciplinary research and collaboration in InfoSec. We can start on a small scale, with people in our professional network. One fertile area of research and collaboration is applying the latest research in non-standard logic and formal reasoning (a.k.a. AI) to InfoSec risk management problems. The problem is that most of that research reads like Sanskrit unless you are a specialist. Rather than simply post links to academic papers and ask you to read them, let’s use these papers as a vehicle to start a dialog with an academic friend, or a friend-of-friends. Maybe there are some breakthrough ideas in here. Maybe not. Either way, you will have an interesting experience in cross-discipline collaboration on a small scale.

 

Is risk management too complicated and subtle for InfoSec?

Luther Martin, blogger with Voltage Security, has advised caution about using risk management methods for information security, saying it’s “too complicated and subtle” and may lead decision-makers astray. To back up his point, he uses the example of the Two Envelopes Problem in Bayesian (subjectivist) probability, which can lead to paradoxes. Then he posed an analogous problem in information security, with the claim that probabilistic analysis would show that new security investments are unjustified. However, Luther made some mistakes in formulating the InfoSec problem, and thus the lessons from the Two Envelopes Problem don’t apply. Either way, a reframing into a “possible worlds” analysis resolves the paradoxes and accurately evaluates the decision alternatives for both problems. Conclusion: risk management for InfoSec is complicated and subtle, but that only means it should be done with care and with the appropriate tools, methods, and frameworks. Unsolved research problems remain, but the Two Envelopes Problem and similar are not among them.
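The Two Envelopes Problem is easy to check empirically. Here is a minimal simulation (my own sketch, not the post’s or Luther’s analysis): one envelope holds a base amount and the other holds double, you pick one at random, and we compare the long-run payoff of keeping versus switching. The naive argument claims switching yields an expected 1.25x gain; the simulation shows both strategies average the same:

```python
import random

def simulate(trials: int = 100_000, base: float = 100.0):
    """Average payoff of 'keep' vs 'switch' over many random envelope picks."""
    keep_total = switch_total = 0.0
    for _ in range(trials):
        envelopes = [base, 2 * base]   # one envelope holds double the other
        random.shuffle(envelopes)      # you pick one of the two at random
        picked, other = envelopes
        keep_total += picked           # strategy 1: keep your envelope
        switch_total += other          # strategy 2: always switch
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
# Both averages converge to (base + 2*base) / 2 = 150:
# always switching confers no advantage, contrary to the naive
# "switching gains an expected 1.25x" argument.
```

The paradox dissolves once the two possible worlds (picked the smaller, picked the larger) are enumerated explicitly rather than conditioning on an ill-defined prior — which is the spirit of the “possible worlds” reframing described above.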

 

National Cyber Leap Year: Without a Good Running Start, There Might Be No Leap

The National Cyber Leap Year (NCLY) report coming out in a few weeks might lead to more US government research funding for security metrics in coming years. But that depends on whether the report is compelling to the Feds and Congress. Given the flawed process leading up to the Summit, I have my doubts. Clearly, this NCLY process is not a good model for public-private collaboration going forward.