Shostack + Friends Blog Archive

 

You say noise, I say data

There is a frequent claim that stock markets are somehow irrational and unable to properly price the impact of cyber incidents. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history: It provides useful context as we consider this quote: On […]

 

Security Lessons From Star Wars: Breach Response

To celebrate Star Wars Day, I want to talk about the central information security failure that drives Episode IV: the theft of the plans. First, we’re talking about really persistent threats. Not like this persistence, but the “many Bothans died to bring us this information” sort of persistence. Until members of Comment Crew are going […]

 

Exploit Kit Statistics

On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels: There are all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP […]

 

Analyzing The Army's Accidental Test

According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large scale natural experiment in “does it matter?” Over the next n months, the Pentagon’s […]

 

Breach Analysis: Data Source biases

Bob Rudis has a fascinating and important post “Once More Into The [PRC Aggregated] Breaches.” In it, he delves into the various data sources that the Privacy Rights Clearinghouse is tracking. In doing so, he makes a strong case that data source matters, or as Obi-Wan said, “Luke, you’re going to find that many of […]

 

The Fog of Reporting on Cyberwar

There’s a fascinating set of claims in Foreign Affairs’ “The Fog of Cyberwar”: Our research shows that although warnings about cyberwarfare have become more severe, the actual magnitude and pace of attacks do not match popular perception. Only 20 of 124 active rivals — defined as the most conflict-prone pairs of states in the system […]

 

Published Data Empowers

There’s a story over at Bloomberg, “Experian Customers Unsafe as Hackers Steal Credit Report Data.” And much as I enjoy picking on the credit reporting agencies, what I really want to talk about is how the story came to light. The cyberthieves broke into an employee’s computer in September 2011 and stole the password for […]

 

Base Rate & Infosec

At SOURCE Seattle, I had the pleasure of seeing Jeff Lowder and Patrick Florer present on “The Base Rate Fallacy.” The talk was excellent, laying out what the base rate fallacy is and how and why it matters to infosec. What really struck me about this talk was that about a week before, I had […]
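
The fallacy itself takes only a few lines of arithmetic to demonstrate. The numbers below are purely illustrative (they are not from Lowder and Florer’s talk): even a detector that catches 99% of attacks and false-alarms on only 1% of benign events is wrong roughly nine times out of ten when attacks are rare.

# Illustrative numbers only -- not from the SOURCE Seattle talk.
p_attack = 0.001                 # attacks are rare
p_alert_given_attack = 0.99      # detection rate
p_alert_given_benign = 0.01      # false positive rate

p_alert = (p_alert_given_attack * p_attack
           + p_alert_given_benign * (1 - p_attack))

# Bayes' theorem: how likely is an alert to be a real attack?
p_attack_given_alert = p_alert_given_attack * p_attack / p_alert
print(f"P(attack | alert) = {p_attack_given_alert:.1%}")   # roughly 9%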

 

Aitel on Social Engineering

Yesterday, Dave Aitel wrote a fascinating article “Why you shouldn’t train employees for security awareness,” arguing that money spent on training employees about awareness is wasted. While I don’t agree with everything he wrote, I submit that your opinion on this (and mine) are irrelevant. The key question is “Is money spent on security awareness […]

 

The Evolution of Information Security

A little while back, a colleague at the NSA reached out to me for an article for their “Next Wave” journal, with a special topic of the science of information security. I’m pleased with the way the article and the entire issue came out, and so I’m glad that the NSA has decided to release […]

 

Why Sharing Raw Data is Important

Bob Rudis has a nice post up “Off By One : The Importance Of Fact Checking Breach Reports,” in which he points out some apparent errors in the Massachusetts 2011 breach report, and also provides some graphs. Issues like this are why it’s important to release data. It enables independent error checking, but also allows […]

 

Sharing Research Data

I wanted to share an article from the November issue of the Public Library of Science, both because it’s interesting reading and because of what it tells us about the state of security research. The paper is “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting […]

 

The New School of Software Engineering?

This is a great video about how much of software engineering runs on folk knowledge about how software is built: “Greg Wilson – What We Actually Know About Software Development, and Why We Believe It’s True” There’s a very strong New School tie here. We need to study what’s being done and how well it […]

 
 

Diginotar Quantitative Analysis ("Black Tulip")

Following the DigiNotar breach, Fox-IT has released analysis and a nifty video showing OCSP requests. As a result, lots of people are quoting a figure of “300,000”. Cem Paya has a good analysis of what the OCSP numbers mean and what biases might be introduced, at “DigiNotar: surveying the damage with OCSP.” To their credit, Fox-IT […]

 

Securosis goes New School

The fine folks at Securosis are starting a blog series on “Fact-based Network Security: Metrics and the Pursuit of Prioritization“, starting in a couple of weeks.  Sounds pretty New School to me!  I suggest that you all check it out and participate in the dialog.  Should be interesting and thought provoking. [Edit — fixed my […]

 

Fixes to Wysopal’s Application Security Debt Metric

In two recent blog posts (here and here), Chris Wysopal (CTO of Veracode) proposed a metric called “Application Security Debt”.  I like the general idea, but I have found some problems in his method.  In this post, I suggest corrections that will be both more credible and more accurate, at least for half of the […]

 

Infosec's Flu

In “Close Look at a Flu Outbreak Upends Some Common Wisdom,” Nicholas Bakalar writes: If you or your child came down with influenza during the H1N1, or swine flu, outbreak in 2009, it may not have happened the way you thought it did. A new study of a 2009 epidemic at a school in Pennsylvania […]

 
 

Another critique of Ponemon's method for estimating 'cost of data breach'

I have fundamental objections to Ponemon’s methods used to estimate ‘indirect costs’ due to lost customers (‘abnormal churn’) and the cost of replacing them (‘customer acquisition costs’). These include sloppy use of terminology, mixing accounting and economic costs, and omitting the most serious cost categories.

 

A critique of Ponemon Institute methodology for "churn"

Both Dissent and George Hulme took issue with my post Thursday, and pointed to the Ponemon U.S. Cost of a Data Breach Study, which says: Average abnormal churn rates across all incidents in the study were slightly higher than last year (from 3.6 percent in 2008 to 3.7 percent in 2009), which was measured by […]
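
For readers trying to follow the argument, here is a simplified sketch (with invented customer counts and per-customer values, not Ponemon’s figures or method) of how an abnormal-churn percentage gets converted into a dollar estimate; the critique above is precisely about what such a conversion hides.

# Invented inputs for illustration -- not Ponemon's data or method.
customers_affected = 100_000
abnormal_churn_rate = 0.037      # 3.7% of affected customers leave
lifetime_value = 300             # assumed value of a lost customer, USD
acquisition_cost = 150           # assumed cost to replace one customer, USD

lost_customers = customers_affected * abnormal_churn_rate
indirect_cost = lost_customers * (lifetime_value + acquisition_cost)
print(f"Estimated indirect cost: ${indirect_cost:,.0f}")   # $1,665,000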

 

Gunnar on Heartland

Analysis of Heartland’s business as a going concern by @oneraindrop. Especially interesting after comments on the CMO video.

 

Estimating spammers' technical capabilities and pathways of innovation

I’d like some feedback on my data analysis, below, from anyone who is an expert on spam or anti-spam technologies. I’ve analyzed data from John Graham-Cumming’s “Spammers’ Compendium” to estimate the technical capabilities of spammers and the evolution path of innovations.

 

Lessons from HHS Breach Data

PHIPrivacy asks “do the HHS breach reports offer any surprises?” It’s now been a full year since the new breach reporting requirements went into effect for HIPAA-covered entities. Although I’ve regularly updated this blog with new incidents revealed on HHS’s web site, it might be useful to look at some statistics for the first year’s […]

 

Fair Warning: I haven't read this report, but…

@pogowasright pointed to “HOW many patient privacy breaches per month?:” As regular readers know, I tend to avoid blogging about commercial products and am leery about reporting results from studies that might be self-serving, but a new paper from FairWarning has some data that I think are worth mentioning here. In their report, they provide […]

 
 

Dating and InfoSec

So if you don’t follow the folks over at OKCupid, you are missing out on some hot data. In case you’re not aware of it, OKCupid is: the best dating site on earth. Compiling our observations and statistics from the hundreds of millions of user interactions we’ve logged, we use this outlet to explore the […]

 

Alex on Science and Risk Management

Alex Hutton has an excellent post on his work blog: Jim Tiller of British Telecom has published a blog post called “Risk Appetite, Counting Security Calories Won’t Help”. I’d like to discuss Jim’s blog post because I think it shows a difference in perspectives between our organizations. I’d also like to counter a few of […]

 

Getting the time dimension right

If you are developing or using security metrics, it’s inevitable that you’ll have to deal with the dimension of time. “Data” tells you about the past. “Security” is a judgement about the present. “Risk” is a cost of the future, brought to the present. The way to marry these three is through social learning processes.
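
As a minimal sketch of what “a cost of the future, brought to the present” can mean in practice, ordinary discounting will do; the loss size, probability, and discount rate below are invented for illustration.

# Hypothetical figures: a 10% chance of a $500,000 loss three years out,
# discounted at 5% per year to state the risk as a present-day cost.
probability = 0.10
loss = 500_000
years = 3
discount_rate = 0.05

expected_loss = probability * loss
present_value = expected_loss / (1 + discount_rate) ** years
print(f"Present value of the risk: ${present_value:,.0f}")   # about $43,192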

 

Source, Data or Methodology: Pick at least one

In the “things you don’t want said of your work” department, Ars Technica finds these gems in a GAO report: This estimate was contained in a 2002 FBI press release, but FBI officials told us that it has no record of source data or methodology for generating the estimate and that it cannot be corroborated…when […]

 

Friday Visualization: Wal-mart edition

I’ve seen some cool Walmart visualizations before, and this one at FlowingData is no exception. The one thing I wondered about as I watched was if it captured store closings–despite the seemingly inevitable march in the visualization, there have been more than a few.

 

On Uncertain Security

One of the reasons I like climate studies is because the world of the climate scientist is not dissimilar to ours.  Their data is fraught with uncertainty, it has gaps, and it might be kind of important (regardless of your stance on anthropogenic global warming, I think we can all agree that when the climate […]

 

Data void: False Positives

A Gartner blog post points out the lack of data reported by vendors or customers regarding the false positive rates for anti-spam solutions. This is part of a general problem in the security industry that is a major obstacle to rational analysis of effectiveness, cost-effectiveness, risk, and the rest.

 

Symantec State of Security 2010 Report Out

http://www.symantec.com/content/en/us/about/presskits/SES_report_Feb2010.pdf Thanks to big yellow for not making us register!  Oh, and Adam thanks you for not using pie charts…

 

The Visual Display of Quantitative Information

In Verizon’s post, “A Comparison of [Verizon’s] DBIR with UK breach report,” we see: Quick: which is larger, the grey slice on top, or the grey slice on the bottom? And ought grey be used for “sophisticated” or “moderate”? I’m confident that both organizations are focused on accurate reporting. I am optimistic that this small […]

 

Does It Matter If The APT Is "New"?

As best as I can describe the characteristics of the threat agents that would fit the label of APT, that threat community is very, very real.  It’s been around forever (someone mentioned first use of the term being 1993 or something) – we dealt with threat agents you would describe as “APT” at MicroSolved when […]

 

NotObvious On Heartland

I posted this also to the securitymetrics.org mailing list.  Sorry if discussing in multiple venues ticks you off. The Not Obvious blog has an interesting write-up on the Heartland breach and its impact.  From the blog post: “Heartland has had to pay other fines to Visa and MasterCard, but the total of $12.6 million they […]

 

Chris Soghoian’s Surveillance Metrics

I also posted about this on Emergent Chaos, but since our readership doesn’t fully overlap, I’m commenting on it here as well. Chris Soghoian has just posted some of his new research into government electronic surveillance here in the US. The numbers are truly astounding (Sprint for instance provided geo-location data on customers eight million […]

 

For Those Not In The US (or even if you are)

I’d like to wish US readers a happy Thanksgiving. For those outside of the US, I thought this would be a nice little post for today: A pointer to an article in the Financial Times, “Baseball’s love of statistics is taking over football“ Those who indulge my passion for analysis and for sport know that […]

 

Rational Ignorance: The Users' view of security

Cormac Herley at Microsoft Research has done us all a favor and released a paper So Long, And No Thanks for the Externalities:  The Rational Rejection of Security Advice by Users which opens its abstract with: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They chose weak passwords, ignore […]
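
Herley’s argument is, at bottom, a cost-benefit calculation. Here is a toy version with invented numbers (not figures from the paper): multiply a small daily effort across an entire user population and compare it to the losses the advice could plausibly prevent.

# Invented numbers, not from Herley's paper.
users = 180_000_000              # assumed online population
seconds_per_day = 10             # daily effort the advice demands
dollars_per_second = 15 / 3600   # value of user time at $15/hour

annual_effort_cost = users * seconds_per_day * 365 * dollars_per_second
annual_loss_prevented = 500_000_000   # assumed upper bound on averted losses

print(f"Cost of user effort: ${annual_effort_cost:,.0f}")    # ~$2.7 billion
print(f"Loss prevented:      ${annual_loss_prevented:,.0f}")
# When the effort cost dwarfs the loss prevented, rejecting the advice is rational.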

 

How to Value Digital Assets (Web Sites, etc.)

If you need to do financial justification or economic analysis for information security, especially risk analysis, then you need to value digital assets to some degree of precision and accuracy. There is no universally applicable and acceptable method. This article presents a method that will assist line-of-business managers to make economically rational decisions consistent with overall enterprise goals and values.

 

Botnet Research

Rob Lemos has a new article up on the MIT Technology Review, about some researchers from UC Santa Barbara who spent several months studying the Mebroot Botnet. They found some fascinating stuff and I’m looking forward to reading the paper when it’s finally published. While the vast majority of infected machines were Windows based (64% […]

 

Making Sense of the SANS "Top Cyber Security Risks" Report

The SANS Top Cyber Security Risks report has received a lot of positive publicity. I applaud the effort and goals of the study and it may have some useful conclusions. We should have more of this. Unfortunately, the report has some major problems. The main conclusions may be valid but the supporting analysis is either confusing or weak. It would also be good if this study could be extended by adding data from other vendors and service providers.

 

12 Tips for Designing an InfoSec Risk Scorecard (it's harder than it looks)

An “InfoSec risk scorecard” attempts to include all the factors that drive information security risk – threats, vulnerabilities, controls, mitigations, assets, etc. But for the sake of simplicity, InfoSec risk scorecards don’t include any probabilistic models, causal models, or the like. A scorecard can only roughly approximate risk under simplifying assumptions, which leaves the designer open to all sorts of problems. Here are 12 tips that can help you navigate these difficulties. It’s harder than it looks.
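
To make the simplification concrete, here is a minimal sketch of the kind of weighted-sum scorecard the tips are aimed at; the factor names, scores, and weights are invented for illustration, not a recommended model.

# Hypothetical factors and weights -- a weighted sum like this is exactly
# the rough approximation that needs careful design to stay honest.
factors = {                      # each scored 0-10, higher = worse
    "threat_activity": 7,
    "known_vulnerabilities": 5,
    "control_gaps": 4,
    "asset_criticality": 8,
}
weights = {
    "threat_activity": 0.3,
    "known_vulnerabilities": 0.2,
    "control_gaps": 0.2,
    "asset_criticality": 0.3,
}
risk_score = sum(factors[k] * weights[k] for k in factors)
print(f"Scorecard risk: {risk_score:.1f} / 10")   # 6.3 with these numbers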

 

Statistics Police?!

From Gelman’s blog: U.K. Sheriff Cites Officials for Serious Statistical Violations I don’t know if we need an “office” of information assurance in the government sector, but it would be nice to have some penalty on the books for folks who abuse basic common sense statistical principles. Of course, the *real* answer lies in education […]

 

TAKE PART IN PROJECT QUANT (please)!

Hey everyone.  I wanted to let you know that Rich, Adrian & Co. at Securosis are spearheading a research project called “Quant”.  They currently have a survey up on SurveyMonkey about Patch Management that they’d like participation in.  If you can, please give a thoughtful contribution to the survey. http://www.surveymonkey.com/s.aspx?sm=SjehgbiAl3mR_2b1gauMibQw_3d_3d There’s something about a registration […]

 

Comment pointer

Mike Cook, author of the ID Analytics report referred to in a recent Breach Tidbit post, has responded in the comments.