Best Practices for the Lulz

The New School blog will shortly be publishing a stunning exposé of Anonymous, and before we do, we’re looking for security advice we should follow to ensure our cloud-hosted blog platform isn’t pwned out the wazoo. So, where’s the checklist of all the best practices we should be following?

What’s that you say? There isn’t a checklist? Then how are we supposed to follow advice like this:

So there are clearly two lessons to be learned here. The first is that the standard advice is good advice. If all best practices had been followed then none of this would have happened. Even if the SQL injection error was still present, it wouldn’t have caused the cascade of failures that followed.

So please, if you’re going to advocate for best practice security, please provide a list so we can test what you say. Otherwise, I worry that someone, somewhere will have declared something else a best practice, and your hindsight will be 20/20.
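(As a concrete footnote to the quote above: the “standard advice” for the SQL injection it mentions is usually “use parameterized queries.” Here’s a minimal sketch of what that advice looks like in practice. It’s purely illustrative, not the code that was actually attacked; it assumes Python’s standard sqlite3 module and made-up table and column names.)

```python
# Illustrative only: the vulnerable code isn't public, so the table,
# column, and variable names here are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "alice' OR '1'='1"

# Injectable: attacker-controlled input is spliced into the SQL text.
rows_bad = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_supplied + "'"
).fetchall()

# The "standard advice": bind the value as a parameter instead.
rows_good = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_supplied,)
).fetchall()

print(rows_bad)   # the OR '1'='1' clause matches every row
print(rows_good)  # no row has that literal name, so nothing comes back
```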

Incidentally, the opening sentence is a lie. Attacking this blog is probably like kicking stoned puppies. Even though we do try to ensure it’s up to date with patches and use strong passwords, we selected the blog hosting company on a diverse set of criteria which included cost effectiveness for our hobby blog.


Infosec's Flu

In “Close Look at a Flu Outbreak Upends Some Common Wisdom,” Nicholas Bakalar writes:

If you or your child came down with influenza during the H1N1, or swine flu, outbreak in 2009, it may not have happened the way you thought it did.

A new study of a 2009 epidemic at a school in Pennsylvania has found that children most likely did not catch it by sitting near an infected classmate, and that adults who got sick were probably not infected by their own children.

Closing the school after the epidemic was under way did little to slow the rate of transmission, the study found, and the most common way the disease spread was through a child’s network of friends.

The work he discusses is “Role of social networks in shaping disease transmission during a community outbreak of 2009 H1N1 pandemic influenza” by Simon Cauchemez, Achuyt Bhattarai, Tiffany L. Marchbanks, Ryan P. Fagan, Stephen Ostroff, Neil M. Ferguson, David Swerdlow, and the Pennsylvania H1N1 working group.

The first thing that comes to mind is that closing schools is a best practice. It’s something that makes so much sense that it’s hard to argue against, even if it does no good. The next thing is to look at what happens when researchers have data available to them. They can study their prescriptions and test to see if they did any good. But note how detailed the data is: social graphs, seating charts. This isn’t something we would obviously get from more detailed breach notices. It’s going to require in-depth investigations, and investigators who talk about their methods. VERIS is a step in this direction, and I’m looking forward to seeing critiques or even competitors that can help us move forward and learn.

But the data we have is the data we have, and while we work to get more, there’s a good deal that we can probably learn from what’s out there. We just have to be willing to ask if our practices really work.

Referencing Insiders is a Best Practice

You might argue that insiders are dangerous. They’re dangerous because they’re authorized to do things, and so monitoring throws up a great many false positives, and raises privacy concerns. (As if anyone cared about those.) And everyone in information security loves to point to insiders as the ultimate threat.

I’m tempted to claim this as a nail in the coffin for the insider as the most important threat vector, but of late, I’ve decided that the insider is a near-unkillable boogeyman, and so ‘nails in the coffin’ is the wrong metaphor. Really, this just indicates that references to insiders are a best practice, and we can’t kill them. We can, however, treat those references as an indicator that the person speaking is probably not an empiricist, and discount appropriately.

CRISC – The Bottom Line (oh yeah, Happy New Year!)

No doubt my “Why I Don’t Like CRISC” blog post has created a ton of traffic and comments.  Unfortunately, I’m not a very good writer because the majority of readers miss the point.  Let me try again more succinctly:

Just because you can codify a standard or practice doesn’t mean that this practice is sane. There’s plenty of documentation around homeopathy, astrology, biorhythms, and other pseudosciences, but that doesn’t make them any more real.

In other words, just being able to reference a document for repeatability does not make the outcome of those acts real or valid. Almost everyone in that thread has focused on our industry’s ability to create documentation, not on the fundamental problems of creating a defensible method for risk expression.

This is why our standards blow. And yes, I’m going to expand my focus beyond CRISC/Risk IT and include the 800 series from NIST (including the new releases), the ISO 27005/31000 documents, and many others. They are all very heavy on repeating the same idea that risk management is some OODA/PDCA-type cycle with its attendant bureaucratic processes, and very thin on the actual establishment of useful risk statements. Look, your P/D/C/A policies and procedures only need to be a few pages, and you certainly don’t need the time, expense, and hassle of certification. Spending the time and effort to tailor a several-hundred-page document to fit your organizational culture, and to get people all certifiable on the subject, is just a rabbit trail of waste.

I mean, as weird as OSSTMM is – at least Pete has done a really good job of trying to provide metrics and derivative values of meaning that are repeatable.

Lessons from HHS Breach Data

PHIPrivacy asks “do the HHS breach reports offer any surprises?”

It’s now been a full year since the new breach reporting requirements went into effect for HIPAA-covered entities. Although I’ve regularly updated this blog with new incidents revealed on HHS’s web site, it might be useful to look at some statistics for the first year’s worth of reports.

I’ll add that the HHS web site “Breaches Affecting 500 or More Individuals” offers data about 181 breaches in CSV and XML formats.

But Dissent asks what we can learn. Two things strike me immediately. First, 181 breaches, and no one out of business. Perhaps not a surprise, but many people seem to need the reminder, since the bad meme has been around so long. Second, and also in the bad-meme category, let’s look at insiders. There were 10 incidents (6% of all incidents involving 500 or more people), and they impacted 50,491 people (1% of all people affected). We sometimes hear that incidents involving insiders are the most damaging or impactful. Yet the unauthorized access incidents (a separate category from hacking) had a lower mean number of people impacted than hacking, improper disposal, loss, theft, business associates, laptops, desktop computers, portable electronic devices, or network servers. In fact, the only categories which impacted fewer people were “theft, unauthorized access” and “paper records.”

Now, it’s true that “unauthorized access” is not the same thing as insiders; the category likely includes both insiders and access control failures (the “spreadsheet on a website” pattern). It’s also true that there were quite damaging incidents involving fewer than 500 people (the “peeking” pattern). It’s even possible that those were the worst incidents. But we have no evidence for that claim. Still.
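If you want to check or extend these tallies yourself, here’s a rough sketch of the arithmetic against the HHS CSV export. It’s a sketch under assumptions: the column headers (“Type of Breach,” “Individuals Affected”) and the file name are my guesses at the export’s layout and may need adjusting to match the actual file.

```python
# Sketch of a per-category tally; column names and file name are
# assumptions about the HHS "Breaches Affecting 500 or More Individuals"
# CSV export, not verified against it.
import csv
from collections import defaultdict

counts = defaultdict(int)
people = defaultdict(int)

with open("hhs_breaches.csv", newline="") as f:
    for row in csv.DictReader(f):
        breach_type = row["Type of Breach"].strip()
        affected = int(row["Individuals Affected"].replace(",", "") or 0)
        counts[breach_type] += 1
        people[breach_type] += affected

total_incidents = sum(counts.values())
total_people = sum(people.values())

# Sort categories by mean number of people impacted per incident.
for bt in sorted(counts, key=lambda t: people[t] / counts[t]):
    print(f"{bt}: {counts[bt]} incidents "
          f"({100 * counts[bt] / total_incidents:.0f}% of incidents), "
          f"{people[bt]:,} people "
          f"({100 * people[bt] / total_people:.0f}% of people), "
          f"mean {people[bt] / counts[bt]:,.0f} per incident")
```

The 6% and 1% figures above would correspond to the unauthorized-access rows in that kind of output; the mean column is what lets you compare categories of very different sizes.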

But the biggest, most important lesson is that Dissent can ask not “what did HHS learn from this,” but rather, “What can we learn from this?”