Cures versus Treatment

A relevant tale of medical survival over at The Reality-Based Community:

Three years ago a 39-year-old American man arrived at the haematology clinic of Berlin’s sprawling Charité hospital. (The venerable Charité, one of the great names in the history of medicine, used to be in East Berlin, but it’s now the brand for the merged university hospitals of the whole city.) He had both leukaemia and HIV; you wouldn’t have given much for his chances. Now he has neither. How?

The great cancer researcher and medical writer Lewis Thomas wrote this in 1983 (The Youngest Science, endnote to page 175). The context is his stint as an adviser on health policy in Lyndon Johnson’s White House in 1967:

We recognised three levels of medical technology:

(1) genuine high technology, exemplified by Salk and Sabin poliomyelitis vaccines, which simply eliminated a major disease at very low cost by providing protection against the three strains of virus known to exist;

(2) “halfway” technology, applied to the management of disease when the underlying mechanism is not understood and when medicine is obliged to do whatever it can to shore things up and postpone incapacitation and death, at whatever cost, usually very high indeed, illustrated by open-heart surgery, coronary artery bypass, and the replacement of damaged organs by transplanting new ones…

and (3) nontechnology, the kind of things doctors do when there is nothing at all to be done, as in the case of patients with advanced cancer and senile dementia.

We suggested that the rising cost of health care was resulting from efforts to treat diseases of the halfway or nontechnology class, and recommended that basic research on these ailments be sponsored by NIH.

Thomas’ analysis still looks spot on to me. But his optimism has so far not proved justified: the billions poured into medical research ever since have led to many improved treatments but disappointingly few cures. The ideal state for Big Pharma is represented by the state of the art on diabetes and HIV: costly lifelong treatments. For most lethal conditions, we don’t have even that.

Information Security also seems to be stuck in the “halfway” technology mode. We treat the symptoms, patching and deploying security products to prolong survival, but as yet there is no cure.

In most organizations, it’s even worse. Lack of basic knowledge and awareness, lack of funding, and misplaced risk tolerance produce more nontechnology, such as Business Acceptance of Risk forms and security dashboards where vanity metrics provide CYA.

Even when we know what the Right Things to reduce a risk are, whether turning off the TV, eating right and getting some exercise, or removing admin rights and keeping crapware off the machines, we as a society and as companies all too rarely seem to have the will to make it happen.

To twist a line from Dean Wormer, “Fat, dumb and pwned is no way to go through life, son.”

I'm OK When The System Works – Even If It Is A False Alarm


UPDATE:  @lbhuston gives us the dirty low down here:


This was a test of the emergency broadcast system. This was only a test. Had this been a real change in the Threat Landscape…

You may have read in various media outlets about a little incident that happened yesterday concerning the mailing of a CD full of malware to a credit union.

Before we go any further, the following caveats totally apply: I’m pretty close with several of the actors in this “incident”. In fact, had this been a few years ago, there’s a good chance that I would have been the guy responsible for building the forgery and burning the CD. So my biases are apparent. And I have purposefully not talked directly with any of the parties (MicroSolved, the credit union, NCUA, SANS, ThreatPost, etc.) before sharing with you my impressions and what I take away from yesterday.

So yesterday, there was an alarm raised about a “new” form of attack, purportedly against “banks” or even “the financial infrastructure” if you believed at the time what you saw on everything from national media websites to Twitter.  What has been revealed so far to have really happened was this:

A credit union received a mailed CD and letter that looked like they were from the NCUA (the gov’t body in charge of CU regulation and governance), claiming to be training materials to be viewed on a PC. But the credit union recognized this as a forgery and escalated the matter. Somehow, this single attack had turned into multiple attacks on “banks” by the time it hit “big” media. An alarm went out, and by early afternoon any credit union security admin who could fog a mirror knew that there might be something aimed at them.

Except that it was really just part of a contracted, valid penetration test by the security firm MicroSolved.  So really, it was a false alarm.

I would just like to state that I think:


Real quickly, let’s get this out of the way. For there to be a false alarm, several things must have failed yesterday. Having worked for MicroSolved, I can tell you that the paperwork we developed there for scoping is pretty durn good. When I worked there we set bounds, described who needed to know and who didn’t, gave expectations as to attack type, general time frames to expect it, and so forth. The scoping and execution processes were always phenomenal (pats self on back). But as an outsider looking at it now:

There May Have Been A Problem With The Penetration Test Scope.

This could have come from one of two sources, MicroSolved, or the CU. MicroSolved could have stepped out of bounds with scoping, or somehow unwittingly created an exception to a tight scoping process. Alternately, the CU themselves, as they were going through the scoping process, might have left out a key player on their side who is part of the fraud reporting process. I say that because:

There May Have Been A Problem With The Credit Union’s Internal Processes.

For the mailing to get to the NCUA as an actual incident, it would mean that either the credit union had poor fraud reporting processes, or someone at the CU probably didn’t follow procedure and reported to the NCUA out of process.

There May Have Been A Problem With The NCUA Alarm Process.

We don’t know what caused a pentest to be reported as an actual attack, but once the ball was handed to the NCUA, there should probably have been a verification process in place (I say this having talked last night – well, laughed is more like it – with ex-NCUA InfoSec friends). I’m guessing that there may have been a failure here as well.

There May Have Been A Problem With The General Reporting Process.

By the time it hit SANS, SC Magazine, ThreatPost, Slashdot, The Washington Post, etc., the incident had grown from one credit union to pretty much the imminent collapse of the financial infrastructure of western civilization. Again: verification and fact-checking. How it went from one CU to multiple, or even across the stream into banks, is not known.

With That Out Of The Way…

But if we look at what happened, and the time frame in which it all happened, we can see a lot of success:

  • MicroSolved did the right thing in executing a feasible, clever attack.
  • The Credit Union did the right thing in recognizing the attack and reporting it (even if out of process – believe me, as a veteran of Credit Union SE attacks – they could have not caught the attack or even just thrown the material away and not reported it).
  • The NCUA did the right thing and got the word out.
  • The Press/Media/Alerting System did the right thing and raised the alarm.
  • Even we, the Security Professionals via phone to friends and via Twitter, did the right thing as a group and put the notice out.

So rather than playing the cynic and saying the system failed because a false alarm got out, I think we can say:

We Did A Pretty Good Job*

Now of course, the asterisks in both positive statements above should suggest to you, my wise reader, that I know repeated false alarms are a bad thing. And there are certainly lessons to be learned here. But by and large, the system of alarm worked. We did all right. And we did much better than the alternative of not sharing information about a perceived critical change in the threat landscape.

Bottom line, we shared information – and that’s pretty durn NewSchool if you ask me.

Visualization Friday – Back From Hiatus

Hey all, sorry it’s been so long since I put up some eye candy. Today’s posts come from the usual sources (Flowing Data and various other information design blogs), but I also wanted to point you to a new source of cool:

So without further ado, your Visualization Friday posts (some pertinent to the display of information security metrics and knowledge, some just downright fun & cool).




Ok, so not a visualization, but information expression architect (my made-up title) Stephen Few shares some thoughts on visualization and real-world examples of impact:

Seriously, if you’re going to develop and present dashboards or PowerPoints about security metrics to any audience, but *especially* decision makers, I’d get into Stephen Few.


Did you know about the Schengen Wall?    Here’s a great map-based visualization about it found off of



The Total Eclipse of the Heart Flowchart.

Perfecter than Perfect

So I’m having a conversation with a friend about caller ID blocking. And it occurs to me that my old phone with AT&T, before Cingular bought them, had this nifty feature, “show my caller-ID to people in my phone book.”

Unfortunately, my current phone doesn’t have that, because Steve Jobs has declared that “Apple’s goal is to provide our customers with the best possible user experience. We have been able to do this by designing the hardware and software in our products to work together seamlessly.” Setting aside Michael Arrington’s excellent deconstruction, there’s a little feature, easy to implement if you have access to the call setup function (dial with a prepended *67). But we don’t have the ability to do that. Because that would be better than the best possible experience, and obviously, that’s not possible.
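The logic the walled garden forbids really is trivial. A minimal sketch, assuming a set of phone-book numbers and that prepending the North American per-call blocking code *67 to a dialed number suppresses caller ID (the numbers below are made up for illustration):

```python
# "Show my caller-ID to people in my phone book": prepend the
# per-call caller-ID blocking prefix *67 for everyone else.

PHONE_BOOK = {"+15551234567", "+15559876543"}  # hypothetical contacts

def dial_string(number: str) -> str:
    """Return the string to actually dial, hiding caller ID from strangers."""
    if number in PHONE_BOOK:
        return number           # a contact: send caller ID as usual
    return "*67" + number       # a stranger: suppress caller ID

print(dial_string("+15551234567"))  # → +15551234567
print(dial_string("+15550000000"))  # → *67+15550000000
```

A dozen lines, if you’re allowed to sit between the dialer and call setup, which is exactly the access the platform withholds.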

What Are People Willing to Pay for Privacy?

So I was thinking about the question of the value of privacy, and it occurred to me that there may be an interesting natural experiment we can observe: national security clearances in the US. For this post, I’ll assume that security clearances work for their primary purpose, which is to keep foreign intelligence agents out of sensitive jobs. But articles like this indicate that a clearance is worth a $5,000–15,000 salary premium.

Part of the premium is that getting a clearance for an employee is slow and expensive; as this GovCentral article says, “…it can take noncleared employees between six months and two years to receive a new clearance — an unacceptable time frame for many organizations that have significant contracts to deliver in the near term. In addition, the clearance process often is very expensive.”

But even with that issue, has the number of jobs requiring a clearance gone up so quickly as to create that degree of salary imbalance? At some point, the number of cleared people should catch up with the surge in government employment. At that point, the difference between a cleared and uncleared employee comes down to (1) the cost of getting a clearance and (2) the market impact of having your life examined and judged by strangers.

Is that $1,000 a year for being unable to select the strangers?
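The decomposition above can be sketched as back-of-envelope arithmetic. Every dollar figure here is an assumption for illustration, not sourced data: a $5,000/year premium at the low end, a one-time clearance cost of $20,000, amortized over a five-year cleared tenure.

```python
# Back-of-envelope: how much of the clearance salary premium is the
# "privacy price" of having your life examined by strangers?
# All figures are illustrative assumptions, not sourced data.

def privacy_premium(annual_premium: float,
                    clearance_cost: float,
                    tenure_years: float) -> float:
    """Annual premium left over after amortizing the one-time cost
    of obtaining the clearance over the employee's cleared tenure."""
    amortized_cost = clearance_cost / tenure_years
    return annual_premium - amortized_cost

# $5,000/year premium, $20,000 one-time clearance cost, 5-year tenure:
print(privacy_premium(5_000, 20_000, 5))  # → 1000.0
```

Under those assumed numbers, roughly $1,000 a year is left over once the process cost is amortized away, which is the residual the question above is pointing at.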