Last week at RSA, I was talking to some folks who have reasons to deeply understand a big and publicly discussed breach. I asked them why we didn’t know more about the breach, given that they’d been fairly publicly named and shamed. The story seems to be that after the initial (legal-department-driven) clampdown on talking, they started briefing various organizations under NDA about what had happened. I want to share what I learned without naming and shaming the organization in question, because I think that’s counter-productive.
Along the way, one of the servers used by their attackers went away. Claims were made that the attackers learned the IP address was under investigation, and that this caused the server to go away. (There are either leaps of logic that would make Baryshnikov proud, or some intel. My suspicion leans towards the former, since the latter would make the server going away less interesting.) When the server went away, elements of the US government freaked out and told the organization to stop talking.
How’s that secrecy working out for us?
I think the answer is poorly. If we can't share information about breaches, then scaling their attacks is nearly free for the attackers. Something as simple as a domain or an IP address gets shared in carefully controlled phone calls, and then people get upset and forbid additional information sharing.
This is apparently the reality of the “information sharing” programs that are at the heart of America’s cyber-security strategies. A failure of imagination combines with a fear of being Manning’d to result in the most trivial of trickles of information being shut down.
Again I ask, how’s that secrecy working out for us?
Well, we get RSA sessions like “Stress and Burnout in the InfoSec Community.” We get things like the CISO of a very large company, who told me “I almost want to slap Anonymous, just so they hack us and get it over with.” (Actually, what he said was more colorful, and somewhat identifying of his employer, so the quote is anonymized.)
The why is because we (writ broadly) prevent ourselves from learning. We intentionally block feedback loops from developing.
Now, it may be that there was real intelligence gold on that server. Maybe we'd have evidence that the Chinese government is funding attacks on the American government. But we already have that. Everyone believes it. Is there evidence so important, or defensive knowledge so valuable, that it's worth preventing information sharing?
Obviously not. Because once gathered, it can’t be shared.
Let me add an analogy. In deciding how to use the Ultra information from the Enigma breaks, Churchill focused on ensuring that there was an alternate means to get the information. Those truths were so valuable that they were surrounded by a bodyguard of lies, and sometimes that makes sense. But if we treat all information as Ultra, then our commanders can't use it to do their jobs, and gathering it does us no good.
So I’ll leave you with one last question: how’s that secrecy working out for us?