Attackers, Disclosure and Expectations
In both military and information security situations, the attacker's position is very powerful. An attacker can choose when, where, and how to attack. Attackers are not constrained by change management committees, operational risk, or a need to make economic tradeoffs within a budget. [1] Attackers don't need to weigh an attack against other work that needs doing. The attacker can set and reset their operational tempo, from very, very slow scanning and reconnaissance to very fast penetration and subversion.
If you believe people who break into computers, these advantages add up to the attacker usually being able to break into any system they target. Skilled attackers will tell you there's no system they can't break into. (Skilled, high-budget defenders have told me the same thing: when they hire really high-quality red teams and tell them to go all out, the red teams win.)
Most people outside the computer security world don't know that; they've been told things like "We follow industry-standard best practices to protect your information." A fair number of people in the computer security industry don't like to admit to the advantages the attacker has. When you're working hard to protect your employer or customers, it really stinks to know you're offering only a partial solution, or that there are problems you know about but can't fix for good reasons.[2]
With a new set of rules emerging around disclosure [3], such as California's SB 1386, these problems are being revealed to the public. (The PIPEDA blog has a great roundup of incidents.) In a bit of a stretch, the Sarbanes-Oxley Act may also implicitly require such disclosure, since a security breach may indicate an inadequacy of controls under section 404. An SB 1386-style law may be passed at the national level in the wake of recent incidents. Today, the market doesn't know how to react to such things, and sharply penalizes companies for disclosures or for new incidents.
Over the next few years, the stock market will start to factor in these disclosures. There will be too many breaches reported for the smart money to ignore them. Consumers, however, will be slower to understand that these breaches are routine, and may well switch suppliers when they can. The Ponemon Institute study showed that as many as 80% of consumers would switch airlines after a breach of confidentiality.[4]
So, where does this leave a company trying to make good security decisions? Since this post is already too long, I’ll point to More on SSNs and Risk, which I wrote in December after the Delta Blood Bank disclosure. I’ll try to write more on this topic over the next few days.
Footnotes:
1. This is a simplification: attackers are concerned about one kind of operational risk, that of being arrested and charged with crimes. Some are also now employed, and focused on the highest financial return on their activities.
2. Good reasons to leave vulnerabilities open include usability (I don't want to have special software installed to purchase items online), economics (the expected cost of an attack is smaller than the cost of a defense; see the sketch after these notes), or the unavailability of defenses (what do you do about a 600 Mb/s denial of service flood?).
3. Here I'm using disclosure to mean that a company needs to inform the public, or members of the public, of a breach of the company's security, not to refer to the vulnerability disclosure debate.
4. The Ponemon Institute study is admittedly a tad confusing; we only see data for consumers switching airlines where they've also said that they trust the airline.
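To make the economics in footnote 2 concrete, here's a minimal sketch of that expected-cost comparison in Python. The function names, dollar figures, and incident frequencies are all hypothetical, made up for illustration; this isn't a real risk model, just the arithmetic behind "the expected cost of an attack is smaller than the cost of a defense."

```python
# A sketch of the tradeoff in footnote 2: defend only when the defense
# costs less than the expected loss it prevents. All numbers below are
# hypothetical, purely for illustration.

def expected_annual_loss(incidents_per_year: float, loss_per_incident: float) -> float:
    """Annualized loss expectancy: frequency times impact."""
    return incidents_per_year * loss_per_incident

def defense_is_worthwhile(ale_before: float, ale_after: float, defense_cost: float) -> bool:
    """A defense pays for itself only if it reduces expected loss
    by more than it costs to deploy and operate each year."""
    return (ale_before - ale_after) > defense_cost

# Hypothetical flaw: exploited once every four years, $200,000 per incident.
ale_open = expected_annual_loss(0.25, 200_000)    # $50,000/year if left open
ale_fixed = expected_annual_loss(0.05, 200_000)   # $10,000/year after the fix

print(defense_is_worthwhile(ale_open, ale_fixed, 30_000))  # True: $40k saved > $30k spent
print(defense_is_worthwhile(ale_open, ale_fixed, 75_000))  # False: the fix costs more than it saves
```

With the cheaper defense, the math says fix it; with the expensive one, leaving the hole open is the rational (if uncomfortable) choice, which is exactly the kind of good reason footnote 2 describes.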