The New Transparency Imperative
…in the incident last September, somewhat similar to recent problems at the Veterans Affairs Department, senior officials were informed only two days ago, officials told a congressional hearing Friday. None of the victims was notified, they said.
…
“That’s hogwash,” Rep. Joe Barton, chairman of the Energy and Commerce Committee, told Brooks. “You report directly to the secretary. You meet with him or the deputy every day. … You had a major breach of your own security and yet you didn’t inform the secretary.” (From Associated Press, “DOE computers hacked; info on 1,500 taken.”)
It used to be that security breaches were closely held secrets. Thanks to new laws, that’s no longer possible. We now have some visibility into how bad the state of computer security is, and into the consequences of those problems. For the first time, there’s evidence I can point to when I explain why I tremble with fear at phrases like “we use industry-standard practices to protect data about you.”
The new laws are not yet well understood. They’re not well understood by computer security professionals, and they’re certainly not yet the basis for a body of case law that establishes the meaning of key terms like encryption. (I expect that juries will frown on using rot-13 to encrypt secrets, even if it might be within the letter of the law.) The only people who do understand them are the public, who expect to hear when they’re at risk.
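To make the rot-13 point concrete, here’s a minimal Python sketch (the record is made-up example data): rot-13 is its own inverse, so “decrypting” takes one line and no key at all, which is exactly the sort of thing I’d expect a jury to notice.

```python
import codecs

# A record "protected" with rot-13: no key is involved at all.
record = {"name": "Alice Example", "ssn": "078-05-1120"}
scrambled = {k: codecs.encode(v, "rot_13") for k, v in record.items()}

# Applying rot-13 again recovers the plaintext; it is its own inverse.
recovered = {k: codecs.encode(v, "rot_13") for k, v in scrambled.items()}
assert recovered == record
print(scrambled)   # looks obscured to the eye
print(recovered)   # but anyone holding the data reverses it instantly
```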
The change in expectations will have exceptionally beneficial long-term effects. We will get data we can use to measure aspects of computer security, such as what real attack vectors look like. (We may not learn about insiders or super-hackers.) With that data, we can focus our efforts on putting better security measures in place.
Requiring companies to own up to problems will drive them to ask their vendors for better software. They will ask the experts how to distinguish good software from bad. This may have the effects that some experts hope liability would bring about.
There will be a lot of short-term pain as we discover the shape of the new normal. The transition is well worth it.
The image is X Ray 4, by Chris Harve, on StockXpert.
“Requiring companies to own up to problems will drive them to ask their vendors for better software. They will ask the experts how to distinguish good software from bad. This may have the effects that some experts hope liability would bring about.”
If these requirements had been in place in 1998-1999, do you think they would have killed NT4 and helped BSD? Or do you think that bad software with a seemingly insurmountable installed base will somehow be justified as “good enough” software?
Furthermore, for custom software, how do you propose auditing the code base to determine what is good and what isn’t? For example, for a government web application, is it enough to run AppScan (or Sandcat, WooHoo!), or should less-than-efficient documentation processes be required to prove diligence in integrating data risk management into the SDLC?
“I expect that juries will frown on using rot-13 to encrypt secrets, even if it might be within the letter of the law.”
I actually think that particular sort of scenario isn’t going to happen. It’s unlikely anyone will be caught using a weak crypto algorithm, because they wouldn’t gain anything: these days, good crypto is so readily available that it’s no harder to use a good algorithm than a bad one. What will happen, though, is that some organizations will choose poor ways of handling the crypto keys. I foresee a breach in the future where someone steals a laptop, and both the encrypted data and the encryption key are on it.
David, I don’t agree that it’s not going to happen. I’ve talked to people who were seriously considering it for performance reasons.
I disagree that it is so easy to have strong crypto that it will be unavoidable. I agree that key management is a bigger problem. Right now, the majority of the states’ disclosure laws neither define “encrypted” nor remove safe harbor if both ciphertext and key are stolen.
State by state details at http://www.cwalsh.org/cgi-bin/blosxom.cgi/2006/05/24#breachlaws
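To make the key-management failure concrete, here’s a minimal sketch, assuming Python with the third-party cryptography package; the file names and data are hypothetical. The customer file is encrypted, but because the key sits on the same disk, whoever steals the laptop can decrypt it in three lines.

```python
# Stolen-laptop scenario: encrypted data, with the key stored right
# beside it. Requires the third-party "cryptography" package.
from pathlib import Path
from cryptography.fernet import Fernet

laptop = Path("laptop-demo")          # stand-in for the laptop's disk
laptop.mkdir(exist_ok=True)

# The firm "encrypts the customer file" and writes the key next to it.
key = Fernet.generate_key()
(laptop / "customers.key").write_bytes(key)
(laptop / "customers.enc").write_bytes(
    Fernet(key).encrypt(b"Alice Example, 4111 1111 1111 1111"))

# The thief's job: read both files and decrypt.
stolen_key = (laptop / "customers.key").read_bytes()
stolen_data = (laptop / "customers.enc").read_bytes()
print(Fernet(stolen_key).decrypt(stolen_data).decode())
```

And as the state-by-state notes above show, most of the current laws would still treat that data as “encrypted.”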
For a sufficiently callous firm, the cost to notify could be close to zero: as long as enough customers get hit, or your records are terrible, you simply put up a web page. The reputational hit (or lack of one) depends mostly on how stupid or incompetent you were in letting the data leave in the first place, not on how you disclosed, so why not spend as little as possible on disclosure?
Along with the “new normal”, I think we may see the emergence of this as a more popular tactic.
I’d need to reread the laws more closely, but consider also an SQL injection attack, where the back-end DB is encrypted. If the result set is decrypted by the exploited web app, but the stored data were encrypted, does the duty to notify go away? The “new normal” says no. If I lost a few million CC#s to SQL injection under the described scenario, my lawyer might be arguing yes.
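To make that concrete, here’s a minimal sketch, assuming Python with sqlite3 and the third-party cryptography package, and an entirely made-up schema: the card numbers are encrypted at rest, but the exploited application decrypts whatever the injected query returns.

```python
# SQL injection against a DB that is "encrypted at rest": the app
# holds the key and decrypts the result set, so the attacker gets
# plaintext anyway. Schema and data are hypothetical.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cards (customer TEXT, card_enc BLOB)")
db.execute("INSERT INTO cards VALUES (?, ?)",
           ("alice", f.encrypt(b"4111 1111 1111 1111")))
db.execute("INSERT INTO cards VALUES (?, ?)",
           ("bob", f.encrypt(b"5500 0000 0000 0004")))

def lookup(customer):
    # Vulnerable: user input is concatenated straight into the SQL.
    rows = db.execute(
        "SELECT customer, card_enc FROM cards "
        "WHERE customer = '" + customer + "'").fetchall()
    # The app helpfully decrypts the result set before returning it.
    return [(name, f.decrypt(blob).decode()) for name, blob in rows]

print(lookup("alice"))              # one decrypted card, as designed
print(lookup("alice' OR '1'='1"))   # every card, decrypted for the attacker
```

Whether that counts as a breach of “encrypted” data is exactly the kind of question the courts haven’t settled.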
Interesting times.