Shostack + Friends Blog Archive


Following up "Liability for Bugs"

Chris just wrote a long article on “Liability for bugs is part of the solution.” It starts “Recently, Howard Schmidt suggested that coders be held personally liable for damage caused by bugs in code they write.”

Chris talks about market failures, but I’d like to take a different direction and talk about organizational failures. Security flaws in a product’s code come from defects in design and implementation, and are allowed to ship because they are not caught by the testing process (or because it’s too late to fix them). There are also operational flaws, made worse if the product doesn’t ship in a secure state, or if it lacks a security manual.

Notice how little of that has to do with ‘bad code,’ and how much has to do with security as part of the development lifecycle. Microsoft understands this. Not only have they trained all their developers (which I think is still unique in the industry), but they have trained all(?) their program managers, and executive-level training is in the works.

Dropping liability onto ‘coders’ for ‘code’ they write ignores the reality that software production is an economic process involving a great many non-coders who influence the output.

If you’re going to put liability around bad products, you need to put it onto those who can effect change in the products.

PS: I did a series last year on the value of signaling as a means to address information asymmetry in “Security Signaling,” “Signalling by Counting Low Hanging Fruit,” and “Ratty Signals.”

3 comments on "Following up "Liability for Bugs""

  • Anonymous says:

    Security flaws in products code from defects in design and implementation
    “come from” ?

  • Adam says:

    Oops! Thanks! (Corrected in the text)

  • David Brodbeck says:

    Not to mention that sometimes responsibility is unclear. For example, if a bug in a library routine causes a problem, who’s to blame? What if the library routine was passed unusual or out-of-range input, and reacted in an unexpected way? Does it depend on whether the expected range is documented? Very hazy.
    Regardless, I think what most people want isn’t liability on the part of individual coders. I think what they wish for is some way to make large software *companies* somehow responsible for the quality of their software. There aren’t many other industries where a company gets to disclaim all liability and make it stick — other than the gun industry, anyway.

Comments are closed.