Following up "Liability for Bugs"
Chris just wrote a long article on “Liability for bugs is part of the solution.” It starts “Recently, Howard Schmidt suggested that coders be held personally liable for damage caused by bugs in code they write.”
Chris talks about market failures, but I’d like to take a different direction and talk about organizational failures. Security flaws in products come from defects in design and implementation, and are allowed to ship because they are not caught in the testing process (or because it’s too late to fix them). There are also operational flaws, made worse if the product doesn’t ship in a secure state, or if it lacks a security manual.
Notice how little of that has to do with ‘bad code,’ and how much of it has to do with security as part of the development lifecycle. Microsoft understands this. Not only have they trained all their developers (which I think is still unique in the industry), but they have trained all(?) their program managers, and executive-level training is in the works.
Dropping liability onto ‘coders’ for ‘code’ they write ignores the reality that software production is an economic process involving a great many non-coders who influence the output.
If you’re going to put liability around bad products, you need to put it onto those who can effect change in the products.
PS: I did a series last year on the value of signaling as a means to address information asymmetry in “Security Signaling,” “Signalling by Counting Low Hanging Fruit,” and “Ratty Signals.”