Towards an Economic Analysis of Disclosure
In comments on my post yesterday, “I Am So A Dinosaur“, Ian asks “Has anyone modelled in economics terms why disclosure is better than the alternate(s) ?” I believe that the answer is no, and so will give it a whack. The costs I see associated with a vulnerability discovery and disclosure, in chronological order, are:
- The cost borne by a researcher who finds a vulnerability. This may be the time of a student, or it could be a fiscal cost borne by a company like NGS or eEye. Laws such as the DMCA drive these costs up greatly. There is a subset of this cost, which is that good disclosure costs the reporter more. Good disclosure here includes testing on a variety of platforms, figuring out workarounds, and documenting the attack thoroughly.
- The set of costs incurred by the software maintainers. Every time a vulnerability is discovered, someone needs to evaluate it, and decide if it is worth the expense of writing, testing, and distributing a patch.
- Costs distributed amongst a great many users include learning about the vulnerability, and perhaps the associated patch; deciding if it matters to them; testing it; and rating the urgency of the patch versus the business risks associated with a change to an operational system. [If the users don’t make that investment, or make poor decisions, there’s the cost of being broken into, and of recovering from the problem. (Thanks, Eric!)] Highly skilled end users may want to test a vulnerability in their environment. Full disclosure helps this testing. Good disclosure from the researcher also helps hold down costs here. Since there are lots of users of most such software, savings multiply greatly.
- Costs distributed amongst a smaller group of security software authors include understanding the vulnerability, building or getting exploit code, and adding functionality to their products to “handle” the vulnerability, either scanning for it, or detecting the attack signature. Where these vendors have to write their own exploit code, they will be slower to get their tool to customers. These costs are sometimes lower for vendors of closed source toolsets who can encode the information in a binary, and thus get it under NDA.
- Costs to one or more attackers to learn about the vulnerability; decide if they want to use it in an attack; code or improve the code for the attack; deploy the attack.
- Costs to academic researchers are separated here because academics are less time-sensitive than security vendors. I can invent and test a tool to block buffer overflows with a 30-day-old exploit as well as with a completely fresh one. Academic researchers need high-quality exploit code, but they don’t need it quickly.
I think that all responsible disclosure policies attempt to balance these costs. Some attackers don’t disclose at all; they invest in finding and using exploits, and hope that they have a long shelf life. (I started to say malicious attackers, but both government researchers and criminals fail to disclose.)
Ideally, we’d drive up attacker costs while holding down all of end-user, security vendor, and academic costs. (One of my issues with the OIS guidelines is that they give too little to the academic world. They could easily have said ‘responsible disclosure ends 90 days after a patch release with the release of exploit code and test cases.’)
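To make that objective concrete, here is a purely illustrative sketch of how one might compare disclosure policies against these cost categories. The policy names, cost figures, and scoring function are all made-up assumptions for illustration, not data from any study:

```python
# Illustrative only: arbitrary cost figures (in made-up units) for the
# stakeholder groups listed above, under two hypothetical disclosure
# policies. The numbers and the scoring rule are assumptions.
policies = {
    "full_disclosure_day_0": {
        "researcher": 5, "vendor": 20, "users": 40,
        "security_vendors": 5, "academics": 2, "attackers": 10,
    },
    "patch_then_exploit_day_90": {
        "researcher": 8, "vendor": 20, "users": 25,
        "security_vendors": 8, "academics": 4, "attackers": 30,
    },
}

def policy_score(costs):
    """Higher is better: attacker cost counts in the policy's favor,
    while every other group's cost counts against it."""
    defender_cost = sum(v for k, v in costs.items() if k != "attackers")
    return costs["attackers"] - defender_cost

for name, costs in policies.items():
    print(name, policy_score(costs))
```

Under these invented numbers, the delayed-exploit policy scores better because it raises attacker costs more than it raises everyone else’s; the point of the sketch is only that the trade-off can be written down and argued over, not that these weights are right.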
So, Ian, I hope you’re happy–you’ve distracted me from the stock market question.
[Update: Reader Chris Walsh points to a paper, “Economic Analysis of the Market for Software Vulnerability Disclosure,” which takes these ideas and does the next step of economic analysis, as well as a presentation that some of the authors gave at Econ & Infosec.]