Proof-of-Concept Code: Boon or Bane?
Microsoft has come out swinging against researchers who publish code:
Microsoft is concerned that the publishing of proof-of-concept code within hours of the security updates being made available has put customers at increased risk.
A common practice among responsible researchers is to wait a reasonable period of time before publishing such code. This generally accepted industry practice gives individual users and enterprise businesses time to test, download, and deploy security updates. Microsoft is disappointed computer users were not given a reasonable opportunity to safeguard their computing environments.
First off, it is accurate that you need code to execute many attacks, and that in the absence of code, customers are safer. However, Microsoft’s assertion is wrong: this practice is not “generally accepted,” and even if it were, it is not clear that it would increase security. There are a number of very important uses for proof-of-concept code. They include:
- Testing of hardening techniques. If a company uses hardening software, such as that offered by PIVX, Immunix, or Sana, then it faces the decision “Do we need to install this patch?” Someone needs to test the defense against the attack, and because that test involves running the attack, it requires code.
- Writing IDS rules. If a company uses an IDS, someone needs to write a rule so the IDS can detect the new attack, and testing such a rule requires code. Given the short cycle times in which vendors try to ship updates, many customers may wish to test their IDS themselves. Doing so, again, requires that code be available.
- Writing vulnerability scanner rules. If a company uses a non-credentialed vulnerability scanner (that is, one that looks for evidence that an attack can work, rather than for evidence that a patch is installed), then the scanner’s authors may well need access to code. In both the scanner and IDS cases, there are widely used open source products.
- Academic research. Academics who want to build and test new defensive software need access to a zoo of attacks and targets in order to test. Unlike the hardening, IDS, and scanner cases, the academic case does not justify immediate release; but since it is often overlooked, I try to bring it up.
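To make the scanner case concrete, here is a minimal sketch of the kind of check a non-credentialed scanner runs. Everything in it is invented for illustration: the `ExampleServer` banner, the version numbers, and the “fixed in 2.3” cutoff are all hypothetical. The point is that a banner check like this is only a guess about exploitability; to confirm that a host is actually vulnerable, or to tune out false positives, the rule author often needs the real attack code.

```python
import re

# Hypothetical check: flag a service whose banner reports a version
# older than the (invented) fixed release. Real scanners grab this
# banner over the network; here we just parse the string.
def banner_is_vulnerable(banner: str, fixed_version=(2, 3)) -> bool:
    """Return True if the banner's version predates the fixed release."""
    match = re.search(r"ExampleServer/(\d+)\.(\d+)", banner)
    if not match:
        # Unknown service: a banner check can conclude nothing.
        return False
    version = (int(match.group(1)), int(match.group(2)))
    return version < fixed_version

print(banner_is_vulnerable("ExampleServer/2.1 ready"))  # pre-fix version
print(banner_is_vulnerable("ExampleServer/2.3 ready"))  # patched version
```

Note what the check cannot tell you: whether a backported patch left the version string unchanged, or whether the flaw is reachable in this configuration. Answering those questions is exactly where access to the proof-of-concept matters.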
Yes, code being out there increases the number of people who will use it to attack. To the best of my knowledge, though, no one has quantified how much this happens in a defensible experiment. (For example: record all traffic on a network for a week, run your IDS over it, and record the output. Six months later, run your IDS again on the same input, and see how many attacks are newly detected.)
Previous posts include Towards an Economic Analysis of Disclosure, Database Flaws More Risky Than Discussed, and Swire on Disclosure. And thanks to “Mr. X” for suggesting I discuss this.
[Full disclosure: I’d used an XXX to indicate I had things to fix before posting, and forgot to take it off the subject. D’oh!]