Proof Of Concept Code, Boon or Bane

Microsoft has come out swinging against researchers who publish code:

Microsoft is concerned that the publishing of proof-of-concept code within hours of the security updates being made available has put customers at increased risk.

A common practice among responsible researchers is to wait a reasonable period of time before publishing such code. This generally accepted industry practice gives individual users and enterprise businesses time to test, download, and deploy security updates. Microsoft is disappointed computer users were not given a reasonable opportunity to safeguard their computing environments.

First off, it is accurate to say that you need code to execute many attacks. In the absence of code, customers are safer. However, Microsoft’s assertions are incorrect: this practice is not “generally accepted,” and even if it were, it is not clear that following it would increase security. There are a number of very important uses for proof-of-concept code. They include:

  • Testing of hardening techniques: If a company uses hardening software, such as that offered by PIVX, Immunix, or Sana, then it faces the decision of “Do we need to install this patch?” Someone needs to test the defense against the attack, and doing so requires running the attack, which requires code.
  • Writing IDS rules: If a company uses an IDS, someone needs to write a rule so the IDS can detect the new attack, and testing such a rule requires code. Given the short cycle times in which vendors try to ship updates, many customers may wish to test their IDS themselves. Doing so, again, requires the availability of code.
  • Writing vulnerability scanner rules: If a company uses a non-credentialed vulnerability scanner, that is, one that looks for evidence that an attack can work rather than evidence that a patch is installed, then the scanner authors may well need access to code. In both the scanner and IDS cases, there are widely used open source products.
  • Academic research: Academics who want to create and test new defensive software need access to a zoo of attacks and targets for their experiments. Unlike the hardening, IDS, and scanner cases, the academic case does not justify immediate release; however, since it is often overlooked, I try to bring it up.
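To make the scanner case concrete, here is a minimal sketch of what a non-credentialed check might look like. Everything here is hypothetical for illustration: the service name “ExampleD,” the banner format, and the version numbers are made up, and a banner-based version check is only the weakest form of such a check; a check that actually verifies “the attack can work,” as described above, needs real attack code, which is exactly the point.

```python
# Hypothetical non-credentialed scanner check: grab a service banner
# and flag versions assumed (for this sketch) to be vulnerable.
# Service name, banner format, and version numbers are invented.
import socket

VULNERABLE_VERSIONS = {"1.0", "1.1"}  # assumed-vulnerable versions


def parse_banner(banner: str) -> str:
    """Pull a version string out of a banner like 'ExampleD/1.1 ready'."""
    for token in banner.split():
        if "/" in token:
            return token.split("/", 1)[1]
    return ""


def is_vulnerable(banner: str) -> bool:
    """True if the banner advertises a version on the vulnerable list."""
    return parse_banner(banner) in VULNERABLE_VERSIONS


def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect and read whatever greeting the service sends."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(256).decode(errors="replace")
```

Note that this sketch never exercises the flaw; a patched server that still reports an old version string would be a false positive, which is why scanner authors want real proof-of-concept code to build and validate their checks against.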

Yes, code being out there increases the number of people who will use it to attack. To the best of my knowledge, no one has quantified how much this happens in a defensible experiment. (For example: record all traffic on a network for a week, run your IDS over it, and record the output. Six months later, run your IDS again on the same input and see how many attacks are newly detected.)
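The core of the experiment sketched above reduces to a simple comparison: the same traffic, two IDS runs separated in time, and the set difference between their alerts. A minimal sketch, with hypothetical alert identifiers standing in for real IDS output:

```python
# Sketch of the proposed measurement: run the same IDS over the same
# recorded traffic at two points in time, then count attacks the later
# run detects that the earlier run missed (i.e., attacks that were in
# the traffic before signatures existed). Alert IDs are hypothetical.


def newly_detected(alerts_then: set, alerts_now: set) -> set:
    """Attacks flagged on the second run but missed on the first."""
    return alerts_now - alerts_then


# Hypothetical results from two runs over the same week of traffic:
run_january = {"CVE-A", "CVE-B"}
run_july = {"CVE-A", "CVE-B", "CVE-C", "CVE-D"}

# newly_detected(run_january, run_july) -> {"CVE-C", "CVE-D"}
```

The newly detected alerts are a rough lower bound on attacks that were circulating before defenses caught up; a real study would also have to control for signature quality and false positives, which this toy comparison ignores.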

Previous posts include Towards an Economic Analysis of Disclosure, Database Flaws More Risky Than Discussed, and Swire on Disclosure. And thanks to “Mr. X” for suggesting I discuss this.

[Full disclosure: I’d used an XXX to indicate I had things to fix before posting, and forgot to take it off the subject. D’oh!]

3 Replies to “Proof Of Concept Code, Boon or Bane”

  1. Full disclosure: for and against

    How to address Internet security in an open source world is a simmering topic. Frank Hecker has documented his view of the Mozilla Full Disclosure debate that led to their current security policy. I haven’t read it yet, but will….

  2. Comment on the Swire paper … not having read the paper as yet, but that’s a fascinating idea to apply the EMH (efficient markets hypothesis) to security. I would agree that the open source community takes the view that all information about the technology is public. That is, the source code.
    But this doesn’t extend to the exploit. That’s more akin to insider information. Now, in EMH, the question is, how does the insider information leak and then become public? As it happens, there are similarities between insider leakage and exploit leakage.
    But there is one crucial difference: once insider information leaks, it quickly gets factored in as public information, within days or hours. Exploit information, while it spreads quickly, does not get factored in quickly. The vulnerability then has a very long tail whereby we wait for all the users out there to patch. There is no such effect in EMH, so I’d be careful in employing the lessons from there.
    Still, a great example of cross-discipline ideas.

  3. You make valid points as always, but I think that, as security practitioners, we tend to neglect the base rate when making these kinds of value judgments. Certainly, there are a lot of people (and vendors!) with IDS systems to test, vulnerability scanners to update, and so on. But the huge majority of businesses do not benefit from these activities, either because they simply don’t own those security technologies, or because they end up feeling the effects of the worm (or attacker) which uses the exploit before they have a chance to patch their systems.
    Also, I may be misreading your post (and being pedantic to boot!), but MSFT have not “come out swinging” against researchers who “publish code”. Rather, they have suggested that researchers who publish exploit code on the same day that the patches are released are not doing most businesses a favor.
    I think a factor here is that there seems to be a certain “macho factor” associated with having your code be the most well-known spl0it for a particular vulnerability. (route did very well off teardrop.c, if I remember correctly – just as one example). To accomplish this though, you have to publish your code before other people (even if you originally found the vulnerability), and it seems like the minimum delta which lets you retain the veneer of professionalism these days is “the day the patch is released + zero”… hardly altruistic.