Shostack + Friends Blog


Maintaining & Updating Software


In the aftermath of Wannacry, there's a lot of discussion of organizations not updating their systems. There are two main reasons organizations don't update the operating systems they run: compatibility and training. Training is the simpler of the two: you have to train people about the changes to the Start Menu to move them to Windows 8, and that's expensive. (I sometimes worked with sales people when I was at Microsoft, and Microsoft could have managed this much better than it has.)

Compatibility is harder. In his excellent blog post on "Who Pays?," Steve Bellovin discusses how "achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release." There are substantial changes to the ways memory is managed and laid out between versions of Windows, including ASLR, DEP, CFG, etc. There are many such changes, and seeing how they impact real programs is hard. That's part of the reason Microsoft released the Enhanced Mitigation Experience Toolkit (EMET).
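
To make the compatibility cost concrete, here's a minimal C sketch (my illustration, not from the post or from Bellovin) of a once-common pattern those mitigations break: copying machine code into a data buffer and jumping to it, as JIT compilers, self-extracting executables, and some copy-protection schemes did.

```c
#include <stdio.h>
#include <string.h>

/* x86-64 machine code for: mov eax, 42; ret */
static const unsigned char payload[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

int main(void) {
    unsigned char buf[sizeof payload];
    memcpy(buf, payload, sizeof payload);

    /* On a pre-DEP system the stack is executable, so this call returns 42.
     * Under DEP/NX the page holding buf is non-executable, and the same
     * jump faults. (The data-to-function-pointer cast is itself outside
     * strict ISO C; old code did it anyway.) Fixing such a program means
     * redesigning it, e.g. allocating pages with explicit execute
     * permission, not patching it -- the "new release, not a patch" cost
     * Bellovin describes. */
    int (*fn)(void) = (int (*)(void))(void *)buf;
    printf("%d\n", fn());
    return 0;
}
```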

This doesn't just apply to platforms; it also applies to libraries. (For example, see Jon Callas, "Apple and OpenSSL.")
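
As a concrete illustration of the library case (my example; the post names no specific API), consider how OpenSSL 1.1.0 made structs like X509 opaque: 1.0.x-era code that reached into them stopped compiling, and every caller had to be found, changed, and re-tested. The function name cert_expiry below is mine.

```c
#include <openssl/x509.h>

/* Sketch of a library compatibility break: reading a certificate's
 * expiry across the OpenSSL 1.0.x -> 1.1.0 API change. */
const ASN1_TIME *cert_expiry(X509 *cert) {
#if OPENSSL_VERSION_NUMBER < 0x10100000L
    /* 1.0.x: X509_get_notAfter was a macro that dereferenced struct
     * fields directly; equivalent direct access appeared in real code. */
    return X509_get_notAfter(cert);
#else
    /* 1.1.0+: the struct is opaque; only the accessor API works. */
    return X509_get0_notAfter(cert);
#endif
}
```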

Even when compatibility is generally very high, someone needs to test the code to see if it works, and that costs money. It costs a lot more money if you don't have test code or test documentation (YAGNI!), or if, umm, your test code has dependencies on libraries that don't work on the new platform... It is unlikely that re-certifying on a new platform is less than weeks of work, and for larger products it could easily extend to person-years of work, all to maintain software that's already been sold. The costs are non-trivial, which brings me back to Steve Bellovin's post:

There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we'll pay, as a society, for security failures.

This is a fair summary, and I want to add two points.

First, while it remains fashionable to bash Microsoft for all the world's security woes, there is a far bigger problem: open source, which is usually released without any maintenance plan. (My friends at the Core Infrastructure Initiative are working on this problem.) Imposing maintenance obligations on open source raises several problems:

  • Code is speech. The United States rarely imposes liability on people for speaking, and it seems downright perverse to impose more of it on those who let others use their words.
  • There may not be an organization behind the code, or the author may have explicitly disclaimed responsibility. If there is an organization, and we as a society suddenly impose unexpected costs on it, that might inhibit future funding of open source. (As an example, the author of Postfix was paid by IBM for a while. Does IBM have responsibility for Postfix, now that he's left and gone to Google?) How does the "releasing code" calculus change if you're required to maintain it forever?
  • The Open Source Definition prohibits discrimination against fields of endeavor, and requires that licenses be technology-neutral. So it seems hard to release an open source library while forbidding its use in long-lived consumer goods.
  • What if Bob makes a change to Alice's code, and introduces a security bug in a subtle way? What if Alice didn't document that the code was managing a security issue? Does she need to fix it?

Second, the costs to society will not be evenly distributed: they're going to fall on sectors with less software acumen, and places where products are repaired more than they're replaced, which tend to be the poorer places and countries.

[Update: Ross Anderson blogs about a new paper that he wrote with Éireann Leverett and Richard Clayton. The paper focuses more on the regulatory challenge that maintaining and updating software provokes than on the economics.]

Photo by Pawel Kadysz.