Category: Patching

Microsoft pre-warning of patches

[Microsoft] will publish a general summary
of planned security bulletin releases three business days before each
regularly scheduled monthly bulletin release…

The advance notifications will include the number of bulletins that
might be released, the anticipated severity ratings, and the products
that might be affected.

This has been available to select customers for a while. It's good to see it expanded; those customers have found the notifications to be quite useful.

My initial thought was that this was bad: that clever hackers would find the issues and write exploits for them sooner. While this may be the case, I expect that the clever hackers are getting the early notices now, and the notices aren't helping them.

We know that the software we use is full of bugs. All of it. The only surprising part of the claim “There are critical vulnerabilities in (Internet Explorer, Safari or Firefox) that will be fixed next Tuesday” is that a bug has been discovered that will be fixed on Tuesday. To put it another way, I can tell you, without any fear of being wrong: There is a critical bug in Internet Explorer. That doesn’t help you find it.

Now, there’s a broader risk, which is that Microsoft’s delaying patches for a while allows problems to be exploited while they test their patches. We haven’t yet seen evidence of that, nor has anyone (as far as I know) really looked for such evidence. One way to do so would be to take advantage of cheap storage, and record a few GB of traffic from a network. Run Snort over the captures. Run Snort again six months later, with updated rules. See if attacks not known at the time of the capture now have signatures that fire. See how prevalent such things are. There are doubtless other ways, like a nice big bug bounty for exploit code for a bug before the patch has come out. We lack good data, and that frustrates me.
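The core of the experiment above is a diff between two alert runs: replay the same capture through Snort at record time and again months later with updated rules, and any signature that fires only in the second run marks an attack that was in the traffic before anyone had a rule for it. A minimal sketch of that comparison step, with hypothetical signature names standing in for real parsed Snort alert output:

```python
def newly_signatured(alerts_at_capture, alerts_later):
    """Return signatures that fired only on the later run.

    Both arguments are collections of alert signature names,
    as would be parsed from Snort's alert logs after replaying
    the same pcap through each era's rule set.
    """
    return set(alerts_later) - set(alerts_at_capture)


# Hypothetical alert sets from the two runs over one capture.
at_capture = {"WEB-IIS cmd.exe access", "SCAN nmap TCP"}
six_months_later = {"WEB-IIS cmd.exe access", "SCAN nmap TCP",
                    "EXPLOIT overflow attempt (new rule)"}

# Attacks present in the traffic that had no signature when recorded:
print(sorted(newly_signatured(at_capture, six_months_later)))
```

The signature names and rule sets here are invented for illustration; the interesting work in practice is the capture, storage, and replay, not this set difference.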

(From Microsoft, via Susan Bradley posting to the Patchmanagement list.)

OMB, TSA asking for it.

Ed Hasbrouck points out that

Public comments are open through Monday, 25 October 2004, on the Secure Flight airline passenger identification, selection, and surveillance system proposed by the USA Transportation Security Administration (TSA) and its Office of National Risk Assessment (ONRA).

My draft comments are here, and I’d love feedback before sending them.

[Update: Fixed link to comments.]

Patches & EULAs

Security patches should not have licenses. There’s no fair re-negotiation under threat. If I bought your software and am using it, and you then find a bug, you should not be allowed to put new terms on the software as the price of my being safe using it.

Imagine a hotel which lost a master key to a known criminal, and then sent the manager door to door, asking for supplemental money to get a new lock.

I can’t imagine that such contract terms are legal, so why do vendors bother with them?

At least Microsoft, after marketing to you about the security bugs they fix, is listing the files that a patch changes.

Bush, Socrates, and Information Security

“Wherein links between a number of disparate ideas are put forth for the amusement of our readers”

Orcinus talks about one of Bush’s answers to a question in last night’s debate.* (I thought Bush did surprisingly well, but think that Kerry still came out slightly ahead. Both, depressingly, still want to spend my money on their own pet projects, and fail to offer bold responses to the challenges we face.)

The questioner — seemingly a middle-class homemaker — simply wanted to know if Bush could admit to having made mistakes. After all, most of us ordinary humans make them too, but we also tend to be acutely aware of them. That Bush was incapable of giving her a straight answer was incredibly revealing.

Socrates used to go around in search of a wise man, questioning everyone he met. Bush’s answer (read the whole answer at Orcinus) was “historians will look back and say.” That’s not the answer of a man who looks back and evaluates what he’s done. Looking back and evaluating your choices is a key part of making better decisions in the future. The ability and willingness to doubt and question as you’re making a decision is a good one. You need to know when to stop and make a decision, but you also need to know how and when to analyze.

On the other hand, I’ve gone through media training, and that’s one of those questions that nearly requires either a dodge or a facile answer. Clinton might have been able to word-smith his way through it.

Information security has a number of long-standing camps. One is the mathematicians, who want to prove theorems about systems and thus state their security. Another is the empiricists, who try to set up experiments which can invalidate a system’s security claims. It should come as no shock that I think the work of the empiricists is more useful. Cryptography is sometimes an exception, where it would be nice to have some proofs, but we can’t even settle P versus NP, so it’s a ways away.

I don’t think that the math camp has stepped back enough to self-analyze. The empiricist camp does so regularly. I’ll use as examples two papers by Eric Rescorla: “Is Finding Security Holes a Good Idea?” and “Time to Patch, Revisited.” The latter is an examination of work (not yet online) that I did in collaboration with the team at Immunix, including Crispin Cowan and Steve Beattie. Eric points out that we needed more data to arrive at the conclusions we did, which is fair enough. (The main point of the paper, which is that patch management is a risk management game, stands, and I stand by it.) The Finding Holes paper questions one of the underlying claims of the full disclosure camp: that finding and fixing holes will eventually result in more secure software.

*UPDATE: I wrote this mostly on Saturday, but was searching for links to Rescorla’s papers.
Update 2: Rescorla kindly put his TTP work online, now linked above.