Models and Accuracy (Threat Model Thursday)

For Threat Model Thursday, I want to look at models and modeling in a tremendously high-stakes space: COVID models. There are a lot of them. They disagree. Their accuracy depends on a wide variety of interventions. (For example, few disease models forecast a politicized response to the disease, or a massively inconsistent response within an area where people can travel freely.) Policymakers need to make decisions about life and death, and to do that they must assess model quality. There’s an interesting paper in Science, “Harnessing multiple models for outbreak management,” and a more accessible writeup.

I am often asked to judge threat models. People like to ask me questions that have at their heart ‘is this system model right?’ or ‘did we find the right threats?’ Sometimes they’ll ask ‘did we approach this right?’ Sadly, there is rarely a quick answer, but one of the things I’ve learned is that the answer, though not the logic, follows Betteridge’s law of headlines. The answer is always no. The reason people are asking me to judge their models is that they are uncomfortable with them, and they’d like help figuring out why.

That’s not 100% true. Sometimes they’re really proud of the model and want to show off. That’s usually accompanied by a relieved story of ‘we almost did this…’ Those are great stories. I love them. I love hearing what people emphasize as they tell them – there’s gold there in how organizations change and mature. Rarely are the models in these stories perfect, or even great. They are good enough to expose a choice, an impact, or something else, and good enough to drive change.

Photo by Chris Leipelt.