Threat Model Thursday: NIST’s Code Verification Standard

Earlier this week, NIST released a Recommended Minimum Standard for Vendor or Developer Verification of Code. I want to talk about the technical standard overall, the threat modeling component, and what the standard means now and in the future. To summarize: new requirements are coming to a project near you, and getting ready now is a good idea.

The standard

The standard is a Recommended Minimum Standard for Vendor or Developer Verification of Code. It was produced in response to Executive Order 14028, Improving the Nation’s Cybersecurity. It covers 11 techniques in 6 classes:

  • Threat modeling (🎉 🥂)
  • Automated testing
  • Static analysis (code scanning, hardcoded secrets)
  • Dynamic analysis (use the built-in protections, black-box testing, structural tests, regression tests, fuzzing, web app scanning)
  • Check included software
  • Fix bugs
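
Several of these can be piloted cheaply. As one illustration (my sketch, not anything prescribed by the standard), the hardcoded-secrets portion of static analysis can start as little more than a pattern scan over source text; the patterns below are hypothetical examples, and a real scanner needs far broader coverage:

```python
import re

# Hypothetical patterns for illustration; production scanners cover far more.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(password|passwd|secret|api_key|token)\s*=\s*['"][^'"]+['"]"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
]

def scan_source(text):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'db_host = "localhost"\npassword = "hunter2"\n'
print(scan_source(sample))  # only the password line should be flagged
```

Even something this naive, run in CI, gives you a place to hang the requirement while you evaluate real tooling.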

This is a really good list, and I want to emphasize that. I like several of the framings, especially “check included software” being more than “run a software composition analysis tool to check for CVEs.” I would also have liked to see explicit mention that gcc’s -Wall option still does not, in fact, turn on all warnings.
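
In that spirit, “check included software” can begin with something broader than a CVE lookup: verifying that what you ship is the software you actually reviewed. Here is a minimal sketch, assuming a simple manifest of expected file hashes (the manifest format and paths are my invention, not the standard’s):

```python
# Sketch of "check included software" as more than a CVE lookup: confirm that
# shipped components match the versions that were reviewed. The manifest
# format here is a hypothetical illustration.
import hashlib

def file_sha256(path):
    """Hash a file on disk; swap in any digest your build system records."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_manifest(manifest, hash_for=file_sha256):
    """manifest: {path: expected_sha256}. Returns paths whose hash differs."""
    return [p for p, expected in manifest.items() if hash_for(p) != expected]

# Demo with an injected hash function, so no real files are needed:
demo = {"vendor/lib.py": "0" * 64}
print(check_manifest(demo, hash_for=lambda p: "0" * 64))  # [] means all match
```

The `hash_for` parameter is there so the check is testable without touching disk; in practice you would feed it your lockfile or SBOM.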

Nominally, this is a standard about software verification, and it also considers the case of vendors who are not the original developers verifying fitness for purpose (FAQ #3).

Threat modeling within the standard

I am glad to see threat modeling included in the standard. The task NIST was given was to craft a testing standard, and threat modeling is an unusual thing to include there. They do address that, and I want to expand on what they’ve said:

Threat modeling should be done multiple times during development, especially when developing new capabilities, to capture new threats and improve modeling.

As we discussed in the Threat Modeling Manifesto, there are many ways to get value. Threat modeling can be a great test-planning technique, and even if that’s all you’re using it for, there’s real value in ensuring you consider what you’re working on as a whole. It’s also useful in verifying fitness for purpose as a developer selects and commits to software developed elsewhere. (Ideally, developers will start to provide such threat models, or consumers will start to share their work. I look forward to either and both.)

However, I do disagree that improving modeling should be a goal in and of itself. Modeling has to be a task in service of a goal, and good enough, thoughtfully considered, is good enough.

The future

Currently, this document exists in an odd state. It is titled “Recommended Minimum Standards,” but it is not a standard. Question 4 of the FAQ clarifies: NIST sets the standards, other parts of the government set procurement requirements. So if you sell to the Federal government, expect to see these requirements in your procurement questions soon, and that will trickle across the market.

These standards are also of interest to anyone who writes words like “We take industry standard steps to protect your security,” say, in a privacy policy. Much like the FTC’s Start with Security, if you’re ignoring these steps, it may well come back to haunt you. All of these techniques can be implemented easily, at least for a start. How deep you need to go for each depends on the unique circumstances of your business. For threat modeling, the Manifesto and my World’s Fastest Threat Modeling Videos series are both good places to start.

The technical work involved in each of these can be pretty small. However, change is always hard at scale, your developers are busy, and figuring out what tools to use, what your requirements are, how you’re going to track those requirements, et cetera, will all take time and energy. If you start now, you’ll minimize disruption and have an easier time of it.
