The NVD Crisis

The NVD is in crisis, and so is patch management. It’s time to modernize.

The National Vulnerability Database (the NVD) appears to be in some sort of hiatus, no longer assigning CVSS information to CVEs. They’ve posted a note:

NIST is currently working to establish a consortium to address challenges in the NVD program and develop improved tools and methods. You will temporarily see delays in analysis efforts during this transition. We apologize for the inconvenience and ask for your patience as we work to improve the NVD program.
If you want to understand what’s happening, Hackread says Josh Bressers first drew attention to it, and Josh has a podcast episode on the topic. Me, I wonder if this has to do with the 12% budget reductions at NIST. Beyond the why, many people are quite concerned, because they’ve been using CVSS scores to reduce the amount of patching work they do, generally under a label like “risk management.” (I prefer to think of it as workload management when you’re letting someone else make “risk” decisions for you. And that’s fine. We do this kind of outsourcing in all parts of life, work and personal.)
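To make that workload-management point concrete, here is a minimal sketch (mine, not anything from NVD or from any particular vendor) of how that outsourcing typically looks in practice: filter incoming CVEs against a CVSS threshold and only schedule work for the ones that clear it. The Cve class, field names, and threshold are all hypothetical; note how unanalyzed CVEs simply vanish from the queue, which is exactly why a pause in NVD scoring hurts.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cve:
    cve_id: str
    cvss_score: Optional[float]  # None models an unanalyzed CVE (no NVD score yet)

def patch_queue(cves: List[Cve], threshold: float = 7.0) -> List[Cve]:
    """Keep only the CVEs whose CVSS score meets the threshold.

    When scores stop arriving (cvss_score is None), this filter silently
    drops work, which is why an NVD hiatus breaks the approach.
    """
    return [c for c in cves if c.cvss_score is not None and c.cvss_score >= threshold]

if __name__ == "__main__":
    incoming = [
        Cve("CVE-2024-0001", 9.8),
        Cve("CVE-2024-0002", 4.3),
        Cve("CVE-2024-0003", None),  # not yet analyzed: no score assigned
    ]
    for c in patch_queue(incoming):
        print(c.cve_id)  # only CVE-2024-0001 clears the bar
```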

The patching pipeline

That said, the reliance on NVD is magnified because the overall patch processing pipeline is expensive, and NVD provided free data that let us ignore potential dangers and justify those choices. There are many reasons that patching is expensive. Over twenty years ago, Steve Beattie, Crispin Cowan, and several more of us published Timing the Application of Security Patches for Optimal Uptime, where we compared the risk of destabilization against the risk of compromise; the data showed that waiting about a week to ten days was optimal at the time. (Very recently, Crispin mentioned to me that trackd.com/ seems to be commercializing the idea.) Understanding the chances that a patch will destabilize something is one way to minimize the expected cost of patching. And you want to do that because deploying each patch takes a lot of work.
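To illustrate the balancing act, here is a toy model (my sketch, not the paper’s actual analysis) with two terms: the chance the patch itself is still broken when you apply it, which falls as you wait, and the chance of compromise while unpatched, which rises as you wait. Every number below is invented for illustration; with these particular values the minimum happens to land around eight days, in the same ballpark as the paper’s week-to-ten-days finding.

```python
import math

def expected_cost(delay_days: float,
                  p_bad_patch: float = 0.10,         # chance the patch is broken at release
                  recall_timescale: float = 7.0,     # days over which bad patches get recalled/fixed
                  daily_compromise: float = 0.0045,  # daily chance of exploitation while unpatched
                  cost_destabilize: float = 200_000,
                  cost_compromise: float = 200_000) -> float:
    # Chance the patch is still broken (not yet recalled or re-issued) when you apply it.
    p_still_bad = p_bad_patch * math.exp(-delay_days / recall_timescale)
    # Chance of being compromised at least once during the delay window.
    p_compromised = 1 - (1 - daily_compromise) ** delay_days
    return p_still_bad * cost_destabilize + p_compromised * cost_compromise

if __name__ == "__main__":
    best = min(range(31), key=expected_cost)
    print(f"lowest expected cost at a delay of {best} days")  # ~8 with these made-up numbers
```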

Another approach is to reduce the cost of detecting and responding to that destabilization. It turns out that a combination of cloud and SRE practice has revolutionized this as a side effect of mantras like “code that builds code,” “crops not pets,” and “serverless,” amongst other patterns that decouple code from the machines it runs on. This world is in stark contrast to when we wrote our paper, when “bare metal” servers were the norm and upstart practices like virtualization on VMware were just emerging.

Let’s say you have code that builds code. Let’s say it can handle a “new patch” event, build a new version of a system that incorporates that patch, smoke test it and roll it out. Let’s say the code will notice and respond to functional degradation, all without human intervention. That world exists for many cloud-native companies. In that world, the work to decide if that patch is needed is deferred to after you observe degradation, or outsourced to the folks who provide you with ‘serverless’ capacity.
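Here is a minimal sketch of that loop, purely illustrative: every helper below is a hypothetical stand-in for your real build, deploy, and monitoring tooling, and a real pipeline would wire healthy() to error rates or SLO burn rather than returning True.

```python
import time

def build_image(patch_id: str) -> str:
    # Stand-in for your image/package build (e.g., a container build that
    # picks up the patched base layer). Returns an artifact identifier.
    return f"service:{patch_id}"

def smoke_test(image: str) -> bool:
    # Stand-in for a short functional check against a throwaway instance.
    print(f"smoke testing {image}")
    return True

def deploy(image: str, fraction: float) -> None:
    # Stand-in for shifting `fraction` of traffic/instances to the new image.
    print(f"deploying {image} to {fraction:.0%} of the fleet")

def rollback() -> None:
    # Stand-in for reverting to the previous known-good artifact.
    print("rolling back to previous version")

def healthy(window_seconds: int = 1) -> bool:
    # Stand-in for your real signal: error rates, SLO burn, alert volume.
    time.sleep(window_seconds)
    return True

def on_new_patch(patch_id: str) -> None:
    """Handle a 'new patch' event end to end, with no human in the loop."""
    image = build_image(patch_id)
    if not smoke_test(image):
        print(f"{patch_id}: failed smoke test, not deployed")
        return
    for fraction in (0.05, 0.25, 1.0):   # canary, partial, then full rollout
        deploy(image, fraction)
        if not healthy():
            rollback()
            print(f"{patch_id}: degradation at {fraction:.0%}, rolled back")
            return
    print(f"{patch_id}: fully rolled out")

if __name__ == "__main__":
    on_new_patch("CVE-2024-0001")
```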

A path forward

The NVD crisis is unexpected, and community efforts to replace it are welcome and important. But if, today, you’re experiencing a crisis in your patch management program, you need a fix. Waiting on NIST or the community for band-aids will frustrate management.

The fix is (ahem) a crash program to get to 95% of your servers being built and deployed automatically. It will be difficult. There are presumably reasons you haven’t gotten there yet. And while the other side is not a magical land of perfect software, it’s way, way better than being in a place where you’re ignoring most patches because patching is too hard.

[Update: The folks at VulnCheck have a program to add data to CVEs in their NVD enrichment products, and Francesco Cipollone has a roundup of alternatives. I encourage you to see all of these as stopgaps.]

Image by Midjourney: a photograph with a foreground and a background. in the foreground is a set of orphans, frantically working in a Victorian factory, patching, oiling, repairing and fixing complex machinery. The background includes an overseers office, on a second floor, overseeing the factory machinery. In the overseers office a profusion of risk management charts and many people are crammed into the office, arguing. Morning sun fills the scene. the machinery is brass, complicated and full of gears and steam. hyperrealistic. 4k