# Threat Modeling is Measure Twice, Cut Once

Threat Modeling is the software version of "measure twice, cut once."
Anyone who’s taken a shop class knows that spending time to measure carefully saves you time and material. The mistakes you prevent mean you don’t spend energy figuring out how to use a bit of wood that’s too short, or that’s been cut into a shape that turns it into scrap.
It’s easy to think that software’s different. After all, there are no raw materials, so what could be wasted? There are several answers. The direct ones include:
- Developer time on the feature
- Developer time on dependencies
- Computer cycles (especially in AI and cloud systems, these get pricey)
Staff time is expensive. Time spent on a feature that’s getting reworked is less efficient. When the rework flows into dependencies, the time and energy grow. (“There’s never time to do it right, there’s always time to do it over.”) There are also indirect costs, including:
- Cost of communication
- Costs of uncertainty
- Reward for working efficiently
Communication costs are the effort of explaining what’s happening and what should be done now. The costs of uncertainty are both higher and less obvious. When there’s a clear plan and people’s roles are clear, they tend to work harder: “Let’s get this done.” When they aren’t: “Let’s wait. The folks at Acme always revise the spec a couple of times.”
People don’t like to admit to slowing down like that, but we learn to do it. (There’s a reason that email clients have a sending delay feature.)
Threat modeling can give us a space to measure, discuss and then develop features.
Another way to think of this is the maxim from Fred Brooks, “Plan to throw one away, you will anyway.” The one we throw away can be the whiteboard model, or it can be fully written (but insecure) software.
One of the inflection points here is learning. Do you need to build it to get feedback? Agile methodologies bring value by developing in small units, so you can make it work and then adjust. There are times when that’s crucial. Other times, the code demands work you don’t learn much from, say, error handling. You need to do it, and you often need to do it to get a prototype that people can use. But if you throw that code away, you gain nothing from the effort you spent handling weird DNS edge cases. (Or whatever it was.)
When rework is cheap, it makes little sense to do this. (There are plenty of articles out there on why developers shouldn't measure twice, seemingly based on the idea that creating software is inexpensive and easy. If that’s the case for you, cool. You do you.)
But if you find yourself closer to typical ratios of 30% new code, 70% maintenance and refactoring, then maybe some “measure twice, cut once” makes sense for you.
Photo credit: Patricia Serna, Unsplash.