Threat modeling as a dial, not a switch
Thinking of threat modeling as a dial helps you get more out of it.
Lately, a lot of people have been asking me what “triggers” threat modeling. The question confused me: you think about threats as part of any design decision! There are lots and lots of design decisions, ranging from tiny to enormous. For each, we ought to ask: what are the pros and cons? That includes what can go wrong, and more: is it scalable? Performant? Maintainable? Secure? Private? How deep we go depends on the details of the feature.
Security departments will sometimes encode the questions of what can go wrong into questionnaires. Some of these are short, in the range of 3-5 questions. Others are ... longer. Sometimes much longer. These are designed to protect development teams from heavyweight threat modeling processes, often ones that involve consultation with a security team and can take weeks or longer. (This dynamic seems to inform this story about Facebook using AI for privacy risks.)
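To make that tradeoff concrete, here’s a minimal sketch of what such a questionnaire might look like encoded as code. The questions and the any-“yes”-triggers-review rule are hypothetical illustrations, not a recommended set:

```python
# A minimal sketch of a review-trigger questionnaire encoded as data.
# The questions and the trigger rule are hypothetical illustrations.

QUESTIONS = [
    "Does the feature handle personal or regulated data?",
    "Does it expose a new network-facing interface?",
    "Does it change authentication or authorization logic?",
    "Does it introduce a new third-party dependency?",
]

def needs_security_review(answers: list[bool]) -> bool:
    """Trigger a deeper review if any answer is 'yes'.

    Short questionnaires trade completeness for speed: anything
    the questions don't ask about will never trigger a review.
    """
    return any(answers)

if __name__ == "__main__":
    # A feature that touches auth but nothing else still triggers review.
    print(needs_security_review([False, False, True, False]))  # True
```

Note what the sketch makes obvious: the only review triggers that exist are the ones someone thought to write down.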
There was a great talk at Blackhat '23 by Mrityunjay Gautam and Pavan Kolachoor, AI Assisted Decision Making of Security Review Needs for New Features, in which their LLM found lots of features that should have been reviewed, but were not. I’m not in a position to comment on what went on with those engineers whose features didn’t get tagged for review, and I’ll assume best intentions: they really didn’t feel their feature needed a review. Maybe the questions asked didn’t touch on the issues that triggered the LLM. Questionnaire design is always a tradeoff of length versus completeness, and so review triggers get left off. (This ... prompts thinking about LLM-driven review, but that’s a separate post.) Or maybe they felt the feature had some danger, but wasn’t worth a “full review.”
But all of this rests on a bad metaphor: the idea that threat modeling is a switch, a thing with a single definition or procedure. It’s much better to think of threat modeling as a volume dial: you should regularly adjust it to fit current needs.
This is easier when you have good separation of policy and procedure, and people who are able to make good decisions about the dial. If your policy requires a STRIDE-per-element approach, your threat modeling will be slower than if you require asking “what can go wrong” (in an appropriate way). Most companies understand that some software is more critical, and as criticality rises, awareness and concern over “what can go wrong” increase, as does quality assurance.
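Here’s a sketch of what the dial might look like when policy and procedure are separated, assuming a made-up three-level criticality scale. The levels and the activities mapped to them are my illustrations, not a prescribed policy:

```python
# A sketch of "threat modeling as a dial": policy selects a depth
# based on criticality, and procedure follows from that depth.
# The levels and activities are illustrative, not prescriptive.

from enum import Enum

class Criticality(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Policy: how far to turn the dial for each criticality level.
DIAL = {
    Criticality.LOW: "ask 'what can go wrong?' in the design discussion",
    Criticality.MEDIUM: "sketch a data-flow diagram and walk through STRIDE",
    Criticality.HIGH: "STRIDE-per-element review with the security team",
}

def threat_modeling_depth(criticality: Criticality) -> str:
    """Procedure is looked up from policy, not hardcoded into it."""
    return DIAL[criticality]

if __name__ == "__main__":
    for level in Criticality:
        print(f"{level.name}: {threat_modeling_depth(level)}")
```

The point of keeping the mapping in one place is that adjusting the dial is a policy change, not a rewrite of everyone’s procedure.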
So if you have people who are avoiding threat modeling, consider the reasons. An inflexible process can be a key contributor, and lately I’ve been seeing ... a switch flip because of this metaphor.
If you’d like help diagnosing or improving your threat modeling process, why not drop us a line, or join one of our open trainings at Blackhat? (Aug 2-3 or Aug 4-5)