Measuring the ROI of threat modeling: moving from activity to impact
Kymberlee Price, Shostack + Associates
Shostack + Associates COO Kymberlee Price shares her experience measuring the impact of secure design engineering practices on security outcomes
When it comes to cybersecurity, one of the most persistent challenges for organizations is proving the value of preventative work. Counting the number of threat models (TMs) delivered is a metric of volume, but it doesn't tell us whether the money and time we have spent are actually making the organization safer.
It is easy to measure the cost of Bad Things after they occur, but it is extremely difficult to prove that you’ve prevented a Bad Thing that may or may not have happened. While Google is producing excellent research into the impact of defensive engineering investments on reducing exploitation, the signal-to-noise ratio remains incredibly low: a few hundred 0-days a year measured against millions of lines of code makes direct attribution nearly impossible. When assessing “What Could Go Wrong” so we can prevent negative outcomes from occurring, we have to answer the question:
Did you successfully prevent harm or was there no actual threat of harm to begin with?
Fortunately, there are ways AppSec teams can measure the impact, not just the activity, of their threat modeling and secure development programs:
- Near Miss Tracking: Instead of just counting models, measure how many threats were identified, their severity, and how many were remediated before release. Over time, this data can be analyzed to uncover trends: are fewer threats being identified per threat model? Is that because engineering teams are improving their security design capabilities (a true negative), or because the skills or capacity of the people performing the threat models have changed (a false negative)?
- Risk Acceptance Tracking: How many findings were added to the risk register or formally accepted?
- Cost Avoidance Analysis: Based on severity ratings of the issues identified in your threat models, you can estimate the potential cost of the issues found. What would these vulnerabilities have cost the company in developer fix time or bug bounty awards if they had been identified post-release, requiring an incident response operation?
- Retrospective Analysis: Look at the 12-18 month window after a feature or product launches, and compare to other features or products across the company. Do components that underwent threat modeling have significantly fewer findings in the bug bounty program than those that didn't?
- Incident Correlation: Look at your incident artifacts. If 95% of your critical incidents occur in components that were never threat modeled, while your threat-modeled components rarely reach the critical incident response level, you have a powerful correlation that demonstrates value.
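As a rough sketch of how two of these metrics might be computed, here is a minimal Python example. Every component name, count, and dollar figure below is an invented assumption for illustration; your own data would come from your threat model findings, bug tracker, and incident records.

```python
# Hypothetical illustration: all component names, counts, and dollar
# figures below are made-up assumptions, not real data.

# Per-component records: was it threat modeled, how many critical
# incidents occurred post-release, and how many high-severity threats
# were caught and fixed before release.
components = [
    {"name": "auth-service",  "threat_modeled": True,  "critical_incidents": 0, "high_sev_threats_fixed": 4},
    {"name": "billing-api",   "threat_modeled": True,  "critical_incidents": 1, "high_sev_threats_fixed": 6},
    {"name": "legacy-portal", "threat_modeled": False, "critical_incidents": 5, "high_sev_threats_fixed": 0},
    {"name": "file-uploader", "threat_modeled": False, "critical_incidents": 3, "high_sev_threats_fixed": 0},
]

# Cost avoidance: an assumed average post-release cost per high-severity
# issue (developer fix time, bug bounty award, incident response overhead).
ASSUMED_COST_PER_HIGH_SEV = 25_000  # USD, illustrative only

cost_avoided = sum(
    c["high_sev_threats_fixed"] * ASSUMED_COST_PER_HIGH_SEV for c in components
)

# Incident correlation: what share of critical incidents occurred in
# components that were never threat modeled?
total_incidents = sum(c["critical_incidents"] for c in components)
unmodeled_incidents = sum(
    c["critical_incidents"] for c in components if not c["threat_modeled"]
)
unmodeled_share = unmodeled_incidents / total_incidents if total_incidents else 0.0

print(f"Estimated cost avoided: ${cost_avoided:,}")
print(f"Critical incidents in un-modeled components: {unmodeled_share:.0%}")
```

The point is not the specific dollar figure, which will always be an estimate, but that both numbers are derived from data you already collect and can be tracked quarter over quarter.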
By shifting our focus from how much work we are doing to quantifying risk identification, risk reduction, and cost savings, we can better communicate the true ROI of a robust threat modeling program.
Kymberlee Price has spent years building effective Secure Development and AppSec programs that balance engineering user experience, scalability, and instrumentation for measuring the impact of security efforts. If this blog post resonated and you’d like more help with your secure design and threat modeling efforts, check out the Shostack + Associates Accelerator Program – a leadership academy for security professionals who are accountable for an organization's secure-by-design engineering program. This advisory program helps companies actually change how they build secure software.
Image by Midjourney: “A woman at a large chalkboard filled with mathematical calculations; precise, ordered, the work of someone thinking rigorously. Marketing photo, high production.”