Secure By Design roundup - October 2025
Phil Venables is releasing a masterclass, new guidance from SAFECode, a new paper from JPMorganChase on their threat modeling tradecraft, how Facebook uses “waves”, a new AI shared responsibility model, and more!
Phil Venables, CSO of Goldman Sachs and then Google Cloud, has kicked off a series, Security Leadership Master Class. Even if you’re not a CISO, understanding the leadership principles he lays out is helpful to you.
Threat Modeling
- SAFECode and the Center for Internet Security have released Secure by Design: A Developer’s Guide to Building Safer Software.
- Pat Opet of JPMorganChase announced a paper on their threat modeling approach (“tradecraft”). You can read the LinkedIn post, jump to the corporate press release, or go directly to the paper.
- A new attack, “Battering RAM,” led to a dispute. Dan Goodin reports in Intel and AMD trusted enclaves, a foundation for network security, fall to physical attacks, and the subhead reads “The chipmakers say physical attacks aren't in the threat model. Many users didn't get the memo.” This raises the question: why aren’t Intel and AMD publishing their threat models? If they did, more users would have gotten the memo.
- The telecom sector seems unable to threat model or implement authentication, so anyone can be a base station. This leads not only to “Stingrays” (a generic term for cell-site simulators) but also to spam: Wired has a story, Cybercriminals Have a Weird New Way to Target You With Scam Texts.
- On LinkedIn, Luiz Vieira wrote a good framing of what threat modeling should be.
AppSec
- Allan Reyes has a longish article, Keeping Secrets Out of Logs, which is quite good and has nice Easter eggs. (A sketch of one common redaction technique follows this list.)
- In a blog post, Federation Platform and Privacy Waves, Facebook describes how it uses monthly “waves” of activity to help teams engage with their privacy work. Key concept: “Tasks are sent in Privacy Waves, which are batches of privacy-related work distributed at a predefined, predictable cadence.” (A toy sketch of the batching idea also follows this list.)
- Also on the subject of scaling, Ryan Hurst has an article, Compliance at the Speed of Code. It starts out a little obvious to set the scene, but then gets quite thought-provoking. I can see starting to reject stories that don’t contain at least a line like “no security implications” (sketched below, after the list).
- From CVE Entries to Verifiable Exploits: An Automated Multi-Agent Framework for Reproducing CVEs (arXiv) is interesting both because the authors can produce exploits for half of the small subset of CVEs where their tools can set up an environment, and because of the complexity of the LLM setup needed to deliver those results.
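Since log redaction comes up so often, here’s a minimal sketch of one common technique, a logging filter that scrubs secret-shaped strings before they’re written. This is my illustration, not necessarily an approach from Reyes’s article, and the patterns are examples you’d tune for your environment:

```python
import logging
import re

# Example patterns for secret-shaped strings; tune these for your environment.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret|token|api[_-]?key)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
]

class RedactingFilter(logging.Filter):
    """Scrub secret-shaped substrings from log records before they're emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub("[REDACTED]", message)
        record.msg, record.args = message, None  # freeze the redacted text
        return True  # keep the (now redacted) record

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(RedactingFilter())
logger.info("login ok, token=abc123")  # logs: "login ok, [REDACTED]"
```

Pattern matching like this is a last line of defense; keeping secrets away from the code paths that log at all is the stronger control.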
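The “waves” idea is simple enough to sketch, too. Here’s a toy model, my assumption of the mechanics rather than Facebook’s implementation, that batches a backlog into capped waves so each team sees a bounded, predictable load per cadence period:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    team: str

def plan_waves(backlog: list[Task], per_team_cap: int) -> list[list[Task]]:
    """Split a backlog into waves, giving each team at most
    per_team_cap tasks per wave."""
    by_team: defaultdict[str, list[Task]] = defaultdict(list)
    for task in backlog:
        by_team[task.team].append(task)

    waves: list[list[Task]] = []
    while any(by_team.values()):
        wave: list[Task] = []
        for team, tasks in by_team.items():
            wave.extend(tasks[:per_team_cap])     # this wave's slice for the team
            by_team[team] = tasks[per_team_cap:]  # remainder rolls to later waves
        waves.append(wave)
    return waves

# Ship waves[0] this month, waves[1] next month, and so on.
```

The point of the cap and the cadence is predictability: teams can plan around privacy work instead of being surprised by it.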
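Finally, the “reject stories without a security line” idea is easy to automate. A hypothetical sketch (the “Security implications:” convention and the script are my invention) that rejects a commit message or story description lacking such a line:

```python
import re
import sys

# Hypothetical convention: every change description must state its security
# implications explicitly, even if the statement is just "none".
REQUIRED = re.compile(r"(?im)^security implications:\s*\S+")

def has_security_line(text: str) -> bool:
    return bool(REQUIRED.search(text))

if __name__ == "__main__":
    body = sys.stdin.read()  # e.g., piped in from a commit-msg hook or CI job
    if not has_security_line(body):
        sys.exit("rejected: add a 'Security implications:' line (even if 'none').")
```

Wired into a commit-msg hook or a PR check, this forces authors to assert “none” explicitly rather than by omission.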
AI
- Mike Privette has released a new version of his AI Security Shared Responsibility Model.
- Benchmarking is hard. See two arXiv papers: The Illusion of Readiness: Stress Testing Large Frontier Models on Multimodal Medical Benchmarks (“Leading systems often guess correctly even when key inputs like images are removed, flip answers under trivial prompt changes, and fabricate convincing yet flawed reasoning. These aren't glitches; they expose how today's benchmarks reward test-taking tricks over medical understanding.”) and Medical Large Language Model Benchmarks Should Prioritize Construct Validity. Both are (obviously) about medical benchmarks, but the same concerns ought to apply in security. (Both via Jed Brown.)
- Despite the clickbaity headline, There isn’t an AI bubble—there are three argues that there are asset, infrastructure, and hype bubbles, and that they will have non-obvious business impacts, including eventual fire sales in infrastructure.
Regulation
- There are no regulatory updates, because the United States of America is unable to fund its ongoing operations and has shut down. See 2025 United States federal government shutdown (Wikipedia).
- Despite that, the FCC deems it essential to reconsider a set of security actions. The Wiley Rein letter referenced in the crucial footnotes is here.
Shostack + Associates News
- Adam has a new paper with Loren Kohnfelder, “Publish Your Threat Models! The Benefits Far Outweigh the Dangers,” which crystallizes our thinking on the topic. (Healthsec 2025, in lovely Waikiki.)
- We launched a new course at OWASP AppSec Global DC: Threat Modeling Intensive with AI. How can we use LLMs to help us threat model effectively, and how can we use them to help us scale? As we got ready to go, the customer-facing elements included 97 new slides, 6 new exercises, one new exercise template, and more laptops than we’ve ever used. As someone once asked, “What can go wrong?”
- Adam keynoted the main AppSec Global event, encouraging everyone to Stop Trying to Manage Risk!