Secure By Design roundup - July/Aug 2025
All the exciting secure by design news from the end of summer
This roundup covers July and August. Let me start with the blogosphere: Izar Tarandach and Michael Weiss have joined the resurgence of independent blogging. Michael's blogging at The Security Economist, and Izar here. I continue to think that moving away from platforms focused on engagement is a good thing, and I hope to call attention to more people blogging.
Threat Modeling
- Threat Modeling Connect released their State of Threat Modeling Report (2025). It’s the first-ever community-driven report, and will be covered in a webinar on Sept 4.
- A team at Amazon has released Threat Designer, a Claude-driven system to accelerate threat modeling with generative AI.
- William Dougherty, Patrick Curry, and a team that's grown beyond the INCLUDES NO DIRT team have released PROMISE TO MAP, an approach to threat modeling AI. It includes a sample system, the PROMISE TO MAP set of threats, a set of controls, and a worksheet.
Appsec
- Amirali Sajadi, Kostadin Damevski, and Preetha Chatterjee have released Are AI-Generated Fixes Secure? Analyzing LLM and Agent Patches on SWE-bench on arXiv.
“Our findings reveal that the standalone LLM introduces nearly 9x more new vulnerabilities than developers, with many of these exhibiting unique patterns not found in developers' code. Agentic workflows also generate a significant number of vulnerabilities, particularly when granting LLMs more autonomy, potentially increasing the likelihood of misinterpreting project context or task requirements.”
AI
- Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity is a fascinating study, showing that developers working on their own codebases were slower when using Cursor or Claude.
Learning Lessons
- Countering Chinese State-Sponsored Actors Compromise of Networks Worldwide to Feed Global Espionage System is a report from 14 countries (some with multiple agencies contributing). It's really fascinating, and I'd like to pick a nit over their calling a set of attackers who exploit known vulnerabilities "APTs." The work raises serious questions about the defenders of critical infrastructure.
- The US Coast Guard Marine Board of Investigation released its report on the Titan submersible. The report is absolutely scathing about many aspects of OceanGate's engineering, but two of the primary causal factors are directly applicable to software:
- 6) OceanGate’s failure to conduct detailed investigations after the TITAN experienced mishaps that negatively impacted its hull and components during dives conducted prior to the incident,
- 7) OceanGate’s toxic workplace environment which used firings of senior staff members and the looming threat of being fired to dissuade employees and contractors from expressing safety concerns,
It’s easy to focus on the second part of cause 6, the “hull and components” part, and miss the importance of detailed investigations of problems. Skipping those investigations is a widespread problem in software shops.
Regulation
When companies don’t learn lessons from incidents, regulation eventually follows. Two relevant stories:
- A good overview from Reed Smith of the new EU Product Liability Directive: Implications for software, digital products, and cybersecurity. Key line: “any defect in software, including vulnerabilities or failures in digital services, may trigger liability if it leads to harm.”
- The CBC reports that Quebec car theft victims get green light for lawsuit over key fob security, and the defendants(?) seem to be Toyota, Honda, Hyundai, Nissan, Mazda, FCA, Ford, Audi, Kia, Mitsubishi, Subaru, Volkswagen and Volvo.

Games received
- Straiker released Breach, a collectible game along with foil-imprinted cards. There’s a fascinating story behind how they made it, and that’s theirs to tell.
- Horizon3.ai Breach Chain. No web page.
- SocRadar has a Know Your Enemy Threat Actor card deck, with a very light guessing mechanism.
- The EFF poker deck.
Shostack + Associates News
- I’m editing this year’s DEF CON Hackers’ Almanack. If you saw good stuff with policy implications, let me know using the form at defconfranklin.com.
- We’re launching a new course at OWASP AppSec Global DC: Threat Modeling Intensive with AI. How can we use LLMs to help us threat model effectively, and how can we use them to help us scale?
- I spoke at USENIX Enigma on Risk Is Not a Hammer, and Most Hazards Aren't Nails. I hope the video will be out soon.