Secure By Design roundup - Dec/Jan 2026
The normalization of deviance, exciting threat modeling news, and a question of do regulatory threats change ‘the threat model’ as much as GPS attacks? Not yet.
This month leads off with Wunderwuzzi’s excellent article The Normalization of Deviance in AI. The context, which closely relates to both Diane Vaughn’s “normalization of deviance” and Charles Perrow’s concept of “normal accidents” to contextualize how we’re becoming accepting of AI-related defenses that would be shocking in any other context. He says “we can observe the drift of normalization occurring in real-time.” It’s a great point.
Threat Modeling
- ThreatModeler and IriusRisk are now a combined entity. See their sites or my Congratulations to ThreatModeler and IriusRisk!
- The team behind the State of Threat Modeling Report is gathering data for next year’s report.
- Signups for the 2026 Threat Modeling Hackathon are open. The hackathon runs Feb 2-27.
- Somewhat tangential to threat modeling, Murat Buffalo blogged some TLA+ modeling tips. TLA+ is a formal logic system used to find and prevent timing issues in concurrent and distributed systems. The list is perhaps of interest to threat modelers.
Appsec
- Apple has released guidance for Using alternative browser engines in Japan. (Japanese law now requires this.) Most of the page is about security and privacy requirements, and while a few are browser specific, most seem like Apple’s view of what modern app development for security-critical apps should be.
- Stars are now a bad metric for selecting open source projects. See Six Million (Suspected) Fake Stars in GitHub: A Growing Spiral of Popularity Contests, Spams, and Malware. (Via Andrew Nesbitt’s How to Ruin All of Package Management.)
AI
- OpenAI admits prompt injection is here to stay as enterprises lag on defenses is good perspective from Louis Columbus, prompted by an OpenAI post, Continuously hardening ChatGPT Atlas against prompt injection attacks.
- A thought provoking article by Steve Newman: Discarding the Shaft-and-Belt Model of Software Development. No real tie to security, but fascinating.
Regulation
- ETSI has released a set of Interim drafts of the CRA standards for 13 “high risk” categories. It’s not clear from that drop when comments are due or where the higher level standards are. Jumping into the 91(!) pages of the Cybersecurity requirements for Operating Systems says “The product’s firmware and/or software shall be implemented in a memory-safe language. Any use of unsafe memory features shall be documented to explain why they are necessary and do not present a security risk.” (5.2.3.4). I wasn’t aware that there are operating systems that are implemented in memory safe languages, but there are, and include MOSA and Embassy.
- Anthony Rutkowski has an extended essay on the dangers of the CRA, EU CRA: Regulatory Extremism and Exceptionalism.
- Patrick McKenzie has a long article on One Regulation E, Two Very Different Regimes which is both interesting in its own right, a good caution about Zelle, and a useful deep dive on repudiation. (My far less nuanced approach to Zelle is like my approach from debit cards: Avoid. Life’s simpler.)
- Maryam Shoraka has a fascinating article in BankInfosec: Dark Patterns, Children's Data and
Corporate Fiduciary Risk.
For CISOs and data governance leaders, this fundamentally changes the threat model. The risk isn't only "someone might break in and steal the data." It now includes "we designed the system in a way that makes it almost inevitable that a regulator will decide we've misused that data." That's a very different kind of incident.
- In entirely unrelated news, FTC Finalizes Order Settling Allegations that GM and OnStar Collected and Sold Geolocation Data Without Consumers’ Informed Consent. OK, maybe it’s related.
- There’s a good article U.S. Federal Agencies Are Stepping Up for the Quantum Security Transition. US agencies are often concerned with what are called “collect now, decrypt later” threats. Most organizations don’t have adversaries who are going to do this, and most organizations work faster than the US Federal government. (See also Peter Gutmann’s Why Quantum Cryptanalysis is Bollocks (video or slides pdf).)
Shostack + Associates News
- We released our first-ever threat advisory: Threat Advisory: GPS Attacks [SA-26-01]. Why? Well, that is a frequently asked question, and there’s an FAQ at the end of the advisory.
Image by midjourney: ”a photograph of a robot, sitting in a library, working on a jigsaw puzzle. The robot is spotlighted by light streaming in through a small window, through which you can it's snowing.” I appreciate how this one is holding up the jigsaw and it’s snowing inside, both demonstrating AI is bad at concepts.