Application and AI roundup - Jan 2024
A busy month+ in appsec, AI, and regulation.

Application Security
- The Terrapin Attack on SSH is fascinating, and on Hacker News, Colm MacCárthaigh says “Formal verification is still the most comprehensive and exhaustive form of testing that we have, but gaps can really bite you. It is like relying [on] Pythagoras' theorem for a triangle, but then it turns out that your triangles are on the surface of a sphere, which isn't a plane. The math itself is right, but the environment was different all along.” This is an important point, and it's worth considering the costs of creating and working with formal verification, which perhaps distracted from the missing handshake transcripts. (Tom Ptacek’s comment on the thread explains those transcripts succinctly.)
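Colm's analogy is concrete enough to check: on a sphere, the Pythagorean relation is replaced by the spherical identity cos(c) = cos(a)·cos(b), and the flat-plane formula quietly gives the wrong answer. A quick sketch (assuming a unit sphere, with side lengths measured in radians):

```python
import math

# Right triangle on a unit sphere: legs a and b meet at a right angle.
# The spherical Pythagorean theorem says cos(c) = cos(a) * cos(b),
# not c**2 = a**2 + b**2. The math is right; the environment differs.
a, b = 0.5, 0.5  # legs of about 28.6 degrees each

c_sphere = math.acos(math.cos(a) * math.cos(b))  # true hypotenuse
c_plane = math.hypot(a, b)                       # flat-plane "answer"

print(f"spherical hypotenuse: {c_sphere:.4f}")   # ~0.6919
print(f"planar hypotenuse:    {c_plane:.4f}")    # ~0.7071
```

The two agree for tiny triangles (where the sphere is locally flat) and diverge as the triangle grows, which is exactly the "gaps can really bite you" failure mode: a correct proof resting on an assumption the environment doesn't satisfy.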
- The Debian project has issued a statement on the EU CRA, including a mention of the “need to perform risk assessments and produce technical documentation.” I think they meant to object to that on the grounds that free software is a gift to society, but I don’t know how they develop ‘an integrated system of high-quality materials’ without those as inputs. It’s hard to tell, because despite the quote in the preamble, they never explicitly return to the idea.
- Bert Hubert continues to do outstanding, thoughtful work around what the EU’s CRA means. His latest EU CRA: What does it mean for open source? includes a response to the Debian statement.
- Sean Baxter writes about evolving C++ in backward-compatible ways in Circle.
AI
- Brendan Bycroft has an amazing LLM Visualization.
- In New GitHub Copilot Research Finds 'Downward Pressure on Code Quality', David Ramel reports on a regwalled GitClear report about code quality and LLMs. According to GitClear, AI assistance results in lots of code churn and copy-pasted code that violates the DRY principle.
- The Berryville Institute of Machine Learning released An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security (Regwalled), a refinement of their ML risk analysis.
Threat Modeling
A lot of people are exploring how we can use LLMs in threat modeling. I think of these explorations as a continuum from “use ChatGPT” to “train a custom model,” with many points in between, such as “LangChain that!” The other crucial spectrum is how much work the threat modeler or threat modeling team needs to do to prepare the LLM.
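At the “use ChatGPT” end of that spectrum, the preparation work is mostly assembling context into a prompt. A minimal sketch (the prompt wording and the example system are illustrative, not a recommended template; only the STRIDE categories come from the standard mnemonic):

```python
# Sketch of the "just prompt it" end of the spectrum: the threat
# modeler's prep work is assembling system context into one prompt.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

def build_threat_prompt(system_description: str) -> str:
    """Assemble a one-shot threat modeling prompt from a system description."""
    categories = ", ".join(STRIDE)
    return (
        "You are assisting with a threat model.\n"
        f"System: {system_description}\n"
        f"For each STRIDE category ({categories}), "
        "list plausible threats and suggested mitigations."
    )

prompt = build_threat_prompt("A web app with a Postgres DB and an S3 bucket.")
print(prompt)
```

Moving along the continuum means shifting that work: retrieval pipelines that pull in architecture docs, or fine-tuning so the model arrives already knowing your environment.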
Regulation
My State of Appsec in 2024 started with the rise of liability. As I think about it more, and as I read Jim Dempsey’s article, I think I want to talk about “legal consequences” to be more clear.
- Let’s start with that paper: Jim Dempsey has a new paper, Standards for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor, issued as the start of a Security By Design series by Lawfare. It opens with a concise and excellent survey of the relevant legal frameworks (warranty, negligence, liability, certification), and the challenge that our responses, like SDLs, are about processes, not software quality outcomes, and certainly not about safety of operation.
- John Voorhees wrote about Understanding Apple’s Response to the DMA (the EU’s Digital Markets Act), and how it’s changing Apple’s App Store.
- In an editorial at DataBreaches.net, “If entities continue to obfuscate and lie, it’s time to mandate more transparency in breach disclosures,” the inexhaustible blogger points out that “there are entities who rush to assure people that they have no evidence that data has been misused even though it’s early days, and even though they know that the data is in the hands of criminals who wouldn’t hesitate to misuse it.” They continue “DataBreaches believes that incomplete and misleading breach disclosures constitute an unfair practice as defined in the FTC Act.” They were building on a fairly outrageous story in which staff decided to ignore open meeting laws, ignore breach disclosure laws, and not discuss details that were already public because people could connect dots that could negatively affect their employer. I think such actions are going to result in harsher and harsher penalties. We as a community need to reset norms, soon, by condemning these choices and making clear that they’re unacceptable. You can’t go wrong remembering: it’s not the crime, it’s the coverup.
Shostack + Associates updates
We’ve made a set of changes to our courses website to continue simplifying how we communicate. There’s now a list of our most popular courses across the top of courses.shostack.org and we created course pages for them:
There’s still complexity, because our clients routinely want different things to meet their needs, and our /training page continues to evolve to help you make sense of it.

Image by Midjourney: “A robot that looks shocked and outraged by what it’s reading. The background is a library lined with books. The image is cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, 8K UHD --ar 8:3 --v 6.0”