Appsec Roundup - June 2024
The most important stories around threat modeling, appsec and secure by design for June 2024.
Threat Modeling
- The City of London police report that a homemade mobile antenna was used to send thousands of smishing messages. I’ve been skeptical of phone system security, and this story is important if you’re trusting the phone system, instructive as an example of an evolving threat, and really funny.
- Teaching Software Engineers to Threat Model: We Did It, and So Can You, Jamie Dicken, RSA.
- Redefining Threat Modeling: Security Team Goes on Vacation, Jeevan Singh, RSA.
- Microsoft’s Security Servicing Criteria for Windows is surprisingly useful for thinking about trust boundaries. It provokes the question: is there an element of trust that’s larger than security? Would we be better off calling them security boundaries, as MS does?
Appsec
- A collaboration between the ACM, IEEE-CS, and AAAI (Association for the Advancement of Artificial Intelligence) has released their Computer Science Curricula 2023. “CS2023 provides a comprehensive guide outlining the knowledge and competencies students should attain for degrees in computer science and related disciplines at the undergraduate level.” These will guide the content of computer science curricula for the next decade, because the accrediting organizations will treat them as authoritative. The security sections seem solid at first glance, but I’m worried that computer security did not play a large enough role in shaping the whole. (Announcement.)
AI
- Crossing the streams between Appsec and AI, Claudia Negri-Ribalta and co-authors have published A systematic literature review on the impact of AI models on the security of code generation. It’s an excellent review of academic literature, based on a search done in November 2023. Because it focuses on studies published by then, the most recent improvements in the security of LLM-generated code are not reflected. I’m skeptical that newer LLMs change the fundamental answers, any more than they solve hallucination. (For a concrete illustration of the kind of weakness these studies measure, see the sketch after this list.)
- A group of leading AI researchers have released a letter about a Right to Warn, advocating that staff be able to warn the public about risks from their employers’ products without being sued for disparagement or retaliated against. It’s specific and thought-provoking, and perhaps we should have a broader conversation about it, covering not just AI but also security and privacy.
- Not security specific, but What We Learned from a Year of Building with LLMs is excellent.
- Sebastian Raschka publishes a monthly roundup of LLM research. It’s longer than I’d like, but more selective than I get from other sources.
- CBC reports: Winnipeg man caught in scam after AI told him fake Facebook customer support number was legitimate. I don’t want to victim shame, but everyone “knows” Facebook doesn’t have support phone lines. More seriously, the victim, a former member of “the legislature” (by which I think they mean the provincial one, not the national), was no idiot. He took the time to try to check that the number he found was legit. It should not be so hard.
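To make the code-generation item above concrete, here’s a minimal sketch (mine, not an example drawn from the paper) of the classic pattern those studies look for: SQL built by string interpolation, which LLM assistants have often been observed to produce, next to the parameterized fix. The function names and schema are hypothetical, for illustration only.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    # Anti-pattern often flagged in studies of generated code: building SQL
    # by string interpolation. Input like "x' OR '1'='1" rewrites the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # The standard fix: a parameterized query, so the driver treats the
    # input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # The injected input dumps every row via the unsafe query...
    print(find_user_unsafe(conn, "x' OR '1'='1"))  # [(1, 'alice@example.com')]
    # ...while the parameterized query correctly returns nothing.
    print(find_user_safe(conn, "x' OR '1'='1"))    # []
```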
Shostack + Associates updates
- I’ll be teaching at Black Hat, August 3-4 and August 5-6.
- Also, the magic is back... Magic Security Dust, that is. (Our training magic never left.) Visit Agile Stationery to order; the website will be updated soon to reflect the new stock.
Image by Midjourney: “a photograph of a robot, sitting in a library, working on a jigsaw puzzle --ar 8:3”