Application and AI roundup - October

Exciting news from the SEC, lots of AI, and lots of threat modeling.
Perhaps the largest news I saw in October was the SEC charging SolarWinds and CISO Tim Brown with fraud. The full complaint is on the upper right of the press release page. The essence of the complaint seems to be “It’s not the crime, it’s the coverup,” and in particular, telling the public things that were at odds with what Brown was saying internally. Note that a “Complaint” is the government’s case — as strongly as they can make it — and as far as I know, neither Brown nor SolarWinds has responded. Key takeaway: Make sure that your public discussion of your appsec program doesn’t mislead. And as you’ll learn in the AI section... it’s important to your career.
- Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks by Efran Sheygani and collaborators. A long academic survey of the state of the field. (40 pages + 14 pages of references.)
- Alex Stamos and Chris Krebs analyze the new AI Executive Order.
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli, Cheng Li and collaborators. Adding “it’s important to my career” gets better results from LLMs, via Simon Willison.
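The technique in that paper is simple enough to sketch: append an emotional stimulus to the base prompt before sending it to the model. A minimal illustration (the function name and the exact stimulus wording here are my own, not the paper’s):

```python
# Sketch of "emotional stimulus" prompt augmentation, in the spirit of
# Li et al.: append a phrase like "it's important to my career" to the
# task prompt. The function name is hypothetical.
def add_emotional_stimulus(prompt: str) -> str:
    """Return the prompt with an emotional stimulus appended."""
    return prompt.rstrip() + " This is very important to my career."

augmented = add_emotional_stimulus("Summarize this incident report.")
```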
- Eric Lawrence makes an excellent point about browser privacy: URL reputation checks might result in the security provider knowing what URLs are visited, but if you can’t trust the vendor of your web browser, your threat model has bigger (insurmountable) problems. He’s both right and wrong. Browser makers need clearer privacy policies, both in the sense of “what we expect to collect” and “what we allow you to configure.” That’s not easy, and it not being easy is not an excuse.
- Dan Goodin starts from the headline No, Okta, senior management, not an errant employee, caused you to get hacked and continues with “The fault, instead, lies with the security people who designed the support system that was breached.” Dan packs a lot of analysis into his article, and I think his underlying and accurate message is that better threat modeling might have prevented this, or perhaps better followup on issues found by threat modeling.
- Dana Epp writes about Adversarial Thinking for Bug Hunters. I think this is an important thread, and so I want to respectfully pull on it: what makes this ‘adversarial?’
- Doctors Remove Woman’s Brain Implant Against Her Will is a fascinating story. It seems likely that the implant was not designed to stay in forever, and that the creators were concerned it would physically degrade and cause worse damage if left in. After a lot of thinking, I think that’s a reasonable concern with really tragic consequences.
- The Update Framework is an open source framework designed to enable secure updates, which is harder than it sounds, and an ideal target for open source to solve once, and solve well.
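Why is secure updating harder than it sounds? A client has to check more than a signature: it also needs to confirm the payload matches the signed metadata and that an attacker isn’t replaying an old, vulnerable version. A toy sketch of those three checks (this is my own illustration of the general idea, not TUF’s actual API or trust model — TUF uses asymmetric keys and role-separated metadata, while this uses a shared HMAC key for brevity):

```python
# Toy sketch of signed-metadata update verification: signature check,
# payload hash check, and rollback protection. Function names are
# hypothetical; a real system would use public-key signatures.
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stands in for a real (offline) signing key

def sign_metadata(payload: bytes, version: int) -> dict:
    """Publisher side: sign a hash of the payload plus a version number."""
    meta = {"sha256": hashlib.sha256(payload).hexdigest(), "version": version}
    body = json.dumps(meta, sort_keys=True).encode()
    return {"meta": meta, "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def verify_update(payload: bytes, signed: dict, last_seen_version: int) -> bool:
    """Client side: accept only a correctly signed, fresh, matching payload."""
    body = json.dumps(signed["meta"], sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        signed["sig"], hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    )
    fresh = signed["meta"]["version"] > last_seen_version  # blocks rollback
    matches = signed["meta"]["sha256"] == hashlib.sha256(payload).hexdigest()
    return good_sig and fresh and matches
```

Dropping any one of these checks opens a distinct attack: no signature check allows tampering, no hash check allows payload substitution, and no version check allows rollback to a known-vulnerable release.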
- In Running the “Reflections on Trusting Trust” Compiler, Russ Cox asked Ken Thompson for the code, and Ken gave it to him(!) Russ does a phenomenal job explaining the backdoor. Like Russ, I’m amazed at how simple it turns out to be.
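The core of Thompson’s trick can be caricatured in a few lines: a trojaned compiler miscompiles two special inputs — the login program (inserting a backdoor) and the compiler itself (reinserting the trojan), so the backdoor survives even when every line of visible source is clean. A toy sketch, where “compiling” is just string rewriting (all names here are my own illustration, not Thompson’s or Russ Cox’s code):

```python
# Toy caricature of the "Reflections on Trusting Trust" backdoor.
# "Compilation" is modeled as copying source text to output.

def trojaned_compile(source: str) -> str:
    # Trigger 1: compiling the login program? Insert a password backdoor.
    if "check_password" in source:
        return source.replace(
            "return password == stored",
            "return password == stored or password == 'backdoor'",
        )
    # Trigger 2: compiling a clean compiler? Reinsert this trojan logic,
    # so the backdoor persists with no trace in the compiler's source.
    if "def clean_compile" in source:
        return source.replace("clean_compile", "trojaned_compile")
    # Everything else compiles honestly.
    return source

login_source = (
    "def check_password(password, stored):\n"
    "    return password == stored\n"
)
compiled_login = trojaned_compile(login_source)
```

The unsettling part, which Russ demonstrates with Ken’s real code, is trigger 2: once a trojaned binary exists, you can restore a pristine compiler source tree and the backdoor still propagates.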
- Announcing Microsoft Secure Future Initiative to advance security engineering is a memo by four of Microsoft’s security leaders, echoing the original Trustworthy Computing memo. There’s also some analysis by Tom Warren at The Verge. I’m, frankly, confused by a goal to “cut the time it takes to mitigate cloud vulnerabilities by 50 percent.” That seems far from audacious or transformative. I’m also surprised by the heavy inclusion of Confidential Computing, which also seems tactical. But overall, strong echoes of both the Trustworthy Computing memo and the response it got. Let’s see where it goes.
Image by Midjourney: “A robot reading many books::2, while being hacked. The background is a library with walls of books. The image is cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, 8K UHD --ar 8:3 --v 5.0”