Application and AI roundup - May
This month runs quite heavy on AI, but CISA's Secure by Design and Default document is going to be important for the next several years. CISA's new guidance on security by design and by default is another large brick in an emerging strategy, and it's not just a US strategy. The document carries logos from the US, UK, Canada, New Zealand and Australia (the "Five Eyes" intelligence alliance), but also the Germans and Dutch, with multiple agencies stepping up from several of those countries. This sort of alignment is hard work, and will likely be followed by regulation and law in many of those places.
AI
- Ram Shankar Siva Kumar and Hyrum Anderson have announced Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them. I enjoyed it (review to follow), and highly recommend it as an overview of the field.
- Kai Greshake has an article, The Dark Side of LLMs: We Need to Rethink Large Language Models with the subtitle “We cannot deploy the current crop of LLMs safely.”
- A group led by Gadi Evron released Generative AI and ChatGPT Enterprise Risks. I contributed.
- OpenAI released a GPT-4 System Card, an extended writeup of its safety and security risks. (There are critics who say it doesn’t go deep enough, and is too focused on surface metrics. #include Jeff Goldblum meme.)
AI meets Appsec
Static analysis tool Semgrep announced a GPT-4 integration. Their first example is fascinating: the code hardcodes a password, and they say it’s safe to ignore. I think it’s not safe; the sample code should show how to get the password from a secret store API. I had a good conversation with their folks about the tradeoff, and what I take away is the threat, and the need for vigilance, as we think about tooling.
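To make that concrete, here’s a minimal sketch of what I’d want the sample to show instead: pulling the password from a secret store at runtime. This assumes AWS Secrets Manager via boto3; the secret name "prod/db-password" is hypothetical, and any secret store (Vault, GCP Secret Manager) works the same way.

```python
# A minimal sketch: replace a hardcoded password with a secret-store lookup.
# Assumes AWS Secrets Manager via boto3; the secret name is hypothetical.
import boto3

# Before (the pattern the finding flags):
# DB_PASSWORD = "hunter2"  # hardcoded -- lives forever in source control

def get_db_password() -> str:
    """Fetch the password from the secret store at runtime."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId="prod/db-password")
    return response["SecretString"]
```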
Application Security
- FDA Warns of Cybersecurity Vulnerabilities in Certain DNA Sequencing Devices explains that Illumina’s DNA sequencer can be accessed without a password.
- Google plans to add end-to-end encryption to Authenticator is a bit of a jaw-dropper. How do you roll out a feature that copies super-sensitive data to the cloud without encrypting it? (A sketch of what device-side encryption looks like follows below.)
In her cybersecurity roundup, where I saw both of those, Violet Blue asks the same question: “How do medical research devices get made without passwords? How do Google employees stay employed at Google or anywhere on Earth after releasing a security tool to move critical security data with no end-to-end encryption?” My answer comes in two parts. First, what to look for is far more obvious with hindsight. These systems are big and complex, security is a weird niche, and so building security into engineering processes is hard. The second part is that, historically, Google has hired really smart people and trusted them to do the right thing. They’ve been described as ‘process allergic,’ and that works better when you’re smaller.
The reason I wrote Threats is that ‘security is a weird niche’ is less and less acceptable as a reason to be insecure. As that happens, we need to make it easy to access the knowledge that people need.
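The Authenticator fix is a well-understood pattern: encrypt the seed on the device, with a key derived from something the server never sees, before it syncs. A minimal sketch in Python, assuming the cryptography library; the passphrase flow is simplified, and a real product needs key rotation, recovery, and authenticated metadata.

```python
# A minimal sketch of device-side ("end-to-end") encryption of a TOTP seed
# before it syncs to a cloud backup. The server stores only (salt, ciphertext);
# it never sees the passphrase or the derived key.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_seed(totp_seed: bytes, passphrase: bytes) -> tuple[bytes, bytes]:
    """Encrypt the seed with a key derived from a user passphrase."""
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    return salt, Fernet(key).encrypt(totp_seed)
```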
I’ll close this month with a quite unusual denial of service: a performance of the musical The Bodyguard was halted after a fan sang along.
Image by Midjourney: an AI reading a book, while being hacked, cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, ultra realistic, 8K UHD --ar 8:3 --v 5