The State of Appsec in 2024
2024 is bringing lots of AI, and Liability, too
At the start of 2024, appsec is moving through two major inflection points: AI and liability. The first has two facets: how do we secure AI systems, and how do we use AI in appsec? The second is driven by governments re-arranging liability from software operators to software makers. And as I think about where we are in 2024, I’m optimistic and hopeful because of a third change, much more nascent, that lays groundwork for assessing and improving both of those transformations. Let me start with the AI changes, because while they have lots of crucial details, they’re conceptually very simple.
Appsec will be a crucial part of safer AI deployments
Your executives want to deploy AI. AI, especially generative LLMs, replaces finicky, unpredictable, expensive people with finicky, unpredictable, expensive software that doesn’t complain when told to come into the office. But importantly, you can’t just “deploy” an LLM. You have to build it into business processes, and if you don’t think about what can go wrong and what you’re going to do about it, don’t worry: other people will help you figure those things out while making off with your crown jewels. So you’re going to need to threat model, and you’re going to need to determine what defenses make sense around your new AI systems. The problems are not new, but they are increasingly urgent. Bolting security onto an LLM is going to be exceptionally, embarrassingly ineffective, and so we’ll see growth in appsec. We’ll need to learn a lot to make this work, but we cannot do it without appsec.
AI will improve appsec
Many of the challenges that appsec brings are solvable with specialized knowledge, and LLMs, used well, can make that specialized knowledge easier to find. Of course, it will remain tricky to evaluate the information (not knowledge) that gets returned, and doing so will get more important. Nevertheless, we’ll see organizations learning to use LLMs to assess features, code and designs, and to suggest improvements. There will be embarrassing missteps along the way that will make it easy to focus on how AI is hurting appsec. Many people will focus on how AI is writing insecure code, but that will likely get sorted out by the creation of code-specific LLMs with better training data, by better prompts for getting secure code, and by better output filtering and checking. It probably won’t be in 2024, but I expect we’ll see a time when machine-generated code is safer than human-generated code.
Liability is here, and shifting
Today, we tend to blame breached companies for their woes, and there are often good reasons for that. Underinvestment in patching or configuring systems is rampant! But there’s another reality, which is that a lot of the software which needs patching and configuration is made by companies who are earning billions of dollars in annual profits, rather than investing more in making that software more robust or easier to configure. CISA has been talking a lot about Ralph Nader’s “Unsafe at Any Speed,” and how many cars were unsafe until government stepped up to measure and regulate more strongly.
Liability for software makers isn’t just coming, it’s here. It’s here in the form of the European CRA, and it’s here in the US for anyone who makes medical devices, sells to the federal government, moves money, makes software that kids use, rents videos, operates transportation systems or other critical infrastructure, works with location data... who needs to do what is very, very complicated, and it’s going to get worse.
It’s easy to think that “software makers” are just a few big tech companies in Silicon Valley or Seattle, but as Marc Andreessen pointed out a decade ago, every company was already a software company. Every company makes software, if only in Excel, IFTTT, Salesforce, Hubspot and the like.
It’s also easy, because the regulatory train is still rolling down the tracks, to think that we can wait for it to reach the station before the effects are felt. That would be a mistake. Many of these changes are going to take time to implement. For example, changes to “Secure by Default” (CISA Secure by Design, page 9) may entail changes to documentation or installers. Security configuration checkers are implied by the phrase “The complexity of security configuration should not be a customer problem.” Should you be building one? I don’t know your circumstances, but I would encourage everyone to start figuring it out now, rather than waiting until OMB releases their attestation requirements for selling to the US Government. Similarly, when the same guidance calls for “organizational structure and leadership,” those are going to take time. Do you want to be explaining in your SEC filings that you haven’t done those things? After a breach, do you want to be explaining to plaintiff counsel that you were going to get to it once the rules were firmed up? There’s plenty of detail, and the longer you wait, the more clearly tenuous your explanations are going to be.
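To make the “security configuration checker” idea concrete, here is a minimal sketch of what one might look like: it compares a deployed configuration against secure-by-default expectations and reports deviations. All setting names and the baseline here are illustrative assumptions, not any product’s real schema or CISA’s requirements.

```python
# Hypothetical security configuration checker (illustrative only).
# It diffs a deployment's settings against a secure baseline and
# reports each deviation in plain language.

SECURE_BASELINE = {
    "admin_password_is_default": False,  # default credentials must be changed
    "tls_min_version": "1.2",            # no legacy TLS
    "remote_admin_enabled": False,       # management interface off by default
    "audit_logging_enabled": True,       # logging on by default
}

def check_config(config: dict) -> list[str]:
    """Return human-readable findings for settings that miss the baseline."""
    findings = []
    for key, expected in SECURE_BASELINE.items():
        actual = config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# Example: a deployment that still has remote administration turned on.
deployed = {
    "admin_password_is_default": False,
    "tls_min_version": "1.2",
    "remote_admin_enabled": True,
    "audit_logging_enabled": True,
}
for finding in check_config(deployed):
    print(finding)
```

The interesting work, of course, is not the diffing loop but deciding what belongs in the baseline, which is exactly the “secure by default” judgment the guidance asks makers, not customers, to own.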
Let me be frank: This is going to be a hard transition. Lawyers will increasingly try to influence software development. (Smart lawyers will make time to learn to code a little.) Leadership who can continue development at speed while managing these new challenges will be scarce. Good threat modeling to reduce rework, rather than bad threat modeling that enables paperwork, is going to be important.
The emergence of Cyber Public Health
One of the things that makes liability law bad is that we don’t have a quantified understanding of what’s going wrong, in the sense of root causes and contributing factors, and that makes it impossible to pass laws which effectively and narrowly target those root causes. Instead we get broad laws which require things like security awareness training and insanely short-trigger provisions about telling regulators. We get requirements to change passwords every 90 days baked into regulations.
The last trend is one that we’ve been building towards: a science of cyber public health. CyberGreen has released a set of interesting technical reports, run a seminar series jointly with the Ostrom Workshop, and with support from Google, is organizing the world’s first workshop on cyber public health in January 2024.
The public health frame is one I’ve used as far back as Project Broad Street, which led Microsoft to fix Autorun. Focusing on population health gives us a way to identify impactful problems, and the tools of public health give us proven ways to address them.
Update: Fixed an extraneous “not” so people should start figuring it out now, not wait. Thanks, LK!
Image by Microsoft copilot: “create for me a stock art image of 2024 with the words "liability is coming" and "AI" and "Appsec" in it. bright and airy. impressionist colorism, bokeh panorama, molecular structures. aspect ratio of 8:3/Yes, I can make the first image bigger and more detailed for you. Here is the improved version of the image. I hope it meets your expectations. 😊” < LIAR