
Threat Modeling AI Systems: Finding the Line Between Application Security and AI Security

Michael Novack, Shostack + Associates

Announcing a new course from the Shostack + Associates team.

AI security currently exists in a sort of Schrödinger’s Box where people simultaneously under-complicate and over-complicate it. When we talk with organizations about securing their AI systems, we consistently see security practitioners fall into two camps:

  1. The first group believes: “There’s nothing new here.” In this view, AI security is simply traditional security applied to a new type of application. The solution becomes a checklist of threats and controls layered on top of existing processes.
  2. The second group believes the opposite. They argue that AI requires entirely new frameworks, methodologies, and mental models that are fundamentally different from everything that came before.

The reality is somewhere in the middle, which is how we approach the problem in our new Threat Modeling AI Systems course.

The Problem: AI Security Is Both New and Familiar

When a new technology appears, the natural instinct is to ask: What are the threats? What controls do we need? What boxes must be checked? That instinct is understandable. Security teams are already managing cloud security, application security, identity systems, supply chain risk, compliance requirements, and a long list of other responsibilities. Trying to keep up with the pace of AI development on top of that can feel impossible. A checklist offers a shortcut. But AI is evolving too quickly for static lists of threats and controls to remain useful.

AI systems have introduced several new security considerations. Machine learning models, training data pipelines, probabilistic outputs, and emergent model behavior create hazards that traditional application security does not need to address. At the same time, not all of AI security is new; much of AI security involves applying existing security practices to data science workflows. The real challenge is identifying the line where traditional security ends and AI-specific risks begin so the right approach can be applied.

For some practitioners, AI security can feel overwhelming because it sits at the intersection of security, software engineering, and data science. Organizations have long had to secure data pipelines, analytical models, and sensitive datasets. What has changed, and what adds to the complexity, is the scale and accessibility of these systems. Modern AI has dramatically increased the number of models being built, the number of developers interacting with them, and the number of decisions influenced by them. The risks are more visible now, but the underlying principles are not new. A goal of this course is to make that complexity approachable. We break concepts down into manageable pieces and reinforce them through threat modeling exercises and interactive scenarios.

Building the Right Foundation

Shoshana Cox and I designed the Threat Modeling AI Systems course to establish the data science foundation that security practitioners need in order to reason about AI systems. Before discussing threats, we walk through:

  • How data pipelines work
  • How models are trained
  • How model behavior differs from traditional software
  • How AI systems are deployed and integrated into applications

Once those systems are understood, many of the security implications become clear.

After establishing that foundation, we focus on the areas where AI security is genuinely different from traditional application security. However, the goal is not to memorize a list of threats. Lists become outdated quickly. Instead, we explore why these threats exist and what properties of AI systems create them. Through exercises and threat modeling scenarios, practitioners learn how to identify these risks themselves. This prepares them to reason about new threats and new controls as AI evolves.

The Solution: A Threat Modeling Foundation That Lasts

AI evolves quickly. New models, attack techniques, and frameworks appear constantly. Trying to chase every new development is not sustainable. Instead, the Threat Modeling AI Systems course focuses on building a durable mental model grounded in data science workflows and security principles. With that foundation, practitioners can understand how new threats and controls build on existing ideas. In a field moving as fast as AI, that kind of understanding allows security practitioners to stay effective for years, not just months.


Michael Novack and Shoshana Cox created Shostack + Associates’ newest course, Threat Modeling AI Systems, an in-depth technical course for security professionals already familiar with threat modeling that focuses on the unique challenges AI brings to modern applications. They will be delivering this course in May 2026 in Washington, DC; you can learn more and register here.