The Universal Cloud TM -- Threat Model Thursday

A new universal threat model - what can we learn from it?

[Image: an AI-drawn map of the world]

Recently, Rich Mogull of Securosis and Chris Farris of PrimeHarbor released what they call “The Universal Cloud Threat Model.” They say:

The Universal Cloud Threat Model applies to all organizations which operate in the public cloud, regardless of industry and which cloud provider(s) they operate on. The UCTM was designed as a cloud-centric update to traditional threat modeling. Standard threat models such as STRIDE are excellent, but do not account for the different operating models of cloud computing. The UCTM was developed to address three primary gaps in existing models...
It’s a 28-page model, freely downloadable, and licensed under CC BY-NC-ND 3.0. As always with Threat Model Thursdays, my goal is to respectfully look at interesting work and see what we can learn from it.

Overall, this is a really useful model. When we have well-known problems, there’s no point in reasoning from first principles. There may well be expansion-of-authority attacks on databases beyond SQL injection, but do they matter if you’re still vulnerable to SQL injection? They express this as “In our research and experience, the vast majority of cloud attacks fall first into the untargeted/undifferentiated category, even for highly desirable targets, and defenders who focus first on these vectors are more resilient.”

Their model is “Threat Actors have Objectives against Targets using Attack Vectors which are observed by defenders as Attack Sequences.” This is useful in that it makes their threats mechanically parsable, and perhaps less useful if that phrasing becomes pedantic or conceals detail. In particular, the inclusion of threat actors and objectives can occlude important detail.
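To make “mechanically parsable” concrete, here’s a minimal sketch of that sentence structure as a Python dataclass. This is my own illustration, not anything from the UCTM: the class, the field names, and the example values are all assumptions chosen to show how the five parts of the sentence decompose into structured data.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One UCTM-style statement, parsed into its five parts (hypothetical)."""
    actor: str            # e.g. "financially motivated criminal"
    objective: str        # e.g. "financial gain via cryptomining"
    target: str           # e.g. "compute resources"
    vector: str           # e.g. "lost/stolen credentials"
    sequence: list[str]   # the ordered steps a defender might observe

    def sentence(self) -> str:
        # Render back into the canonical "actors have objectives against
        # targets using vectors" sentence form.
        return (f"{self.actor} has objective {self.objective!r} "
                f"against {self.target} using {self.vector}")

t = Threat(
    actor="Financially motivated criminal",
    objective="financial gain via cryptomining",
    target="compute resources",
    vector="lost/stolen credentials",
    sequence=["obtain credentials", "launch instances", "mine"],
)
print(t.sentence())
```

Once threats are data rather than prose, the tradeoff the authors face is visible: every threat must fit the schema, which is exactly where pedantic phrasing or concealed detail can creep in.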

  • Their list of threat actors is interesting, and it doesn’t mention the cloud operator except possibly via “insider threats.” Incidents like the UniSuper one are an important counterexample, even if you shouldn’t focus on them. Similarly, the list of motivations is not bad, but it excludes mistakes, errors, or slips. It also excludes a Vice President yelling at an individual contributor. Of course, that’s just my impression, but the opacity of why emergency procedures were invoked... When we create a list of ‘most common objectives,’ we create a situation where an attacker can do something outside that list. For example, connecting to localhost may not be on our list but turns out to be a stepping stone. By putting a list of objectives in place, we create a disadvantage when the objective isn’t so glamorous. Maybe that’s a good thing that gets us focused; maybe it’s not.
  • Going on from there, there’s an exceptionally nice list of common attack vectors. Stepping away from the UCTM for a moment, we lack good, consistently gathered data on incident root causes. We have lists like this one, which line up with ‘the sort of clients the authors work with.’ Is that generalizable? Probably? That’s the reason I’m working on cybersecurity public health.
  • They suggest prioritizing attack vectors by discoverability, exploitability, and impact. I believe each of these is hard to measure with either precision or accuracy. But more importantly, the scheme leaves out the work required to fix: some things are really easy to fix, others very hard. (A sketch of what a fix-aware ranking could look like follows this list.)
  • The next section is attack sequences, and I have a love/hate relationship with this part. Starting with the first one, “Threat Actor Copies/Alters a Public Data Resource”: I’m honestly confused here. If the data is public, how does the attacker sell it? In the next one, why don’t unpatched vulns lead to cryptomining? Oh, wait: cryptomining shows up twice! There are also variations in abstraction: for example, page 22 lists read-only and read-write, while page 25 has ‘cloud storage.’ I bring this up not to be nitpicky, but because as a reader I have to ask, ‘Is this different from what I read a few pages back? Why is the label different?’ This is a normal effect of writing: being consistent can feel pedantic, but inconsistency has a cost for the reader.
  • Some small issues with the figures: there’s no figure number or other reference number, and the text is very small. A different set of diagram conventions could help. They could use different shapes, or cross lines so that each outcome shows up only once. Different shapes can be confusing, but if each is also labeled, it’s less challenging than different shapes in a system model diagram.
  • In the text lists below the diagrams, the “Objectives: Financial gain via X” lines could be a lot shorter; in many cases a parenthetical like “(Spam/phishing)” at the end of the vector bullet would do. That would save two lines per attack and make scanning easier. Similarly, although it’s a small thing, every action in every diagram starts with “Attacker,” and removing that would make scanning easier.
  • Normally, I avoid reviewing nameless things, and the UCTM comes pretty close to missing that bar. Each part of “Universal Cloud Threat Model” is a descriptor, not a name. We don’t talk about Guido’s programming language, we talk about Python, and more, we talk about Python 2 vs 3. I’d like to talk about the “rainy lake” threat model or whatnot. (After a conversation with Rich, this is not as clear as I’d hoped; the phrase “give it a name that’s trademarkable” was helpful.) On the bright side, the UCTM has a version.
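As promised above, here is a minimal sketch of a fix-aware prioritization. This is my own illustration, not anything in the UCTM: the Vector class, the 1-5 scales, the multiplicative weighting, and the division by fix effort are all assumptions, chosen to make the point that cheap fixes to serious exposures should float to the top of the list.

```python
from dataclasses import dataclass

@dataclass
class Vector:
    name: str
    discoverability: int  # 1 (hard to find) .. 5 (found by routine scanning)
    exploitability: int   # 1 (difficult) .. 5 (point and click)
    impact: int           # 1 (minor) .. 5 (severe)
    fix_effort: int       # 1 (quick config change) .. 5 (re-architecture)

def priority(v: Vector) -> float:
    # Risk as the product of the UCTM's three factors, discounted by the
    # effort to fix, so cheap fixes to serious exposures rank highest.
    return (v.discoverability * v.exploitability * v.impact) / v.fix_effort

# Hypothetical example vectors with made-up scores.
vectors = [
    Vector("public storage bucket", 5, 5, 4, 1),
    Vector("unpatched internet-facing vuln", 4, 4, 4, 3),
    Vector("overprivileged IAM role", 2, 3, 5, 4),
]
for v in sorted(vectors, key=priority, reverse=True):
    print(f"{priority(v):6.1f}  {v.name}")
```

The numbers are fake, and that’s part of the point: each input is hard to measure with precision or accuracy, so any such ranking should be treated as a conversation starter, not a verdict.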

When I step back to think about it, this is a model of inherent threats. Part of me wants to see them jump to a “mitigations model,” where, having shown their threat model, we get directly to ‘what are we going to do about it.’ Maybe that’s a subset of CSA, CSF, or other controls. Maybe that’s a bad idea. But the UCTM can be taken as an interesting version one of a meta-model, and it’ll be interesting to watch it evolve.

Image by Midjourney. I fed the entire post into it to see what would happen.