Threat Model Thursday: Synopsys
There's an increasing — and valuable — trend to publish sample threat models. These might be level sets for customers: "we care about these things." They might be reassurance for customers: "we care about these things." They might be marketing, they might serve some other purpose. All are fine motives, and whatever the motive, publishing them gives us an opportunity to look at and compare the myriad ways models are created, recorded and used. And so I'm kicking off a series I'm calling "threat modeling Thursdays" to do exactly that.
Up front, let me be clear that I'm looking at these to learn and to illustrate. It's a dangerous trap to think in terms of "the way to threat model." There's more than one way to do it, as the Perl mavens say. Nothing here is intended to say "this is better or worse." Rather, I want to say things like "if you're a consultant, starting with scope is more important than when you're a developer."
So today's model, kicking off the series, comes to us from Synopsys, in a blog post titled "The 5 pillars of a successful threat model." What's there is great, and it's very grounded in their consulting practice.
Thus, step 1 includes "define the scope and depth. Once a reasonable scope is determined with stakeholders, it needs to be broken down in terms of individual development teams..." Well, sure! That's one way to do it. If your threat models are going to be executed by consultants, then it's essential. And if your threat models are going to be done as an integral part of development, scoping is often implicit. But it's a fine way to start answering the question of "what are we working on?"
Step 2 is "Gain an understanding of what is being threat modeled." This is also aligned with my question 1, "what are we working on?"
The diagram is great, and I initially wanted the internet trust boundary to be more pronounced, but leaving it the same as the other boundaries is a nice way to express "threats come from everywhere."
The other thing I want to say about the diagram is that it looks like a nice consulting deliverable. "We analyzed the system, discovered these components, and if there's stuff we missed, you should flag it." And again, that's a reasonable choice. In fact, any other choice would be unreasonable for consultants to make. And there are other models. For example, a much less formal whiteboard model might be a reasonable way for a team starting to threat model to document and align around an understanding of "what we're building." The diagrams Synopsys present take more time to produce than the less formal ones. They also act as better, more formal records. There are scenarios where those more formal records are important. For example, if you expect to have to justify your choices to a regulator, a photo of a whiteboard does not "convey seriousness."
Their step 3 is to model the attack possibilities. Their approach here is a crisp version of the "asset/entry point" approach that Frank Swiderski and Window Snyder present in their book. "Is there any path where a threat agent can reach an asset without going through a control?"
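To make that question concrete, here's a minimal sketch (mine, not Synopsys's notation or tooling): treat the system as a graph of components, hang controls on the edges, and ask whether an agent's starting position can reach an asset along edges that carry no control. The component names, controls and starting position below are all illustrative assumptions.

```python
from collections import deque

# Toy model: directed edges between components, each annotated with the
# controls that sit on that hop. All names here are illustrative.
edges = {
    ("internet", "web_app"): ["tls", "waf"],
    ("web_app", "app_server"): ["authn"],
    ("app_server", "database"): [],          # no control on this hop
    ("internal_network", "app_server"): [],  # flat internal network
}

def uncontrolled_paths(start, asset):
    """Return every path from a starting position to an asset that never
    crosses an edge carrying a control."""
    found, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == asset:
            found.append(path)
            continue
        for (src, dst), controls in edges.items():
            if src == node and not controls and dst not in path:
                queue.append(path + [dst])
    return found

# An agent starting on the internal network reaches the database
# without crossing a single control:
print(uncontrolled_paths("internal_network", "database"))
# [['internal_network', 'app_server', 'database']]
```

Real systems and real controls are messier than booleans on edges, but the shape of the question is the same, and a finding is exactly a path that comes back from a search like this.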
They draw in assets, threat agents and controls here, and while I'm not an advocate of including them in diagrams (it makes for a lot of complexity), using two diagrams lets you study the system, then look at a more in-depth version, which works nicely. Also, their definitions of threat agents are pretty interesting, for example, "unauthorized internal user." That definition says nothing of motivation or capabilities, just starting position and privileges. Compare and contrast that with a threat persona like "Sean “Keech” Purcell – Defacer." (Keech is one of the personas created by Aucsmith, Dixon, and Martin-Emerson.)
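To make the contrast concrete, here's a small sketch of the two styles side by side. The field names are mine, and the motivation and capabilities I fill in for Keech are placeholders rather than details from the persona card.

```python
from dataclasses import dataclass

@dataclass
class PositionalAgent:
    """Synopsys-style agent: just where they start and what they hold."""
    starting_position: str
    privileges: str

@dataclass
class Persona:
    """Persona-style agent: adds motivation and capability to the picture."""
    name: str
    archetype: str
    motivation: str      # the values below are placeholders, not persona details
    capabilities: str

unauthorized_internal_user = PositionalAgent(
    starting_position="inside the corporate network",
    privileges="no application account",
)

keech = Persona(
    name='Sean "Keech" Purcell',
    archetype="Defacer",
    motivation="notoriety",
    capabilities="commodity web exploitation",
)
```

The positional style is cheap to enumerate and maps naturally onto trust boundaries; personas carry more information when you want to reason about attacker motivation and persistence.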
Synopsys's step 3, along with their step 4, "interpret the threat model," answers the question "what can go wrong?" Here I do want to mildly critique their use of the word "the." There are at least four models in play in the threat modeling activity (system, assets, agents, and controls are all modeled). There's strength in thinking of threat modeling as a collection of activities. Calling any single artifact "the threat model" is both very common and needlessly restrictive.
Their step 5 is to "create a traceability matrix to record missing or weak controls." This is a fine input to the question that the readers of that matrix will ask, "what are we going to do about it?" Which happens to be my question 3. They have a somewhat complex analytic frame in which a threat agent targets an asset via an attack over a surface... Also interesting in the traceability matrix is the presentation of "user credentials" as an attack goal. I treat those as 'stepping stones,' rather than goals.

Also, in their discussion of the traceability matrix, we see nods to the role of experience: "it takes time and repetition to become proficient at [threat modeling]," and "With experience, you’ll be able to develop a simplified traceability matrix." These are very important points: how we threat model is not simply a function of our role, it's also a function of experience, and the ways in which we work through issues change as we gain experience. There's another trap in thinking that the ways that work for an experienced expert will work for a novice; conversely, the support tools that someone new to threat modeling may lean on will hinder the expert.
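Coming back to the matrix itself, here's a minimal sketch of what one row might capture and of the follow-on question its readers will ask. The field names, statuses and example rows are mine, not Synopsys's format.

```python
from dataclasses import dataclass

@dataclass
class TraceRow:
    """One illustrative traceability-matrix row: who attacks what, how,
    over which surface, and how well the relevant control holds up."""
    threat_agent: str    # e.g. "unauthorized internal user"
    attack_surface: str  # where the attack arrives
    attack: str          # what the agent does
    asset: str           # what they are after (or a stepping stone)
    control: str         # the control that should be in the path
    status: str          # "adequate", "weak", or "missing"

matrix = [
    TraceRow("unauthorized internal user", "admin interface",
             "credential stuffing", "user credentials",
             "rate limiting / lockout", "missing"),
    TraceRow("external attacker", "login page",
             "phishing", "user credentials",
             "multi-factor authentication", "weak"),
]

# "What are we going to do about it?" starts from the weak and missing rows.
for row in matrix:
    if row.status in ("weak", "missing"):
        print(f"{row.status.upper()}: {row.control} "
              f"({row.attack} by {row.threat_agent} against {row.asset})")
```

A simplified matrix, of the sort they suggest experienced modelers develop, might be nothing more than dropping the columns a given team never finds itself acting on.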
Lastly, they have no explicit analog to my question 4, "did we do a good job?" I believe that has nothing to do with different interests in quality, but rather that the threat model deliverable with their logo on it will go through stages of document preparation, review and critique, and so that quality check is an implicit one in their world.
To close, threat modeling shares the property, common in security, that secrecy makes it harder to learn. I have a small list of threat models to look at, and if you know of others that we can look at together, I would love to hear about them, or any other feedback you might have on what we can learn from this model.