
Diagrams in Threat Modeling

When I think about how to threat model well, one of the most important elements is how much people need to keep in their heads: the cognitive load, if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted,” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons I’m fond of diagrams is that they allow threat modelers to migrate information out of their heads and into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, the way knitting survives as a hobby, or the way a careful chef still prepares a fine meal by hand.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboard

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work, it may be a conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone involved.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling Tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?” (A sketch of this kind of enumeration follows this list.)
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and the filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed at poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
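
To make the “structure is an aid” point concrete, here is a minimal sketch of STRIDE-per-element enumeration, the kind of baseline generation a tool can automate. This is not the Microsoft tool’s actual engine, and the diagram elements are hypothetical; it only shows how a modeled DFD yields a starting checklist:

```python
from dataclasses import dataclass

# STRIDE threats conventionally applicable to each DFD element type.
# (Repudiation can also apply to data stores that hold logs.)
STRIDE_PER_ELEMENT = {
    "external": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "store": ["Tampering", "Information disclosure", "Denial of service"],
    "flow": ["Tampering", "Information disclosure", "Denial of service"],
}

@dataclass
class Element:
    name: str
    kind: str  # "external", "process", "store", or "flow"

def baseline_threats(elements):
    """Yield (element name, threat) pairs as a starting checklist."""
    for e in elements:
        for threat in STRIDE_PER_ELEMENT[e.kind]:
            yield e.name, threat

# Hypothetical DFD: a browser talking to a web app backed by a database.
dfd = [
    Element("Browser", "external"),
    Element("Web app", "process"),
    Element("Orders DB", "store"),
    Element("Browser -> Web app", "flow"),
    Element("Web app -> Orders DB", "flow"),
]

for name, threat in baseline_threats(dfd):
    print(f"{name}: {threat}")
```

Even this toy version turns a five-element diagram into seventeen starting questions, which is the “baseline set of threats to which you add detail.”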

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, they’re probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?

6 comments on "Diagrams in Threat Modeling"

  • Alun Jones says:

    As much as I love the MS Threat Modeling Tool (and thank you, Adam, for your part in creating it!) it has a tendency to cause developers to stop thinking about threats. They’ll create the Data Flow Diagram, complain bitterly about the threats generated (both the number of threats, and the individual threats they disagree with / misunderstand), but then they’ll skip the process of actually thinking about threats that aren’t automatically detected.

    • admin says:

      Yes, and you’re welcome! The way I think about this is ‘can/will they think about threats without the auto-generation?’ So in a sense, the autogen is a mitigation to the problem that they won’t find threats without it. If you have an alternate mitigation (say, training or review) then it’s less needed.

  • Pingback: New PM Articles for the Week of September 5 – 11 - The Practicing IT Project Manager [http://blog.practicingitpm.com/2016/09/11/new-pm-articles-week-september-5-11/ no longer works]
  • Stephen de Vries [http://iriusrisk.continuumsecurity.net/ no longer works] says:

    Hi Adam,

    Thanks for a comprehensive post on this topic! I would also stress that there aren’t just two approaches here; a third option is a combination of tooling and diagramming. Tooling (in particular, auto-generation or enumeration of threats through templates or checklists) can free up valuable time that we can then spend on the more interesting threats using diagramming and workshops.

    Another danger of tools that auto-generate threats is that the limitations of that generation are not taken into account further down the risk-management process. That is, a threat model is auto-generated without much thought, relying only on the tool, and just before release the risk owner asks, “Have we done a threat model?”, and the security team answers, “Yes.” When really, they should answer, “Yes, we’ve done a fully automated model without any further manual analysis,” so that the risk owner is fully informed about what exactly they’re accepting.

    An area where I think tooling is absolutely essential is in managing the risks after they’ve been identified. Have the countermeasures been implemented? If not, what is their progress? Have we tested the countermeasures and the vulnerabilities? Continuity from threat model generation through build and test is something that, in my experience, is very poorly managed.
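
    To make that concrete, here is a minimal sketch of that kind of tracking; the states, names, and threats (reusing the hypothetical DFD from the sketch above) are illustrative, not any particular tool’s schema:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Status(Enum):
        PROPOSED = "proposed"
        IMPLEMENTED = "implemented"
        TESTED = "tested"

    @dataclass
    class Countermeasure:
        threat: str        # the threat this mitigates
        description: str
        status: Status = Status.PROPOSED

    def not_yet_tested(plan):
        """What the risk owner needs to see before accepting a release."""
        return [c for c in plan if c.status is not Status.TESTED]

    plan = [
        Countermeasure("Tampering on Browser -> Web app", "Enforce TLS",
                       Status.TESTED),
        Countermeasure("Spoofing of Browser", "Require authentication",
                       Status.IMPLEMENTED),
    ]

    for c in not_yet_tested(plan):
        print(f"OPEN: {c.threat}: {c.description} ({c.status.value})")
    ```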

    I’ve written some more thoughts on this topic up on the Continuum Security blog here: https://www.continuumsecurity.net/2016/09/11/Scaling-Threat-Modeling-wtih-tools.html [link no longer works] which goes into more depth about the value proposition of tooling.

  • Brian Beyst says:

    I think many security professionals who are neck-deep in threat modeling are missing the point. The reason organizations develop a threat modeling initiative is NOT to come up with a better tool or process for making visual representations of systems. The point of threat modeling is to identify potential risks and contextually prioritize those risks based on the organization’s risk tolerance and ERM policy. If 65% of the cost associated with a compromised digital asset is associated with business impact, how does a better DFD from Microsoft’s TMT contribute to the amelioration of that impact?

    • admin says:

      Of course, the goal is to find the issues so you can address them appropriately. A better DFD makes it more likely you’ll discover the issue.
