Yet More On Threat Modeling: A Mini-Rant

Yesterday Adam responded to Alex’s question on what people thought about IanG’s claim that threat modeling fails in practice, and I wanted to reiterate what I said on Twitter about it:

It’s a tool! No one claimed it was a silver bullet!

Threat modeling is yet another input into an overall risk analysis. And you know what? Risk analysis/risk management, whatever you want to call it, won’t be perfect either. Threat modeling is itself a model. All models are broken. We’ll get better at it.

But claiming that something is a failure because it’s not perfect and doesn’t always work is one of the cardinal sins of infosec, from my perspective. Every time we do that, we do ourselves and our industry a disservice. Stop letting the perfect be the enemy of the useful.

7 comments on "Yet More On Threat Modeling: A Mini-Rant"

  • Greg Christopher says:

    I couldn’t agree more. I’m sure that for some organizations this bears more fruit than for others. I’ve never seen it NOT turn up something, though, and I’ve seen it expose absolutely major design issues.

    One of the least mentioned benefits is that it provides an opportunity for different engineers on the same team to actually understand how all the pieces work. Sometimes knowing that is very important to how you implement your own piece of the puzzle. We are very specialized and compartmentalized; threat modeling helps some there.

  • John says:

    @David and @adam,

    As my exposure to threat modeling has been somewhat limited, I will admit my prior comments were uninformed. I’ll concede that *some* threat modeling methodologies are effective tools, and specifically that Microsoft’s STRIDE approach does appear to be useful for identifying security issues.

    I maintain, however, that there is a core flaw in all threat modeling: brainstorming threats. I strongly believe that because of our cognitive errors in estimating risk, brainstorming threats is a mistake and will inevitably lead to guessing what the threats will be, guesses that are at best only slightly better than random chance.

    In other words, threat modeling can be helpful, but we need to find a better way, one that doesn’t require us to brainstorm. Imagining the threats begets imaginary threats.

    To Adam’s post: there is trouble with threat modeling, and I would argue that lack of experience and lack of a strong threat modeling methodology make the ‘imaginary threat’ problem worse, something I’ve experienced first-hand, most often with junior information security professionals.

  • Christoffer Strömblad says:

    @David: Amen brother, couldn’t have said it better myself.

    @John: No, you’re wrong about this. Brainstorming threats is exactly what makes it powerful. The problem is not your chosen way of discovering threats; it’s (a) thinking that the methodology is the solution and (b) not realizing that the people participating are absolutely key to the end result.

    It’s the collective competence that will provide the answers, that will allow the brainstorming methodology to become powerful. Would you argue that a chainsaw is useless because the three-year-old you gave it to can’t handle it? I doubt it. It’s about giving competent people the right tools, and brainstorming happens to be one of those good tools that, in the right hands, can be extremely powerful.

    You even touch on this problem yourself in your last sentence: junior information security professionals. They have neither the necessary experience nor the competence to use the tools appropriately. The tools are not to be blamed for a lack of skill in using them.

    TL;DR: The chosen methodology shouldn’t be blamed for poor results; the participants should be.

  • Adam says:

    Christoffer,

    A focus on brainstorming is a very narrow one. See the “Experiences” paper in my post.

    Claiming that you simply need more senior people is neither scalable nor a good use of the most senior people’s time.

  • Adam says:

    John,

    I love your point about cognitive errors, and would add that priming and attachment biases are likely present (it would make a fine study). A goal of methodologies should be to help re-ground threat modelers in the needed context and to overcome biases.

  • Sven Türpe says:

    John,

    There may be all kinds of cognitive errors, biases, and distortions. But at the same time, our brains are really good at building statistical models of the inputs they receive over time. Does it seem natural to you, too, to lock up a bicycle (or car, or office, or house) upon leaving it, but bizarre to do the same with a water bottle? Would you call my threat model biased if I told you that I fear bears more than dogs and dogs more than ducklings? Do you think it should raise a few eyebrows if I told you that I consider downloading malware from the Internet a more probable attack than somebody breaking into my apartment to install malware on my computer?

    Surely you’ll find counterexamples. However, could it be that our cognitive models are right more often than not as long as we receive undistorted information, and that cognitive biases and errors are the exception rather than the norm? I’m not familiar with the state of the art of psychological research in this respect, but my impression is that our gut feeling often leads us in the right direction.
