Shostack + Friends Blog


Threat Modeling and Logins, Redux

How to effectively threat model authentication.

[Header image: a clown checking ID]

Recently, I wrote about threat modeling and logins, and I want to expand on that post to talk about methodologies. Before I do, I want to say the crucial step is to consider “What can go wrong?” before implementing a defense, so that each defense is defending against a specific threat. (That implies that you need to go from consideration to keeping a list, and to making sure that the list is specific and clear.)
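To make “keeping a list” concrete, here is a minimal sketch; the threats and defenses shown are illustrative assumptions, not a list from the post. The idea is simply that every defense is recorded next to the specific threat it addresses, so a threat with no defense (or a defense with no threat) stands out in review.

```python
# A minimal, illustrative threat list: each defense is tied to the specific
# threat it addresses. The entries here are assumptions, not the post's list.
from dataclasses import dataclass, field


@dataclass
class Threat:
    description: str                     # keep this specific and clear
    defenses: list[str] = field(default_factory=list)


threats = [
    Threat("Attacker stuffs passwords reused from prior breaches",
           ["rate limiting", "breached-password checks"]),
    Threat("Attacker reads a one-time code from the victim's email",
           ["prefer an app- or hardware-based second factor"]),
    Threat("SMS code intercepted in telephone infrastructure"),  # no defense yet
]

# A threat with no defense is a gap worth discussing before shipping.
for t in threats:
    status = ", ".join(t.defenses) if t.defenses else "NO DEFENSE YET"
    print(f"- {t.description}: {status}")
```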

More, there are specific methodologies you can use to make these threats easier to notice. They include:

  • Message sequence diagrams: Authentication is a bet, and the factors that influence your bet can be drawn in a message sequence diagram. I’ve drawn this with a “user,” rather than a “customer,” because we don’t yet know it’s a customer, or which customer it is. If I were to redraw it, I might label them “visitor.” We can consider what can go wrong with each bit of information passed in a message (or inferred from one, like an IP address or geolocation). In the interest of both speed and focus, I’m not listing the many other elements that might be sent, such as user-agent, et cetera.
  • As an aside, I’m using a hand-drawn image to emphasize a point: your diagrams don’t have to be fancy. I sketched that in my notebook faster than I could open a web tool.

  • Information disclosure threats: It’s useful to consider these for the elements you factor into your decision, and for where those elements are exposed. There are a few trust boundaries which are highly relevant: the customer’s email, their compute environments (including mobile devices and desktops), and their browsers. Saying “an attacker can access this” leads you to consider whether sending more information into that boundary adds any security. In practice, most people stay logged into their email in their browser, so if they’re logging in with that browser, you should ask yourself: what are we gaining by sending them an email? This is why app- or hardware-based factors are so useful: they sit in their own boundary, and information disclosure requires tricking a person. (The sketch after this list works through a few of these elements and the boundaries they cross.)
  • You can ask “is that a system we trust?” as data flows through each system along the way. You don’t need to know anything about how those systems work: you can treat each one as a single unit and assume it’s been compromised. I say “where those elements are exposed” because text messages are exposed to a tremendous amount of telephone-system infrastructure, which remains notoriously insecure.
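Here is a minimal sketch that combines the two ideas above; the signal names and boundary names are my own assumptions, not taken from the hand-drawn diagram. It records which trust boundaries each login signal is exposed to, then asks the information-disclosure question: if we assume a given boundary is compromised, which signals does the attacker get?

```python
# Illustrative only: map each element that feeds the login decision to the
# trust boundaries it is exposed to, then assume a boundary is compromised
# and see which signals an attacker there could read or replay.

signals = {
    "password":                ["user's browser"],
    "source IP / geolocation": ["network path"],
    "user-agent":              ["user's browser"],
    "emailed code":            ["user's browser", "user's email account"],
    "SMS code":                ["telephone infrastructure", "user's phone"],
    "app/hardware factor":     ["dedicated device or app"],
}


def exposed_signals(compromised_boundary: str) -> list[str]:
    """Information-disclosure check: which signals leak if this boundary falls?"""
    return [s for s, boundaries in signals.items() if compromised_boundary in boundaries]


# Example: if the attacker already controls the browser session (where most
# people stay logged into their email), an emailed code adds little.
print(exposed_signals("user's browser"))
# -> ['password', 'user-agent', 'emailed code']
```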

On Mastodon, Erik van Straten added some insightful commentary about other threats, including that a customer won’t use the system, or that inexperienced people will get flustered. He also says: “ING *could* have made their app such that, if you tap ‘ING is calling me, what should I do?’ it would say: ‘ask the caller for their code, hang up the phone, tap [Call back ING] and enter the code’ (calling from the right app prevents the customer from being fooled with a wrong phone number, while information identifying the customer could be transferred as well, which complicates AitM-attacks).”
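As a thought experiment, here is a hypothetical sketch of the call-back flow van Straten describes; the function names and data model are mine, not ING’s. The point is that the bank, rather than the customer, ends up deciding whether the call was genuine, and calling back from inside the official app sidesteps spoofed phone numbers.

```python
# Hypothetical sketch of a "hang up and call us back" verification flow.
# Names and storage are illustrative assumptions, not ING's implementation.
import secrets

# One-time codes the bank has issued for genuine outbound calls, per customer.
outbound_call_codes: dict[str, str] = {}


def start_genuine_call(customer_id: str) -> str:
    """Bank side: before an agent calls a customer, register a one-time code
    the agent can read out if the customer asks for it."""
    code = f"{secrets.randbelow(10**6):06d}"
    outbound_call_codes[customer_id] = code
    return code


def verify_call_back(customer_id: str, code_from_caller: str) -> bool:
    """App side: the customer hangs up, taps [Call back ING] in the official
    app (so the number cannot be spoofed), and enters the code the caller
    gave. The bank, not the customer, judges whether the call was genuine."""
    return outbound_call_codes.pop(customer_id, None) == code_from_caller


# Example: a genuine call verifies; a scammer who invents a code does not.
issued = start_genuine_call("customer-42")
assert verify_call_back("customer-42", issued) is True
assert verify_call_back("customer-42", "123456") is False
```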

These ways of thinking and specific methodologies are magical. Not in the sense that you have to be a wizard, but in the power that they unlock for you. We routinely train people and organizations in how to make these threat modeling techniques work for them. If you’re interested, why not get in touch?

Image by midjourney: “an animated or anime or pixar officious clown carefully checking someones identity documents. They have an ID card in one hand and a checklist nearby. You can see the other person who looks exhausted, on the far side of a desk. The clown is bureaucratic, ineffective and slow. --ar 8:3” (Varied)