Shostack + Friends Blog


A Different Hackathon Design?

What should hackathon judges value?

The Threat Modeling Connect team has built a hackathon that’s gotten a lot of enthusiastic participation over the last few years. Today I want to discuss the design of that hackathon, talk about an effect of that design, and ask if we can do something different. None of this is intended as a critique of the organizers, participants, or judges.

Last year, the prompt/rules prioritized depth, and the winning entry was delivered in a spreadsheet with ten tabs (plus an intro/overview tab), including four entire sub-analyses of what could go wrong (STRIDE, LINDDUN, Plot4ai, and “OWASP/others”). The STRIDE tab alone runs 54 rows across ten columns.

That’s... impressive work to win a hackathon.

And I think we should think about that output, which may be seen as a model of the “best” way to threat model, or the best way to record output. I don’t think it is. There may be circumstances where a complex spreadsheet is called for, but I don’t find this one very usable. There are certainly circumstances where a whiteboard diagram and a discussion are better, and lead to much better architectural choices.

So I want to suggest that the community talk about what good judging criteria might be. I’ll offer up the following ideas. These are intended as additions, not a comprehensive list.

  • Originality (of form, possibly other elements of originality)
  • Comprehensibility
  • Time to review
  • Unique threats found (not in any other analysis)
  • Fraction of content that’s “actionable”

The last two are intended to be in tension: if you pile up low-value threats in order to claim unique ones, you’ll lose points on the actionability metric. That metric is simple: a judge reads the item and says “Yes, we’d do something about that.”

When I mentioned this to past judges, one of them brought up that participants really wanted information on specific technologies in use. I think that “use up-to-date software” and “configure it well” are things that can happen outside of threat modeling. (Maybe threat modeling can help say “X has better authentication options than Y,” but do you need threat modeling to do that? Maybe you do. And there’s also room for business-level threats — and there’s no way to find those except threat modeling.) In other words, I’d like to see more focus on what I’ve been calling inherent threats.

Update: Assume we’re working on a bike rental service. A technology-agnostic threat might be “Customer can upload a picture of a different bike to pretend their bike is damaged.” I need to deal with that somehow, but I don’t need to obsess over vulns in imagemagick.

Update 2: Conversations are happening at TM Connect and LinkedIn. I expect that conversations are now fragmented, and I’ll keep adding links. (I wish it were easier to have blog threads without trolls and spammers.)