Rich Mogull's Divine Assumptions

Our friend Rich Mogull has an interesting post up on his blog called “Always Assume”. In it, he offers that “assumption” is part of a normal scenario-building process, something that is fairly inescapable when making business decisions. And he lays out a simple, pragmatic process for working with assumptions, which is mainly scenario development, justification, and action. Rich’s process looks like this:

  1. Assumption
  2. Reasoning: The basis for the assumption.
  3. Indicators: Specific cues that indicate whether the assumption is accurate or if there’s a problem in that area.
  4. Controls: The security/recovery/safety controls to mitigate the issue.

Nothing earth-shattering here. And like much of Rich’s work, there is an elegance, almost a minimalism, to what he offers.
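If you wanted to capture his four steps as data, a minimal sketch might look like the following (the field names and the example are my own, not Rich’s):

    # A minimal sketch of one assumption record; field names are my own.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Assumption:
        statement: str                                        # 1. the assumption itself
        reasoning: str                                        # 2. the basis for the assumption
        indicators: List[str] = field(default_factory=list)   # 3. cues that it's holding or failing
        controls: List[str] = field(default_factory=list)     # 4. security/recovery/safety controls

    # A made-up example, not one of Rich's:
    web_tier = Assumption(
        statement="The web tier will eventually be compromised",
        reasoning="Internet-facing, big attack surface, history of app-layer bugs",
        indicators=["spike in WAF alerts", "unexpected outbound connections"],
        controls=["network segmentation", "egress filtering", "rapid rebuild from known-good images"],
    )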

JUST BECAUSE I CAN’T LEAVE WELL ENOUGH ALONE….

What immediately struck me was how similar Rich’s assumption process was to a little something I like to call “scientific method”. In scientific method, we essentially have (the following shamelessly paraphrased from Wikipedia):

  1. Question/Observation
  2. Hypothesis
  3. Prediction
  4. Experiment (testing)
  5. Analysis and iteration

So if we were to add to Rich’s assumption process above, we’d simply add the “experiment” bits up there. If we’re building controls in, like Rich’s examples in his blog post, we might try a “test” that “penetrates” those controls (or, as I believe Richard Bejtlich smartly tries to get us to say, perform “Adversary Simulation”).

Also, though it will probably sour his stomach a bit, we’d want to make Rich’s assumption steps a hamster-wheel-of-pain(TM) by pointing out that every so often the threat landscape will change, challenging our assumptions/conclusions/hypotheses, and so re-testing is necessary.
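Sketching that wheel on top of the Assumption record above (the test harness here is just a stub; in practice it would be whatever pen test or adversary simulation you actually run):

    # One turn of the hamster wheel: re-test every assumption and return the
    # ones whose controls didn't hold. run_test is a placeholder for your
    # adversary simulation / penetration test harness.
    from typing import Callable, List

    def revalidate(assumptions: List[Assumption],
                   run_test: Callable[[Assumption], bool]) -> List[Assumption]:
        return [a for a in assumptions if not run_test(a)]

    # Schedule this quarterly, or whenever the threat landscape shifts, and feed
    # every failure back into the Reasoning and Controls steps of the record.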

IF I HAD ANY INDICATION…

Rich does have a certain “informality” around his evidence, the “Indicators” step, that I’d like to build upon. Let me offer that when discussing probability of failure in a complex IT system, there are only four basic categories of indicators we need to consider in Information Assurance/Security/Risk Management/Protection/Whatever. There might be evidence around:

  • Assets (the things we want to protect and their state)
  • Threats (the things that want to harm our assets and their state)
  • Controls (the things that resist the threats and their state)
  • Impacts (the things that will happen if we are unable to resist the threat)

And if you’re going to look for clues to suggest whether there might be a problem, look no further than these basic categories for evidence. If you’d like, you can build structure around what “state” means for each category and further develop taxonomies and metrics and whatnot. Those are the fun bits, and I’ll let you be creative rather than write too much this morning.
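By way of illustration, here is one strawman way to hang structure off those four categories (the enum and the “state” field are my invention, not a standard taxonomy):

    # A strawman taxonomy: every indicator belongs to one of the four categories,
    # and "state" is free text here; you could make it as structured as you like.
    from enum import Enum
    from dataclasses import dataclass

    class IndicatorCategory(Enum):
        ASSET = "asset"        # the things we want to protect
        THREAT = "threat"      # the things that want to harm our assets
        CONTROL = "control"    # the things that resist the threats
        IMPACT = "impact"      # what happens if we can't resist the threat

    @dataclass
    class Indicator:
        category: IndicatorCategory
        description: str
        state: str

    # A made-up example:
    vpn_exploit = Indicator(
        category=IndicatorCategory.THREAT,
        description="Public exploit released for our VPN appliance",
        state="actively exploited in the wild",
    )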

Note that where these categories, as applied to assumptions, may break down is in discussing management capabilities (are we operating well enough, and so forth). Rich’s assumptive process (must.resist.urge.to.make.acronym – RAP) can certainly be used here; I’m just not sure there wouldn’t be a better taxonomy of indicators.