Indicators of Impact — Ground Truth for Breach Impact Estimation

Ice bag might be a good ‘Indicator of Impact’ for a night of excess.

One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly.  What if we had solid evidence to use in breach impact estimation?  This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.

The idea is premised on the view that breach impact is best measured by the costs or resources associated with the response, recovery, and restoration actions taken by the affected stakeholders.  These activities can include both routine incident response and rarer activities.  (See our paper [link to http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2233075 no longer works] for more.)  This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p. 23 of our paper) [link to http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2233075 no longer works]:

An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.
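
To make “branching activity models” concrete: as I read it, the model is a tree of possible response and recovery activities, and impact is the cost accumulated along whichever branches actually occur. Here is a minimal sketch in Python; the Activity class, the tree structure, and every number are invented for illustration, not taken from the paper:

    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        """A response/recovery activity that may trigger follow-on activities."""
        name: str
        probability: float   # chance the activity occurs, given its parent occurred
        cost: float          # direct cost if it occurs (illustrative units)
        children: list = field(default_factory=list)

    def expected_impact(activity, reach=1.0):
        """Expected cost of an activity subtree, assuming independent branching."""
        reach *= activity.probability
        return reach * activity.cost + sum(
            expected_impact(child, reach) for child in activity.children
        )

    # All structure and numbers below are invented, purely to show the shape:
    model = Activity("incident response", 1.0, 50_000, children=[
        Activity("forensic investigation", 0.6, 200_000),
        Activity("customer notification", 0.3, 150_000, children=[
            Activity("credit monitoring", 0.8, 400_000),
        ]),
    ])
    print(expected_impact(model))  # 311000.0 = 50k + 120k + 45k + 96k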

Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise”, which are basically technical traces associated with a breach event.  There’s a community supporting an open exchange format, OpenIoC.  The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security.  In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.

From Appendix B, here are a few examples:

  • Was there a forensic investigation, above and beyond what your organization would normally do?
  • Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
  • Was any significant business process or function disrupted for a significant amount of time?
  • Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
  • Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?

The list goes on for three pages in Appendix B, but we fully expect it to grow much longer as we gain experience and other people start participating.  For example, there will be indicators that only apply to certain industries or organization types.  In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.
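
To show how answers to questions like those above might be captured as evidence rather than prose, here is a minimal sketch of a structured record; the IndicatorOfImpact class and all of its field names are my own hypothetical choices, not a proposed standard:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class IndicatorOfImpact:
        """One observed (or observably absent) response/recovery activity."""
        indicator_id: str     # e.g. "forensic-investigation"
        description: str      # the Appendix B question, or similar
        observed: bool        # was the activity actually observed?
        source: str           # who or what attests to the observation
        observed_on: Optional[date] = None
        notes: str = ""

    # A hypothetical record for the first question above:
    example = IndicatorOfImpact(
        indicator_id="forensic-investigation",
        description="Forensic investigation beyond normal practice",
        observed=True,
        source="incident report",   # illustrative only
        observed_on=date(2013, 3, 1),
    )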

As signals, Indicators of Impact are not perfect, nor do they individually provide sufficient evidence.  However, they have the great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events.  In other words, they provide a ‘ground truth’ that has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and combined using appropriate inference and reasoning methods (e.g., see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.
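
As a toy illustration of what “assembled as a mass of evidence” might look like, here is a naive-Bayes-style sketch: each observed indicator contributes a likelihood ratio toward the hypothesis that the breach had major impact. The independence assumption and every number are my own simplifications, not the inference methods the book develops:

    import math

    # Hypothetical likelihood ratios: P(observed | major impact) divided by
    # P(observed | minor impact). These numbers are invented for illustration,
    # not estimated from any data.
    LIKELIHOOD_RATIOS = {
        "forensic-investigation": 3.0,
        "executive-escalation": 4.0,
        "process-disruption": 6.0,
        "contractual-penalties": 8.0,
    }

    def posterior_major_impact(observed_indicators, prior=0.5):
        """Combine indicators under a naive independence assumption."""
        log_odds = math.log(prior / (1.0 - prior))
        for indicator in observed_indicators:
            log_odds += math.log(LIKELIHOOD_RATIOS[indicator])
        return 1.0 / (1.0 + math.exp(-log_odds))

    print(posterior_major_impact(["forensic-investigation", "executive-escalation"]))
    # ~0.923 with these invented numbers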

There are also applications beyond breach impact estimation.  For example, they could be used in resilience planning and preparation.  They could also be used as part of information sharing in critical infrastructure, to provide context for other information regarding threats, attacks, etc. (See this video [link to http://www.shmoocon.org/2013/videos/Shmoocon%202013%20-%20Is%20Practical%20Info%20Sharing%20Possible.mp4 no longer works] of a Shmoocon session for a great panel discussion of the challenges and opportunities in information sharing.)

Fairly soon, it would be good to define a lightweight standard format for Indicators of Impact, possibly as an extension to VERIS [link to http://www.veriscommunity.net/doku.php no longer works].  I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework.  There’s a public meeting April 3rd, and I might fly out for it.  Either way, I will submit a response to the NIST RFI.
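
As a strawman for what such a format could look like, here is a single record written as a Python dict (VERIS itself is JSON); the indicators_of_impact key and everything inside it are hypothetical, not actual VERIS fields:

    import json

    # Strawman: an "indicators_of_impact" section hanging off a VERIS-style
    # incident record. The key and all fields below are hypothetical, not
    # part of the actual VERIS schema.
    incident = {
        "incident_id": "example-2013-001",
        "indicators_of_impact": [
            {
                "indicator": "executive-escalation",
                "observed": True,
                "evidence": "board minutes",        # illustrative
                "confidence": "medium",
            },
            {
                "indicator": "contractual-penalties",
                "observed": False,
                "evidence": "quarterly filing",     # illustrative
                "confidence": "low",
            },
        ],
    }

    print(json.dumps(incident, indent=2))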

Your thoughts and comments?

7 comments on "Indicators of Impact — Ground Truth for Breach Impact Estimation"

  • Well, I read this and looked at the paper. It MAY work, but I’d rather see a set of complete examples that lead to an impact estimation. The risk I see is that the pile of IoIs you collect will, in many cases, still not be sufficient for a reliable guess at the actual impact.

  • Are you saying that only when the evidence is imperfect does it facilitate reliable model estimation?

  • Russell says:

    @bangladoredatacom — I certainly didn’t mean to imply that *only* imperfect evidence facilitates reliable model estimation. Sorry if I wasn’t more clear.

    What I meant to say was that model estimation is feasible even when the evidence is imperfect or incomplete. If the evidence is complete and perfect, then model estimation should be much easier.

    All this will have to be demonstrated through cases and examples. Right now I’m just speculating because the cases are still in progress.
