Doing threat intelligence right
From a great article by Robert Jervis, professor of international politics at Columbia University:
The problem isn’t usually – or at least isn’t only – too little information, but too much, most of it ambiguous, contradictory, or misleading. The blackboard is filled with dots, many of them false, and they can be connected in innumerable ways. Only with hindsight does the correct pattern leap out at us, and to fix what “broke” the last time around only guarantees you have solved yesterday’s problem.
Far more important, and useful, is to address the flaws in how we interpret and use the intelligence that we already gather. Intelligence analysts are human beings, and many of their failures follow from intuitive ways of thinking that, while allowing the human mind to cut through reams of confusing information, often end up misleading us. This isn’t a problem that occurs only with spying. It is central to how we make sense of our everyday lives, and how we reach decisions based on the imperfect information we have in our hands. And the best way to fix it is to craft policies, institutions, and analytical habits that can compensate for our very understandable flaws.
[…]
The first and most important tendency is that our minds are prone to see patterns and meaning in our world quite quickly, and then tend to ignore information that might disprove them. Premature cognitive closure, to use the phrase employed by psychologists, lies behind many intelligence failures.
[…]
Second, people pay more attention to visible information than to information generated by an absence. In a famous Arthur Conan Doyle story, it took the extraordinary skill of Sherlock Holmes to see that an important clue in the case was a dog not barking. The equivalent, in the intelligence world, is information that should be there but is not.
[…]
Third, conclusions often rest on assumptions that are not readily testable, and may even be immune to disproof.
I’ll add a fourth — ignoring threat intelligence altogether or treating it as taboo. This may take several forms: “it’s beyond our control”, “we don’t have good data”, “it’s too hard to quantify”, “we aren’t paid for guesswork”, “we rely on vendors for that”, “everybody knows what the threats are”, “if we bring it up, we will get too many questions we can’t answer”, or other excuses. (See Josh Corman’s post [link to http://www.the451group.com/report_view/report_view.php?entity_id=60884 no longer works] on the folly of relying on security vendors for your threat intelligence. Vendors only have an incentive to inform you about threats they can mitigate.)
If you want a good methodology for threat intelligence, look at Intel’s Threat Agent Risk Assessment (TARA) [link to http://download.intel.com/it/pdf/Prioritizing_Info_Security_Risks_with_TARA.pdf no longer works]. It was adapted for use by the Information Technology Sector Coordinating Council in its risk assessment for critical IT industry infrastructure.
As good as it is, it could be even better with some systematic methods to actively seek out contradictory information and contrary hypotheses about threats. One simple way to do this is to create a “Mental Model Red Team” whose primary job is to disprove everything you think you know, or at least to generate and validate contrary hypotheses. (For social and cultural reasons, you should probably rotate your staff through this team rather than keeping the team membership fixed.) Formal methods exist, including “Analysis of Competing Hypotheses” (slides [link to http://www.au.af.mil/au/awc/awcgate/ccrp/2006iccrts_countering_decep_slides.pdf no longer works]). (I’m in the process of evaluating a tool for this called SHEBA. [link to http://web.me.com/skjpope/sheba/ no longer works] I hope to have a demo ready for Mini-metricon, something like this [link to http://files.me.com/skjpope/o4f0te.mov no longer works].) Another possible method is prediction markets, but I’ve never seen them used for this purpose.
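To make the ACH idea concrete, here is a toy sketch of the basic bookkeeping: rate each piece of evidence as consistent, inconsistent, or neutral against each competing hypothesis, then rank hypotheses by how much evidence contradicts them rather than by how much supports them. The hypotheses, evidence items, and ratings below are made up for illustration; they are not from SHEBA or any other tool.

```python
# Toy sketch of Analysis of Competing Hypotheses (ACH) bookkeeping.
# Hypotheses, evidence items, and ratings are purely illustrative.
#   "C" = consistent, "I" = inconsistent, "N" = neutral/ambiguous

matrix = {
    "H1: insider exfiltrating IP": {
        "unusual after-hours database exports": "C",
        "no matching outbound network traffic": "I",
        "employee recently passed over for promotion": "C",
    },
    "H2: external actor using stolen credentials": {
        "unusual after-hours database exports": "C",
        "no matching outbound network traffic": "I",
        "employee recently passed over for promotion": "N",
    },
    "H3: misconfigured backup job": {
        "unusual after-hours database exports": "C",
        "no matching outbound network traffic": "C",
        "employee recently passed over for promotion": "N",
    },
}

# ACH emphasizes disconfirmation: the "surviving" hypothesis is the one
# with the least inconsistent evidence, not the most supporting evidence.
def inconsistency(ratings):
    return sum(1 for r in ratings.values() if r == "I")

for hypothesis, ratings in sorted(matrix.items(), key=lambda kv: inconsistency(kv[1])):
    print(f"{inconsistency(ratings)} inconsistent item(s): {hypothesis}")
```

The point of writing it down, even this crudely, is that the matrix makes it obvious which evidence you should go collect next: whatever would discriminate between the hypotheses that are still standing.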
What makes you think the TARA model is good? I found it a little sparse on key details.
Yes, it’s sparse on some details. I suspect the public document doesn’t disclose the full method.
What I like about it is that it seems very well designed for the intended purpose and the intended users. Matt and his team are very pragmatic, so I would expect them to get this right.
As I understand it, the intended purpose is to get decision-makers to think about their security programs and policies in new ways — “out of the box” (yes, a tired cliche, but it fits here). By enumerating threat agents, their objectives, their preferred methods, and so on, decision-makers and designers can look at their systems from the attacker’s point of view.
The other intended purpose is prioritization — which threats and threat agents really deserve our attention? TARA helps pool knowledge of threat agents across Intel so that each group doesn’t need to start from scratch, and each group can act as a peer influence on the others — a form of “collective wisdom”.
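To make that concrete, here is a toy sketch of what “enumerate threat agents and prioritize them” might look like. The agent entries, attributes, and scoring below are my own placeholders, not Intel’s actual threat agent library or TARA’s scoring, which the public paper doesn’t fully spell out.

```python
# Illustrative only: a toy threat agent "library" and a naive prioritization.
# Attribute names, agents, and scores are placeholders, not TARA's.

threat_agents = [
    {"name": "disgruntled insider", "objective": "revenge / sabotage",
     "preferred_methods": ["privilege abuse", "data destruction"],
     "capability": 2, "motivation": 3, "exposure": 3},
    {"name": "organized crime", "objective": "financial gain",
     "preferred_methods": ["phishing", "credential theft"],
     "capability": 3, "motivation": 3, "exposure": 2},
    {"name": "hacktivist", "objective": "publicity / disruption",
     "preferred_methods": ["DDoS", "defacement"],
     "capability": 1, "motivation": 2, "exposure": 2},
]

def priority(agent):
    # Naive score: capable, motivated agents with exposure to assets we
    # care about float to the top of the discussion.
    return agent["capability"] * agent["motivation"] * agent["exposure"]

for agent in sorted(threat_agents, key=priority, reverse=True):
    print(f'{priority(agent):2d}  {agent["name"]:22s}  {agent["objective"]}')
```

Even a crude ranking like this forces the conversation the method is after: whose point of view are we designing against, and why?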
The intended users are *not* risk intelligence specialists, so it needs to be simple and it needs to feel right.
It also provides transparency into why the team did or did not drive decisions based on certain threats and scenarios.
There are a few things missing. First, there is no mention of actually collecting data, internally or externally, to calibrate the relative “risk” of each threat agent in each business situation or unit. You’d really want a data feed from forensic investigations, log analysis, or other sources that might inform the relative likelihood of threat agent actions. This is especially important for tracking strategic changes and innovations in threat agents, in particular new synergies, collaborations, or outright integration of previously separate threat agents.
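One simple way to do that calibration (my illustration, not something in the TARA paper) is to treat attributed incidents from forensic investigations and log analysis as counts and smooth them with a prior, so that agents you haven’t caught recently don’t get assigned zero likelihood:

```python
# Toy calibration of relative threat agent likelihood from incident data.
# Prior pseudo-counts and incident counts are made-up numbers; the smoothing
# keeps rarely-observed agents from being assigned zero likelihood.

prior = {"disgruntled insider": 2.0, "organized crime": 4.0, "hacktivist": 1.0}

# Attributions from forensic investigations / log analysis this quarter.
observed = {"disgruntled insider": 1, "organized crime": 7, "hacktivist": 0}

total = sum(prior.values()) + sum(observed.values())
for agent in prior:
    likelihood = (prior[agent] + observed[agent]) / total
    print(f"{agent}: relative likelihood ~ {likelihood:.2f}")
```

Re-run those numbers every quarter and you also get the trend data you need to notice when previously separate threat agents start behaving like one.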
Second, I wouldn’t call TARA a full-blown threat intelligence system because it doesn’t have any explicit methods for incorporating new information and revising threat models in real time. This level of sophistication may or may not be necessary at Intel, but it would certainly be necessary at any “critical infrastructure” or “national defense” organization.
Third, combining the first two, you’d want systematic methods for learning and reasoning about uncertainty. This would tell you where you need to invest to learn, whether through more data collection, controlled experiments, information sharing with other organizations, or other methods. All of these are costly and would require some explicit justification.
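As a sketch of what that explicit justification could look like (the scenario, probabilities, and dollar figures are made up; none of this is in the TARA paper), here is a toy expected-value-of-perfect-information calculation for a single threat scenario:

```python
# Toy value-of-information calculation for one threat scenario.
# All numbers are illustrative assumptions.

p = 0.3            # current estimate that the threat materializes this year
loss = 500_000     # loss if it materializes and we are unprotected
control = 100_000  # cost of a control assumed to fully prevent the loss

# Best expected cost acting only on current beliefs: deploy the control
# (pay its cost for sure) or don't (eat the expected loss).
cost_now = min(control, p * loss)

# With perfect foresight we would only pay for the control in the worlds
# where the threat actually materializes.
cost_with_perfect_info = p * min(control, loss)

evpi = cost_now - cost_with_perfect_info
print(f"Expected value of perfect information: ${evpi:,.0f}")
```

No real study buys perfect information, so this figure is an upper bound: any data collection, experiment, or information-sharing effort that costs more than it cannot be justified by this one decision.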