Bejtlich gets it: It's about empiricism
When he mentioned my post, he cited a new paper titled A Case of Mistaken Identity? News Accounts of Hacker and Organizational Responsibility for Compromised Digital Records, 1980–2006, by Phil Howard and Kris Erickson. Adam highlighted this excerpt:
60 percent of the incidents involved organizational mismanagement
as a way to question my assertion that insiders account for fewer intrusions than outsiders.
At the outset let me repeat how my favorite Kennedy School of Government professor, Phil Zelikow, would address this issue. He would say, “That’s an empirical question.” Exactly — if we had the right data we could know if insiders or outsiders cause more intrusions. I would argue that projects like the Month of 0wned Corporations give plenty of data supporting my external hypothesis, but let’s take a look at what the Howard/Erickson paper actually says.
I think Richard’s analysis (“Exaggerated Insider Threats”) is spot on, and I admit to twisting Howard and Erickson’s words a little to make a point. Security is all about the empirical questions. Answering them involves having data, having collection methodologies, and having conversations and debates about their validity. As I say in the PDF version of the talk:
We can use data to answer questions, like what fraction of incidents are caused by insiders? This has long been contentious, but if we can agree on what an incident is, what an insider is, and what cause is, we can learn something.
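To make that definitional point concrete, here is a minimal sketch (hypothetical Python; the incident records and the is_insider predicate are mine, not Howard and Erickson's) showing how the reported "insider fraction" moves when the agreed definition of insider moves:

```python
# Hypothetical sketch: the "insider fraction" you report depends on the
# definitions you agree on before counting. Records and predicate are made up.
from dataclasses import dataclass

@dataclass
class Incident:
    actor: str   # e.g. "employee", "contractor", "external"
    cause: str   # e.g. "theft", "misconfiguration", "lost laptop"

def is_insider(incident: Incident, count_contractors: bool = True) -> bool:
    """One possible definition of insider; change it and the answer changes."""
    insiders = {"employee"} | ({"contractor"} if count_contractors else set())
    return incident.actor in insiders

def insider_fraction(incidents: list[Incident], **kwargs) -> float:
    return sum(is_insider(i, **kwargs) for i in incidents) / len(incidents)

# Made-up data, for illustration only -- not from the Howard/Erickson paper.
sample = [
    Incident("employee", "misconfiguration"),
    Incident("external", "theft"),
    Incident("contractor", "lost laptop"),
    Incident("external", "theft"),
]
print(insider_fraction(sample))                           # 0.5
print(insider_fraction(sample, count_contractors=False))  # 0.25
```

The only point of the sketch is that the number is a function of the definitions negotiated first; the same records yield different answers under different, equally defensible predicates.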
One question for Richard. You write:
In brief, this report defends the insider threat hypothesis only in name, and really only when you cloak it in “organizational ineptitude” rather than dedicated insiders out to do the company intentional harm.
Why should I care about motives? Shouldn’t I focus first on the insider/outsider question, then on the methodology, and only then on the motives?
Are you assuming that you should care about insider/outsider because of the techniques they might use, the potential access points on the network, or some enhanced knowledge they might have of systems, data, or information?
Pete,
It’s about being able to settle this debate. How the data gets used is secondary.
Self-inflicted wounds can be the most deadly, unfortunately. There’s always a balancing act between convenience and security/privacy, and usually convenience wins (at least in business and academia). Until security is built in by design, rather than added as an afterthought, gaffes that lead to compromises and data breaches will continue. Network security will evolve one way or another. Perhaps users will become smarter and more security-savvy. Who knows what the future holds?
Imagine you have nailed the insider/outsider question, solid. Say, 67.4% versus 32.1% and some spillage.
So what?
Generally, all questions reduce to money: where do I lose money? Only in the context of how much money is lost to each threat does any question make sense.
I would suggest that the money lost per American PII set compromised by an external hacker is infinitesimal … and the loss from insider compromises is much higher. (But I have no data to back that up!)
(The money lost for a European compromise of PII by an external hacker is much higher again, so the analysis changes depending on where you are.)
Iang:
What is the reasoning behind: “The money lost for a European compromise of PII by an external hacker is much higher again”?
I forget the precise details, but as it was explained to me, under the European Data Directive it’s either a 25k euro or a 50k euro fine for each individual’s PII that is lost.
In simple terms: lose your database, file for bankruptcy.
(I should check the details though…)
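For scale, here is a back-of-the-envelope sketch of why "lose your database, file for bankruptcy" follows if those (unverified) per-record figures are anywhere near right; the breach size is hypothetical:

```python
# Rough arithmetic only; the 25k euro per-record fine is the (unverified)
# figure quoted above, and the breach size is hypothetical.
FINE_PER_RECORD_EUR = 25_000   # lower of the two quoted figures
records_lost = 100_000         # a modest customer database

total_fine = FINE_PER_RECORD_EUR * records_lost
print(f"Potential exposure: EUR {total_fine:,}")  # EUR 2,500,000,000
```

Even at the lower quoted figure and a modest breach, the exposure runs to billions of euros, which is the point of the bankruptcy quip.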
On digging a hole for oneself with data: here’s some commentary on costs at a per-record level. Snippets on FC, of course.