Rational Ignorance: The Users' View of Security
Cormac Herley at Microsoft Research has done us all a favor and released a paper, "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users," which opens its abstract with:
It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective.
And you know it’s going to be good when they write:
Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.
People are not stupid. They make what we, as relative experts on the topic of security, perceive to be bad decisions, but this paper argues that their behavior is rational.
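To make the paper's arithmetic concrete, here's a back-of-the-envelope sketch (the 0.01% victimization rate is the paper's figure quoted above; the ten-seconds-a-day effort figure is purely my own illustrative assumption):

    # Break-even test for a piece of daily security advice.
    # 0.01% annual victimization is the rate quoted from the paper;
    # 10 seconds/day of user effort is an illustrative assumption.
    seconds_per_day = 10
    victim_rate = 0.0001  # 0.01% of users victimized per year

    # Annual time each user spends following the advice, in hours.
    hours_per_user = seconds_per_day * 365 / 3600  # ~1 hour/year

    # To break even, the harm avoided per victim must exceed the whole
    # population's time cost divided among the victims.
    break_even = hours_per_user / victim_rate

    print(f"{hours_per_user:.2f} hours/user/year of effort")
    print(f"break-even: {break_even:,.0f} hours of avoided harm per victim")
    # ~10,139 hours, roughly five working years per victim, which no
    # phishing incident plausibly costs.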
[W]e argue for a third view, which is that users’ rejection of the security advice they receive is entirely rational from an economic viewpoint. The advice offers to shield them from the direct costs of attacks, but burdens them with increased indirect costs, or externalities. Since the direct costs are generally small relative to the indirect ones they reject this bargain. Since victimization is rare, and imposes a one-time cost, while security advice applies to everyone and is an ongoing cost, the burden ends up being larger than that caused by the ill it addresses.
The paper provides both a good and accessible overview of externalities and rational behavior using spam as an example.
For example, Kanich et al. [32] document a campaign of 350 million spam messages sent for $2731 worth of sales made. If 1% of the spam made it into in-boxes, and each message in an inbox absorbed 2 seconds of the recipient’s time, this represents 1944 hours of user time wasted, or $28188 at twice the US minimum wage of $7.25 per hour.
Coincidentally, we get a little over 300 million spam messages into our corporate email gateways every month, which means I can compare the cost-per-delete-click (at twice $7.25/hour) against the cost of our corporate spam-filtering contract without having to do any real math. We pay about $50,000/month for filtering, which means we’re getting a pretty good deal, since our white-collar employees cost over $14/hour.
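Spelled out, for anyone who does want to see the math (the two-seconds-per-message figure is the paper's; I'm assuming, pessimistically, that every message reaching the gateway would otherwise land in an inbox):

    # Delete-it-yourself vs. the filtering contract, using the numbers
    # in this post: ~300M messages/month at the gateways, ~$50,000/month
    # for filtering, employees costing over $14/hour. The 2 seconds per
    # message comes from the paper's Kanich et al. example.
    messages_per_month = 300_000_000
    seconds_per_message = 2
    hourly_cost = 14.0        # conservative loaded cost per employee-hour
    filter_contract = 50_000  # $/month

    hours_wasted = messages_per_month * seconds_per_message / 3600
    delete_cost = hours_wasted * hourly_cost

    print(f"{hours_wasted:,.0f} hours/month spent deleting")
    print(f"${delete_cost:,.0f}/month in wages vs ${filter_contract:,} for filtering")
    # ~166,667 hours and ~$2.3M/month in delete-clicks against $50K for
    # the filter: a ~47x win before even counting attention disruption.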
That’s just the time spent seeing and deleting the message, don’t forget. Fourteen dollars per hour completely ignores the cost of attention disruption (much more than two seconds) and the Direct Losses, either because I cannot quantify them, which makes the entire argument appear specious in the eyes of Senior Leadership, or because I am not at liberty to disclose enough detail to pass the “cannot quantify” test.
They then go on to document, in fairly accessible models, why password complexity, anti-phishing awareness, and SSL errors are cost-inefficient, and get into a favorite topic of mine: the difficulty of defining security losses, or the benefit of adding safeguards, at the end-user level. This section should be mandatory reading for any security person who attempts to talk to non-security people about the topic, i.e. all of us.
What’s missing from the paper, though, is the next logical step of analysis: the appropriate risk management strategy in response to the information presented. Hopefully that will be the follow-on paper, because as it stands it felt like a bit of a cliffhanger to me. All of the discussion assumes that mitigation is the only option. That may feel right from a Security perspective, but it’s probably not the correct risk management decision.
To manage the risk in these cases, though, I see a strong argument for risk transfer. High-Impact, Low-Likelihood events are best managed by aggregating the risk into a pool and spreading the cost across the pool, i.e. buying insurance against these losses. If you could buy anti-phishing insurance for $1/person/year (which, realistically, is multiples of what it could cost if 200 million people all bought in) rather than throwing large, uncoordinated piles of money at ineffective awareness training or technical countermeasures which will probably be out-innovated by the attackers in hours or days, why wouldn’t you?
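A quick sanity check on the pool economics (the $1 premium and 200 million participants come from the paragraph above, and the 0.01% victimization rate is the paper's; the $1,000 average loss is purely an illustrative assumption):

    # Toy model of an anti-phishing insurance pool.
    # $1/person/year and 200M participants are from this post; 0.01%
    # annual victimization is the paper's figure; the $1,000 average
    # direct loss per victim is an illustrative assumption.
    participants = 200_000_000
    premium = 1.00        # $/person/year
    victim_rate = 0.0001  # 0.01% victimized per year
    avg_loss = 1_000      # assumed average direct loss per victim, $

    pool = participants * premium
    expected_payout = participants * victim_rate * avg_loss

    print(f"pool: ${pool:,.0f}, expected payouts: ${expected_payout:,.0f}")
    # $200M collected against $20M in expected payouts: a 10x cushion
    # for administration and tail risk, which is why the real premium
    # could plausibly be well under $1.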
Why have anti-virus vendors not thought of this? If your AV vendor said they would also insure you against Direct Losses (having your bank account cleaned out) for your $50/year subscription, would that differentiate them enough to win your business?
By all means, we should continue to work on the challenges of improving the security experience and reducing the risk of using computers. More to the point, though, we should be reducing how much of that experience users must endure at all in order to secure their information and transactions.
Isn’t FDIC insurance essentially anti-phishing insurance? This type of insurance is also built into credit cards. Which leads to users being more careless with their credit card data, which leads to increased costs for the merchants, which leads them to complain about fraud, which leads the credit card companies to throw large amounts of cash at counter-measures. Right?
It will be interesting to see the differences in responses to attacks against insured or regulated consumer accounts and uninsured accounts such as corporate banking or line-of-credit accounts. Attacks against the latter have already produced lawsuits.
This paper looks very interesting. I have stored it for upload to my Kindle, which is rapidly becoming like a folder of unread browser tabs.
@Nick
Sort of…
The FDIC transfers the risk of bank failure from the bank customer to the FDIC, so it is, as the name indicates (“Federal Deposit Insurance Corp.”), absolutely a risk transfer via insurance. The FDIC was created (as part of Glass-Steagall), after all, to restore public confidence in the banking system following the bank crashes of the Great Depression.
Credit Card Liability, on the other hand, is regulation in the most conservative definition of the term (“the purpose of regulation is to ensure that externalities are placed back onto their creators.”). Credit Card liability limits ensure that the card issuers, merchants, and processors have solid financial motivation to exercise a proper standard of Due Care, even in the face of potentially cost-ineffective security measures.
In the UK, for example, so long as account holders were liable for fraud committed against them via ATMs, the banks and ATM network operators saw no need to act, since they had no Direct Losses. It was not until the law changed, insulating the consumer from losses and placing responsibility on the operators for ensuring proper security of the ATM network, that fraud was reduced.
@Chandler,
Thanks for the flattering comments, glad that someone finds this analysis interesting. I completely agree that the full spectrum of alternatives to technical training, such as risk management and pooling, are very promising. This paper was just an initial stab at pointing out that the cost-benefit tradeoff for users is off, and if we want that to change we have to offer something different.
@Nick
In the US, consumers are protected by the Federal Reserve’s Regulation E. This appears to cover transfers except those made by check or credit card. Consumer liability is $50, but the reporting requirements appear tighter than for credit card protection.
http://www.fdic.gov/regulations/laws/rules/6500-3100.html#fdictail
@Cormac
I’m happy to see reasoned analysis along with easily-comprehended models that may help the profession understand that “Security” is not an inherent good and thus worth any cost. Only when Security Types recognize that fact at a cultural level will the profession be able to mature and actually produce net benefit.
Chandler, nice post and I need to read the paper.
I am starting to think that risk management, in my experience, is focusing on the wrong end of the process – on risk identification and assessment. It seems that, given we have risks and some possible actions, the important skill comes in the cost-benefit analysis of potential actions and non-actions. If you read environmental risk assessments or impact studies, finding risks is no problem. The real issue is the policy or decision to be taken among competing alternatives, and why.
And CBA is a big area that is absent from IT Security from what I can see.
rgs Luke
…as if getting the cooperation of users and Management for simple security practices wasn’t enough of a battle already.
I would submit that 1) this article merely panders to mental laziness, and 2) miscasting user resistance to understanding security issues as “rational rejection of security advice” is simple pandering. This sort of framing undermines the efforts of those who are tasked with securing information systems. Thanks.
Are _some_ password policies extreme? Sure. However, occasional password changes, and relatively minor character-selection requirements are far more common, and they’re not that challenging. Password management software even simplifies dealing with multiple passwords. As this article demonstrates, the issue can be painted in a harsher light, but… I have a pre-teen who deals with this better than the adults this article is pandering to. Is being less cooperative and understanding than an adolescent something to aspire to, or having achieved it, to gloat about?
“‘Security’ is not an inherent good”? Without engaging in semantic bickering, I’d argue that the ubiquitousness of that perspective is a factor in the lop-sided power ratio between attackers and defenders in InfoSec today. Again, thanks for the help.
The straw-dog argument that security practitioners as a group suggest security is “worth any cost” deserves a lit match:
Since InfoSec issues can’t be reduced to simplistic cost-benefit number-juggling, we’re constantly struggling for any dollars at all. This, while the bad-guys can consistently out-spend the good-guys, which among other things allows them to attract lots of very talented (albeit mis-guided) individuals. We’re out-gunned, out-spent and out-manned… and getting fired on from both sides. It’s a fascinating path to walk.
The problem with cost-benefit analysis as a tool is that not everything is the kind of nail it can be hammered with. Too often, especially in information security, the scope of related costs is artificially limited, thus skewing the “analysis”. And though risk transference has its place, I would argue that all too often that’s just a thinly-veiled dodging of responsibility and/or expenditure.
Focusing purely on the local risks (i.e. a given computer or even the environment containing a given computer) has been myopic for years now. System breaches are being used increasingly as ways of “collecting” computers for attacking other systems – not just within a given environment, but outside as well. Not doing everything we can to protect the systems under our control has, in a broader social context, moved from being merely foolish to being genuinely irresponsible.
Admittedly, browsing habits are arguably orders of magnitude more problematic than password management, but that’s its own Herculean battle.
Ranting about how much simpler things should be, though emotionally cathartic, doesn’t address the real-world issues we face as global netizens. Creative cooperation and participation are encouraged and appreciated.
– Patrick
I hate the phishing emails. These people appear to get more desperate by the day; I receive 2 or 3 every day and submit them to phishtrackers, a site I found which lets you submit them anonymously.
I agree with much of what you’re saying here, but it surely could do with far more detail.