
Engineers vs. Scammers

Adam recently sent me a link to a paper titled, “Understanding scam victims: seven principles for systems security.”  The paper examines a number of real-world (i.e. face-to-face) frauds and then extrapolates security principles which can be applied generically to both face-to-face and information or IT security problems.

By illustrating these principles with examples taken from the British television series The Real Hustle, the authors provide accessible demonstrations of how reasonable, intelligent people can be manipulated into making extremely poor (for them) decisions and being defrauded as a result. Perhaps you’re thinking, “That’s all well and good, but what do I care about other people getting scammed due to behaviors that I would like to believe I would never engage in?”

It’s a nice idea, and while it may hold true for some of the scams, such as those exploiting the Dishonesty Principle (“It’s illegal, that’s why you’re getting such a good deal.”), many of them work not because the victim is trying to do something sneaky, but because they’re either trying to “Do The Right Thing” (the Social Compliance Principle), just trying to get the job done (the Distraction Principle), or simply getting fooled (the Deception Principle).

So, like it or not, the paper would seem to tell us, you’re Damned if You Do, Damned If You Don’t, and eventually you’ll let your guard down at just the wrong time. The sub-title of the paper might as well have been, “Why your security system will eventually fail, no matter how good it is.”

So what’s the point of trying?

Well, first off, because all hope is not lost—even if you don’t read the paper, there are a number of points to consider, two of which I want to call out because they are essential to designing or analyzing a security system (or, really, a system which requires a degree of security):

• Identify the cases where the security of your system depends on an authentication task (of known people, of unknown people or of “objects”, including cash machines and web sites) performed by humans.

• Understand and acknowledge that users are very good at recognizing known people but easily deceived when asked to “authenticate”, in the widest sense of the word, “objects” or unknown people.

By understanding how and why systems fail, we can design them in such a manner as to avoid the failure. For example, never forget that authentication is really a two-way street, even though most people are bad (at best) at performing their half of it and oblivious (in general) to their role in the problem.

In the case of an ATM, the traditional security efforts focus on protecting the ATM from malicious users. The fact that the users must also be protected from malicious ATMs never seems to come up. Likewise, phishing and other forms of credential harvesting depend on the victim being unable to accurately authenticate the requester of their credentials, whether due to falling prey to Distraction, Deception, or Social Compliance.

By understanding this and explicitly forcing that problem to be considered “in-scope” by the system designers, we accomplish two important security goals. First, we address the fact that authentication is a two-way street, even if it is only a formal process (e.g. logging in) in one direction. Second, we expand the pool of people working on solving the problem, and thus potentially create a valuable innovation which can be applied to that problem elsewhere.
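
To make the “two-way street” concrete, here is a minimal sketch of the idea in Python (mine, not the paper’s; the host name and login endpoint are made up): a client that insists on authenticating the “object”, the web site, before it will hand over the user’s credentials.

    import ssl
    import http.client

    # Hypothetical host and endpoint, purely for illustration. The point is
    # that the client authenticates the "object" (the web site) before the
    # user's credentials ever leave the machine.
    context = ssl.create_default_context()    # verify the certificate chain
    context.check_hostname = True             # and the server's hostname
    context.verify_mode = ssl.CERT_REQUIRED

    conn = http.client.HTTPSConnection("login.example.com", context=context)
    try:
        # If the server cannot prove who it is, the TLS handshake fails here
        # and the password is never sent to an impostor.
        conn.request(
            "POST", "/login",
            body="user=alice&password=not-a-real-password",
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
        print(conn.getresponse().status)
    except ssl.SSLCertVerificationError as err:
        # Refusing to continue is the machine equivalent of a customer
        # walking away from a suspicious-looking cash machine.
        print("Server failed authentication; credentials withheld:", err)
    finally:
        conn.close()

Nothing exotic is happening there; the point is simply that the check exists and fails closed, which is exactly what the human standing in front of a fake ATM has no way to do.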

What we can’t do is take the easy way out and “blame the users.” In fact, the authors even close their paper by reminding us of this fact:

Our message for the system security architect is that it is naïve and pointless just to lay the blame on the users and whinge that “the system I designed would be secure if only users were less gullible”; instead, the successful security designer seeking a robust solution will acknowledge the existence of these vulnerabilities as an unavoidable consequence of human nature and will actively build safeguards that prevent their exploitation.

While Adam, Alex and I were discussing the paper, Adam took the bold step of declaring that,

The principles that tell an engineer what to do are better than those that tell a scammer what to do.

Personally, I’m going to confess that while I don’t disagree with his statement, I don’t think it matters, either. Engineering principles can help us make better decisions and design better systems, but unfortunately, they don’t let us make perfect decisions or design perfect solutions. Fortunately, even if they did we’d still have innovation elsewhere to create new and interesting problems which people would employ us to solve.

Regardless, there will always be an interplay of innovation and reaction on both sides of the equation—for scammers or attackers and for security engineers or defenders*. The attackers find a hole, the defenders find a way to close it or render it ineffective. So the attackers find a new hole, ad infinitum. The hole can be a weakness in an IT system, a business process, or, as is the case in many of the examples in Part One, the human brain.

Here’s where it gets tricky, though. Most defenders work for Someone Else. By that, I mean that they are employees, either directly or by contract, of some other entity which is in the business of Getting Something Done. That entity is typically not in the business of Securing Things. Thus, the Distraction Principle is already working against us before we ever even get to work in the morning.

Next, when the asset is not something obviously valuable, such as money, people’s ability to recognize its value fails rapidly, especially when they interact with it on a day-to-day basis. This is especially true when trying to protect trade secrets and other Intellectual Assets. I have been in meetings where Very Senior people were asked to identify the critical secrets in their branch of the organization and were unable to do so. It’s not that they didn’t have any; it’s that they couldn’t pick them out of the crowd of their responsibilities, because everyone they dealt with was also authorized to see them, so they had no recurring reminder or other filtering mechanism. This is one of the reasons that Top Secret documents are stamped “Top Secret.”

In the examples from the paper, scammers exploit the opposite form of this problem through the Deception Principle, convincing people that valueless things are, in fact, valuable: fake diamond rings, TV boxes with rocks in them, etc. In the corporate world, people don’t know, understand or remember what’s valuable, and thus are unable to properly prioritize protecting it among their other responsibilities (the Distraction Principle again).

Thus, I would argue that the issue is that people are just bad at accurately assessing value; that their ability to do so degrades over time; and that their ability is further weakened when scammers manipulate them. Call it the Valuation Principle. This, in turn, makes them more vulnerable to a variety of ways of losing their valuables, whether cash, a car, or a Trade Secret, by application of the other Principles in the paper.

The challenge is that while it’s irrational to protect everything if only a small portion of the assets needs the highest level of protection, most people (and thus, their organizations) are really bad at determining what level of security an asset actually requires (even with tools like classification and risk assessment). Cost-effective Information Protection is as much about determining what to protect as about ensuring that it’s protected.
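
As a back-of-the-envelope illustration of that last point (the assets and scores below are invented, and this sketch is mine rather than anything from the paper), even a crude likelihood-times-impact score is enough to separate the handful of assets that deserve the highest level of protection from everything else:

    # Invented assets with 1-5 scores, purely to illustrate ranking by a
    # crude risk score (likelihood of compromise x business impact).
    assets = [
        ("customer payment data",    4, 5),
        ("unreleased product specs", 3, 4),
        ("cafeteria menu archive",   5, 1),
        ("code-signing keys",        2, 5),
    ]

    # Protect the top of the list first; the bottom may not be worth the cost.
    for name, likelihood, impact in sorted(assets, key=lambda a: a[1] * a[2], reverse=True):
        print(f"{name:25} risk = {likelihood * impact}")

The hard part, of course, is the piece the sketch assumes away: getting honest likelihood and impact numbers out of people who, as argued above, are bad at assessing value in the first place.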

* I assign these roles under the assumption that the defender holds an asset that an attacker wants to access or possess.