Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department
In “The Galactic Empire Has Terrible Cybersecurity,” Alex Grigsby looks at a number of high-profile failures, covered in “A New Hope” and the rest of the Star Wars canon.
Unfortunately, the approach he takes to the Galactic Empire obscures the larger, more dangerous issue: the Empire’s cybersecurity culture. There are two errors in Grigsby’s analysis, and they are worth examining. As Yoda once said, “Much to learn you still have.”
Grigsby’s first assumption is that more controls lead to better security. But controls need to be deployed judiciously so that operations can still flow. For example, when Stormtroopers are already patrolling the Death Star, adding layers of access controls may in fact hamper operations. The shuttle with outdated keys in Return of the Jedi shows that security issues are rampant and that officers are used to escalations. Security processes full of routine escalations desensitize people: they get accustomed to saying OK, and are thus unlikely to give their full attention to each escalation.
The second issue is that Grigsby focuses on a few flaws with massive impact. The lack of encryption and the problematic location of the Death Star’s exhaust port matter not so much as one-offs; rather, they reveal the larger security culture at play in the Empire.
There is a singular cause for these failures: Darth Vader and his habit of force-choking those who have failed him. The culture of terror he fosters prevents those under his command from learning from their mistakes and ensures that opportunities for learning will be missed; finger-pointing and blame-passing rule the day. Complaints to the Empire’s human resources department will go unanswered, and those who filed them will probably go missing.
This is the precise opposite of the culture created by Etsy, the online marketplace for handmade and vintage items (including these Star Wars cufflinks). Etsy’s engineers engage in what they call “Blameless Post-Mortems and Just Culture,” in which people feel safe coming clean about mistakes so that the organization can learn from them. After a problem, engineers are encouraged to write up what happened, why it happened, and what they learned, and to share that knowledge widely. Executives are committed to avoiding blame and finger-pointing.
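To make that practice concrete, here is a minimal sketch of the kind of record a blameless post-mortem write-up might capture. The structure and field names are illustrative assumptions of mine, not Etsy’s actual template or tooling.

```python
from dataclasses import dataclass
from typing import List

# A minimal sketch of what a blameless post-mortem write-up might record.
# The structure and field names are illustrative assumptions, not Etsy's
# actual template or tooling.
@dataclass
class PostMortem:
    summary: str                      # what happened, in plain language
    timeline: List[str]               # observable events, in order, without blame
    contributing_factors: List[str]   # conditions that made the failure possible
    lessons_learned: List[str]        # what the organization now knows
    remediations: List[str]           # concrete follow-up actions

    def render(self) -> str:
        """Render the write-up as shareable text, so it can circulate widely
        instead of being filed away."""
        sections = [
            ("What happened", [self.summary]),
            ("Timeline", self.timeline),
            ("Contributing factors", self.contributing_factors),
            ("Lessons learned", self.lessons_learned),
            ("Remediations", self.remediations),
        ]
        return "\n\n".join(
            title + "\n" + "\n".join("- " + item for item in items)
            for title, items in sections
        )
```

The point of the structure is not the tooling but the habit: the write-up describes events and conditions, not culprits, and it is meant to be shared rather than buried.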
The Empire needs a better way to deal with its mistakes, and so do we. Fortunately, we don’t have to fear Lord Vader and can learn from things that have gone wrong.
For example, the DatalossDB [link to http://www.datalossdb.org/statistics no longer works], a project of the non-profit Open Security Foundation, has tracked thousands of incidents involving the loss, theft, or exposure of personally identifiable information since 2008. The Mercatus Center has analyzed Government Accountability Office data and found upwards of 60,000 incidents per year for the last two years. Sadly, while we know of these incidents, including what sorts of data were taken and how many victims there were, in many cases we do not know what happened in enough detail to address the problem. In the first years of public breach reporting (roughly starting in 2004), there was a raft of breaches associated with stolen computers, most of them laptops. As a result, all commercial operating systems now ship with full-disk encryption software. But that may be the only lesson broadly learned so far.
It’s easy to focus on spectacular incidents like the destruction of a Death Star. It’s easy to look to the mythic aspects of the story. It’s harder to understand what went wrong. Was there an architect who brought up the unshielded thermal exhaust port vulnerability? What happened to the engineering change request? What can we learn from that? Did an intrusion detection analyst notice that unauthorized devices were plugged into the network? Were they overwhelmed by a rash of new devices as the new facility was staffed up?
Even for the very largest breaches, there is often a paucity of information about what went wrong. Sometimes, no one wants to know. Sometimes, it devolves into a round of finger-pointing. Sometimes, whatever went wrong happened long enough ago that there are no logs. The practice of “Five Whys” analysis, in which each answer prompts the next “why” until a root cause surfaces, is rare.
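As a sketch of how such an analysis might run, here is a hypothetical “Five Whys” chain for the Death Star, written in Python for concreteness. The questions and answers are invented for illustration; the point is that each answer becomes the subject of the next question, pushing past the technical flaw toward the cultural root cause.

```python
# A hypothetical "Five Whys" chain for the Death Star. The answers are
# invented for illustration; each answer becomes the subject of the next
# question, moving from the technical flaw toward the cultural root cause.
five_whys = [
    ("Why was the station destroyed?",
     "A proton torpedo entered an unshielded thermal exhaust port."),
    ("Why was the exhaust port unshielded?",
     "The vulnerability was raised, but the engineering change request stalled."),
    ("Why did the change request stall?",
     "No one wanted to report a schedule slip to Lord Vader."),
    ("Why did no one want to report a slip?",
     "Officers who delivered bad news tended to be force-choked."),
    ("Why were officers punished for bad news?",
     "The culture treated failure as disloyalty rather than as something to learn from."),
]

for question, answer in five_whys:
    print(question)
    print("  ->", answer)
```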
And when, against all odds, an organization digs in and asks what happened, the lawyers are often there to announce that under no circumstances should the findings be shown to anyone. After all, there will be lawsuits. (While I am not a lawyer, it seems to me that such lawsuits happen regardless of whether a post-mortem report exists or is available, and a good analysis of what went wrong might be seen as evidence of a mature, learning practice.)
What does not happen, given our fear of lawsuits and other phantom menaces, is learning from mistakes. And so R2-D2 plugs into every USB port in sight, and does so for more than twenty years.
We know from a variety of fields, including aircraft safety, nuclear safety, and medical safety, that high degrees of safety and security are an outcome of a just culture and a willingness to discuss what’s gone wrong. Attention to “near misses” allows organizations to learn faster.
This is what the National Transportation Safety Board does when a plane crashes or a train derails.
We need to get better at post-mortems for cybersecurity. We need to publish them so we can learn the analysis methods others are developing. We need to publish them so we can assess if the conclusions are credible. We need to publish them so we can perform statistical analyses. We need to publish them so that we can do science.
There are many reasons to prevaricate. But the First Order (the bad guys in The Force Awakens) can’t afford another Death Star, and we cannot afford to keep doing what we’ve been doing while hoping it will magically get better.
It’s not our only hope, but it certainly would be a new hope.
(Originally appeared on the Council on Foreign Relations Net Politics blog.)