Friday Star Wars: Open Design
This week and next are the two posts which inspired me to use Star Wars to illustrate Saltzer and Schroeder’s design principles. (More on that in the first post of the series, Star Wars: Economy Of Mechanism.) This week, we look at the principle of Open Design:
Open design: The design should not be secret. The mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, more easily protected, keys or passwords. This decoupling of protection mechanisms from protection keys permits the mechanisms to be examined by many reviewers without concern that the review may itself compromise the safeguards. In addition, any skeptical user may be allowed to convince himself that the system he is about to use is adequate for his purpose. Finally, it is simply not realistic to attempt to maintain secrecy for any system which receives wide distribution.
The opening sentence of this principle is widely and loudly contested. The Gordian knot has, I think, been effectively sliced by Peter Swire, in “A Model For When Disclosure Helps Security.”
In truth, the knot was based on poor understandings of Kerckhoffs. In “La Cryptographie Militaire,” Kerckhoffs explains that the essence of military cryptography is that the security of the system must not rely on the secrecy of anything which is not easily changed. Poor understandings of Kerckhoffs abound. For example, my “Where is that Shuttle Going?” claims that “An attacker who learns the key learns nothing that helps them break any message encrypted with a different key. That’s the essence of Kerkhoff’s principle: that systems should be designed that way.” That’s quite a mis-statement: key independence is a fine property, but Kerckhoffs’ point is that the only secrets a system relies on should be the ones that are cheap to replace.
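To make that concrete, here is a minimal sketch (mine, in Python; it isn’t from Kerckhoffs or the paper) of a design that follows his advice: the algorithm is public and effectively unchangeable, the only secret is a key, and that key is cheap to rotate if it leaks.

```python
# Minimal sketch of Kerckhoffs' advice: the algorithm (HMAC-SHA256) is public;
# the only secret is a key, and replacing a leaked key is a one-line operation.
import hashlib
import hmac
import secrets

def make_key() -> bytes:
    # Keys are the easily-changed secret: rotating one is a single call.
    return secrets.token_bytes(32)

def sign(key: bytes, message: bytes) -> bytes:
    # Nothing here depends on the attacker not knowing the design.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), tag)

old_key = make_key()
tag = sign(old_key, b"attack at dawn")
assert verify(old_key, b"attack at dawn", tag)

# If the key leaks, recovery is cheap: generate a new one and the old tag is useless.
new_key = make_key()
assert not verify(new_key, b"attack at dawn", tag)
```

Contrast that with a design whose security depends on the attacker never learning which algorithm you use: once that secret leaks, there is nothing easy to change.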
In a classical castle, things which are easy to change are things like the frequency with which patrols go out, or the routes which they take. Harder to change is the location of the walls, or where your water comes from. So your security should not depend on your walls or water source being secret. Over time, those secrets will leak out. When they do, they’re hard to alter, even if you know they’ve leaked out.
Now, I promised in “Star Wars and the Principle of Least Privilege” to return to R2’s copy of the plans for the Death Star, and today, I shall. Because R2’s copy of the plans, which are not easily changed, ultimately leads to today’s illustration:
The overall plans of the Death Star are hard to change. That’s not to say that they should be published, but the security of the Death Star should not rely on them remaining secret. Further, when the rebels attack with snub fighters, the flaw is easily found:
OFFICER: We’ve analyzed their attack, sir, and there is a danger. Should I have your ship standing by?
TARKIN: Evacuate? In our moment of triumph? I think you overestimate their chances!
Good call, Grand Moff! Really, though, this is the same call that management makes day in and day out when technical people tell them there is a danger. Usually, the danger goes unexploited. Further, our officer has provided the world’s worst risk assessment: “There is a danger.” Really? Well! Thank you for educating us. Perhaps next time, you could explain probabilities and impacts? (Oh. Wait. To coin a phrase, you have failed us for the last time.) The assessment also takes less than thirty minutes; maybe the Empire should have invested a little more in up-front design analysis. It’s also important to understand that attacks only get better and easier as time goes on. As researchers do a better job of sharing what they learn, the attacks get more and more clever, and the un-exploitable becomes exploitable. (Thanks to SC for that link.)
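Purely as an illustration (this is my sketch, not anything from the film, and the numbers are invented), here is what a risk statement that actually names a probability and an impact might look like:

```python
# Hypothetical sketch: a risk statement with an estimated probability and impact,
# instead of the officer's bare "there is a danger."
from dataclasses import dataclass

@dataclass
class Risk:
    scenario: str
    probability: float  # estimated chance the attack succeeds, 0.0 to 1.0
    impact: float       # fraction of the asset lost if it succeeds, 0.0 to 1.0

    def expected_loss(self) -> float:
        # Expected loss = probability x impact; crude, but it gives Tarkin
        # something to weigh against the cost of standing a ship by.
        return self.probability * self.impact

exhaust_port = Risk(
    scenario="Proton torpedo into the unshielded thermal exhaust port",
    probability=0.05,  # invented figure for illustration
    impact=1.0,        # total loss of the station
)
print(f"{exhaust_port.scenario}: expected loss {exhaust_port.expected_loss():.2f}")
```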
Had the Death Star been designed with an expectation that the plans would leak, someone might have taken that half-hour earlier in the process, when it could have made a difference.
Next week, we’ll close out the Star Wars and Saltzer and Schroeder series with the principle of psychological acceptability.
The man’s name was actually “Kerckhoffs”. It’s “Kerckhoffs’ Law”, not “Kerckhoff’s Law”.