Lunar Roving Vehicle, Redux
What can the moon buggy teach us about modeling?
While I'm talking about the Lunar Rover, I want to tell a tale of two models. One you've met: the Lego model. The other is currently on display at the Museum of Flight in Seattle. (The Museum has a longer blog post that implies the one on display is a training mock-up.) The two models could hardly be more different: one weighs a few pounds and can be held in your hands; the other probably weighs a few hundred. One is very high fidelity, the other quite low. One you can play with... the other I didn’t check, but there is a fence around it.
When we say “All models are wrong, and some models are useful,” we have to understand the goals of a model. One of these was used to plan a moon mission; the other is mostly for fun, and also educational. (For example, the Lego model has stickers marked “wax,” because you can’t use oil as a lubricant in a vacuum: the liquid evaporates, and you probably don’t want to be spraying graphite around and contaminating samples.)
Being explicit about the goals of a model means you can tune your engineering work to maximize the return on investment. Even when the goal is “analyze this system” or “get everyone to understand the design,” being explicit lets you loop back to that goal and assess whether you’re spending the right amount of effort.
One weighty goal that’s quite hard to balance against others is having the model accurately account for gravity off Earth. For simulating weightlessness in space, NASA has the Neutral Buoyancy Lab. There are Mars rover models that simulate only the weight of a rover on Mars; that is, they’re tuned for that purpose and nothing else. In the case of the moon buggies, designed to carry “almost four times their weight” (according to the Museum of Flight blog) or twice (according to Wikipedia), that means the model has to focus almost entirely on wheels and chassis and leave everything else out. Looking at some other photos I took, which show what look like real toggle switches, I think the museum has either the human factors model or the one-g trainer. (Wikipedia has a list of the eight full-scale models.)
Last week in a threat modeling training, we had someone spend nearly an hour crafting a beautiful DFD of a fake system that we use in our trainings. It was, admittedly, a nice diagram. Elements were grouped, nicely arranged, well labeled... and he spent an hour on it. Other people kept calling it “the better diagram,” and I pushed back: it’s not better on some universal scale. It represented a different return on investment than other people had chosen.
Similarly, as much as I wouldn’t mind having my own Lunar Rover, storing it would be a pain. We need to consider the properties that we need in a given model.
Sometimes the models we make in threat modeling leave a great deal out. We might want an infrastructure model that focuses on the infrastructure; including software components can make for an overwhelming diagram. Or we might create a diagram focused on a specific scenario, like upgrading or canceling an account, that digs into components that aren’t otherwise prioritized.
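To make the idea of scoped views a little more concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the element names, the `layer` and `scenarios` fields, and the `view` helper are illustrative conventions, not the schema of any real threat modeling tool. The point is only that one underlying model can yield several diagrams, each tuned to a different goal.

```python
# Hypothetical model: a flat list of elements, each tagged with a layer
# and the scenarios it participates in. All names are made up.
elements = [
    {"name": "Web Frontend",     "layer": "software",       "scenarios": ["cancel-account"]},
    {"name": "Billing Service",  "layer": "software",       "scenarios": ["cancel-account"]},
    {"name": "Load Balancer",    "layer": "infrastructure", "scenarios": []},
    {"name": "Postgres Cluster", "layer": "infrastructure", "scenarios": ["cancel-account"]},
]

def view(elements, *, layer=None, scenario=None):
    """Return only the elements relevant to one goal,
    leaving the rest of the model out of the diagram."""
    out = []
    for e in elements:
        if layer and e["layer"] != layer:
            continue
        if scenario and scenario not in e["scenarios"]:
            continue
        out.append(e)
    return out

# An infrastructure-only diagram and a scenario-focused one,
# drawn from the same underlying model.
infra = view(elements, layer="infrastructure")
cancel = view(elements, scenario="cancel-account")
print([e["name"] for e in infra])
print([e["name"] for e in cancel])
```

Neither view is “the better diagram”; each leaves out what its goal doesn’t need.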