
Learning Lessons from Aviation

The definition of insanity is doing the same thing over and over and expecting different results. We can do better, and a major new report explains how.

[Cover image: workshop report, "Learning from Cyber Incidents"]

For literally 30 years, people have been talking about the idea of a “cyber NTSB.” Unfortunately, most people have stopped at the metaphor, and so has the idea. And so we see the same problems impact system after system, organization after organization, year after year. And rather than learning from these incidents so we can do better, we blame the victims. We insist that they should read through the dozens of standards, apply risk management techniques, select their defenses and somehow defend themselves from persistent attackers. We know how that's working out.

Rob Knake and I have been working for the last several years on a project to adapt learning models from aviation to cyber. We had planned to convene a workshop in early 2020; the pandemic forced us to delay and then move online, but that shift allowed us to bring together over 70 experts, including leaders from the NTSB and ASRS, over an extended period earlier this year. We learned so much that the report is only now ready for release. I also want to acknowledge Tarah Wheeler, our third author on the report.

The report is almost certainly the fullest investigation to date into what an NTSB for cyber might be. As we were preparing it, the May Executive Order on Cybersecurity was released, including a Cyber Safety Review Board. (Steve Bellovin and I shared our thoughts on that in Lawfare in June.) Because we had this amazing confluence of a convening of experts and the Order, we went beyond our mandate and mined the discussions and our notes for a section of recommendations for the nascent board. I was excited to see that reach CISA over the weekend.

Since the report was for the National Science Foundation, we also cataloged over 50 research questions worthy of further scientific study that we hope will be pursued.

Our major findings are excerpted below — each is further explained in the executive summary:

  • Third party and in-house investigations are no substitute for objective, independent investigations.
  • Companies are unlikely to fully cooperate under a voluntary regime.
  • Product, tool, and control failure must be identified in an objective manner.
  • Findings may be sensitive but should be disseminated as widely as possible.
  • Fact finding should be kept separate from fault finding.
  • “Near Miss” reporting can complement incident investigations.

We're grateful to Harvard's Belfer Center, the National Science Foundation, the Hewlett Foundation, and Northeastern's Global Resilience Institute for support, and to all of the workshop participants.

Lastly, I want to end this post with the report's closing words:

Secret knowledge is mysticism, not science or engineering. We heard a great deal in our workshop about how various groups have access to useful data which drives decisions that they believe are good. Yet the decisions they come to are different, which has a cost both to those trying to comply with the advice, and in the credibility of the advice. There are certainly challenges: informing opponents, ranging from threat actors to lawyers, of what you know can be worrisome. Subjecting one’s reasoning to criticism is scary. It is also a constant in fields with high rates of engineering success, ranging from bridge building to medical device manufacture. The consequences for leaving the field of cybersecurity in a prolonged adolescence are now too great; it’s time for us to grow up.