25 Years in Application Security: Looking Back, Looking Forward

 
Author's copy. Originally published in IEEE Security & Privacy, vol. 20, no. 1, pp. 109-112, Jan.-Feb. 2022, doi: 10.1109/MSEC.2021.3127961.

The author looks back at 25 years of industrial application security through the lens of a code review document released in 1996, examines the progress that has been made, and considers what those trends imply for the future.

Twenty-five years ago, I published a set of code review guidelines that I had crafted while working for a bank.7 I released them to get feedback and advice, because back then there was exceptionally little practical advice on what we now call AppSec.

Looking back at what’s there, it’s explicitly a review document for a firewall group, taking code that’s “thrown over a wall” to be run and operated by that group. The document includes a mix of design advice, coding requirements, and operational needs. There are also administrative aspects, such as a rule that when analyses differed, we would record the least positive one. The challenges we tackled were quite something. Before I get to those challenges, let me focus on what we did well (Table 1):

Table 1. Far and mixed, good and bad.
What the guidelines covered          | Innovations since
Input validation                     | Threat modeling
Logging                              | Safer languages and libraries
Compiler warnings and tainting       | Static analysis
Avoiding dangerous calls             | Writing safe calls
Good use of defenses like sandboxing | Much better sandboxes
Fuzzing (in theory)                  | Fuzzing practice

There’s some goodness in there...and yes, we ran Perl code on user-supplied data. YOLO? Not at the time. Static analysis is limited to lint — the first security-focused tools, like RATS, were not yet available. I had built, or was starting to build, a tool that’s been lost to time: a large shell script that used ldd and grep to find calls to dangerous functions. In hindsight, it was a small step forward. Another important step was to give specific advice about how to be safe. Rather than just saying “don’t use system()”, we gave specific advice on exec().
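To give a flavor of that advice, here is a minimal modern sketch (mine, not code from the guidelines) of the difference: system() hands its whole argument string to a shell, so attacker-supplied data can smuggle in extra commands, while an exec-family call with a fixed argument vector passes each argument literally.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Risky: the shell interprets metacharacters, so a filename like
       "x; rm -rf /" becomes a second command. */
    void print_file_unsafely(const char *filename) {
        char cmd[1024];
        snprintf(cmd, sizeof cmd, "cat %s", filename);
        system(cmd);
    }

    /* Safer: no shell is involved; the filename is passed as a single,
       literal argument to /bin/cat. */
    int print_file(const char *filename) {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            char *const argv[] = { "cat", (char *)filename, NULL };
            execv("/bin/cat", argv);
            _exit(127);  /* reached only if execv() fails */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        return status;
    }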

There was no concept that what we were doing was modeling threats, no hint toward standardizing how we got to an understanding of the code. The idea of paying a bounty on bugs was not unheard of, but the idea that a bank would do so... I don’t think it ever came up.

With hindsight, there are some strange inconsistencies. For example, while we were aware of the dangers of privileged code, we spent a lot of time explaining how to do it right, rather than setting a rule that your code was going to run under a normal user ID instead of as root. That was a result of the group really getting all sorts of code. This “coding safely” thing was relatively unusual and an unreasonably dark art. That is not to say there were no pockets of understanding, but rather that it was far from commercial programming practice.
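For what it’s worth, the rule we didn’t set is short to state in code. Here is a minimal sketch (the account name "appuser" is illustrative, not from the guidelines) of a program that starts as root, does its one privileged task, and then permanently drops to an ordinary user ID, checking every step:

    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void drop_privileges(const char *user) {
        struct passwd *pw = getpwnam(user);
        if (pw == NULL) {
            fprintf(stderr, "unknown user %s\n", user);
            exit(EXIT_FAILURE);
        }
        /* Order matters: shed supplementary groups and the group ID
           before giving up the root user ID, and check every return value. */
        if (setgroups(0, NULL) != 0 ||
            setgid(pw->pw_gid) != 0 ||
            setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            exit(EXIT_FAILURE);
        }
        /* Verify the drop is irreversible. */
        if (setuid(0) != -1) {
            fprintf(stderr, "privilege drop failed\n");
            exit(EXIT_FAILURE);
        }
    }

    int main(void) {
        /* Do the one privileged thing (e.g., bind a low port) here... */
        drop_privileges("appuser");
        /* ...and everything else runs as an ordinary user. */
        return 0;
    }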

Secure coding as a discipline, rather than an art, was not something we saw. The second edition of Practical Unix and Internet Security did have a chapter on the topic, covering safe setuid and networking code. The reference list is short, and of the 19 links there, only Ross Anderson’s papers are still available. Gary McGraw and Ed Felten’s Java Security was in production and was released a few months later. Lincoln Stein’s book on web security with its chapter on safe common gateway interface programming was two years away. The Open Web Application Security Project (OWASP) was still five years in the future. Competition in commercial tooling by companies like Coverity, Ounce Labs, and Fortify started around 2003, and didn’t really grow until about 2008 or 2010. There was no memory safety in mainstream languages or programming frameworks.

The Present

Looking to today, tools have made tremendous progress. There are safe, usable ways to copy strings in every language! Less sarcastically, the importance of language and library design to security has permeated well beyond security to language designers, and new languages are generally designed with at least memory safety in mind. Rust, Go, and others made security a selling point. Static, dynamic, and runtime analysis are merging. Security as code, immutable builds, AppArmor, and other modern replacements for chroot defend better and more flexibly. There’s applied use of formal logic (TLA+) in the real world.11
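To make the string-copy quip concrete, here is a minimal C sketch (the function and parameter names are mine) of the difference between the old way and a bounded copy:

    #include <stdio.h>

    void copy_name(char *dst, size_t dstlen, const char *src) {
        /* The 1990s way: writes past the end of dst if src is too long.
           strcpy(dst, src);
        */

        /* A bounded copy: never writes more than dstlen bytes, always
           NUL-terminates (when dstlen > 0), and truncates long input. */
        snprintf(dst, dstlen, "%s", src);
    }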

Attacks have also progressed in ways we never considered: heap sprays, double frees, SQL injection, XSS, header smuggling, and the list goes on. Obviously, we find new vulnerabilities, but less often we find entire new types, like SQL injection (1998). There are also new vulnerability species, classes, and phyla. We can think of injection attacks as a class, containing perhaps a species like SQL injection or cross-site scripting. The discovery of new classes is rare and difficult, even when we were pointed in the right direction. For example, I know I spent a few days in 1997 trying to figure out whether freeing memory a second time could have bad effects, and I failed to see how to exploit double frees. I hope I remembered to advise zeroing out the memory, but I didn’t consider that a compiler might optimize that away. I don’t think the compilers of the day were that smart. I hope the compilers weren’t that smart, but hope is no replacement for reading the machine code.
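A minimal C sketch of both pitfalls (mine, not from the 1996 document) follows: the double free itself, and a scrub-before-free that a compiler is allowed to delete as a dead store. explicit_bzero(), where available, is one modern answer; the fallback loop is my assumption about a portable alternative.

    #include <stdlib.h>
    #include <string.h>

    void double_free_bug(void) {
        char *p = malloc(64);
        if (p == NULL)
            return;
        free(p);
        /* ... later, on another code path ... */
        free(p);   /* undefined behavior: corrupts allocator metadata in
                      ways attackers eventually learned to exploit */
    }

    void scrub_then_free(char *secret, size_t len) {
        memset(secret, 0, len);   /* a dead store: the compiler may remove it
                                     because secret is never read afterward */
        free(secret);
    }

    void scrub_then_free_better(char *secret, size_t len) {
    #if defined(__GLIBC__) || defined(__OpenBSD__)
        explicit_bzero(secret, len);        /* specified not to be optimized away */
    #else
        volatile unsigned char *v = (volatile unsigned char *)secret;
        for (size_t i = 0; i < len; i++)    /* volatile stores survive optimization */
            v[i] = 0;
    #endif
        free(secret);
    }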

We have tools that we hadn’t thought of in 1996. Structured threat modeling has transformed. We have a threat modeling manifesto,12 structures for doing the work, and a plethora of new tooling to help. Heck, we have structures and tooling and even games! Compilers, operating systems, and runtimes have various randomizations like address space layout randomization, protection modes like write-xor-execute, and research programs like CHERI are improving hardware support for defense.10 We have automatic updates in much of the code that matters and systems administrators who turn off those updates.

Many large organizations have made public commitments to security in development, starting with Microsoft. That was a heck of a shock at the time.6 There are many public replacements for my baby step of source code review: secure development lifecycles with training, tooling, and process. Organizations like SAFECode and the National Institute of Standards and Technology are crafting guidance and standards for software security. The U.S. Food and Drug Administration is working on a new premarket guidance for cybersecurity. Automotive companies are working on a cross-industry standard. Lastly, models like the Jenga model allow us to prise apart the technical, interpersonal, and organizational elements, which are a jumble in the original article.8

The Future

So, where are we going over the next 25 years? I can confidently predict that ... I’ll hate myself for writing this in a decade, never mind two.

First, let me make some predictions of what will remain. Most systems will still be running kernels written in C. From an AppSec perspective, such code is more vulnerable because it relies on programmers to get memory management right and on other programmers not to improve their compilers in ways that optimize away those memory management intentions. A good deal of today’s kernel code will have survived the year 2038 date rollover and still be in use. It will have better defenses, but they will still be imperfect.10
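For the record, the 2038 rollover is simple arithmetic: a signed 32-bit time_t runs out 2^31 - 1 seconds after January 1, 1970. A tiny sketch, assuming a platform whose time_t can represent that instant:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        int64_t last = INT32_MAX;     /* 2,147,483,647 seconds after the epoch */
        time_t t = (time_t)last;
        /* Prints roughly: Tue Jan 19 03:14:07 2038 (UTC). One second later,
           a signed 32-bit counter wraps around to December 1901. */
        printf("last 32-bit second: %" PRId64 " = %s", last, asctime(gmtime(&t)));
        return 0;
    }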

Recall that the core kernels of today — Linux, Windows, and Mach — were all in use in 1996. If we want to plan to be off C in 2046, we need to get started. Furthermore, code written today for cars, water heaters, air conditioning systems, and other long-lived appliances will still be running and needing maintenance. The processors that code runs on will have layers of virtualization, and that virtualization and its optimizations will still have side effects that attackers will exploit.

The code on top of that will be written in languages with new and fancier defenses. Memory corruption and temporal corruption will be tricky to achieve, and their impact on the system will be limited, but their impact on applications will be large. We will see a rise in code constructed from components, and those components will have better security properties when properly used. We already see code completion by artificial intelligence (AI), trained on code from GitHub. (See later in this article about machine learning on “real-world data,” and let the AI make the joke about “automating the work of copying and pasting from Stack Exchange.”) We will see more attacks on the seams between these systems and on differences in the way they parse edge cases. Some of these will be discovered by automated analytic techniques.

These larger systems will be better isolated, much like smartphones and web browsers are better isolated than traditional desktop operating systems. Such isolation will lead to a rise in horizontal “expansion of authority” attacks to complement “elevation of privilege.”5

Machine learning systems will be explainable, and those explanations will be frequently and transparently trotted out. They will remain vulnerable to adversarial examples, especially the systems that learn from “real-world data” or, as I like to call it, data that’s been carefully crafted by your attackers. And because “fair” has many mutually exclusive definitions, we will have given up on the idea that these systems can be made “fair” (see Friedler et al.1 for the elegant argument).

Attacks on the neural interfaces — well, don’t even think about ransomware, because the ransomware won’t let you. Attacks will grow dramatically at the business layer. Ransomware will change the software that issues bids for your business, add misspellings to your dictionary, and send micropayments to an extra editing service to fix them. Attacks will be at the human layer: trolling, dog-piling, theft, and release of intimate data. We see stalkerware today and the start of threat modeling to address it.9

Attacks will find refuge and solace where it’s profitable to let them. Companies are driven by profit and will make product tradeoffs with security and privacy implications. For example, social media is driven by engagement, and people saying outrageous things drives engagement. So, the social media platforms will let it continue until the costs to them outweigh the benefits. We might also see an increase in user agents, such as browsers or apps, that act on the user’s behalf to rebuild the user experience. Another example might be paid outreach via LinkedIn. Not a single spam message I’ve reported, offering me leads or cheap outsourced development, has been seen as spam when it’s sent by a subscriber to their paid “InMail” product. These patterns will appear over and over, because our mechanisms to address costs to society are breaking down, and here, we start to leave the realm of AppSec.

Or, we start to leave the realm of AppSec as it exists at this moment in time. Even today, we see regulations like the General Data Protection Regulation requiring privacy: gathering and tracking expressions of intent, tracing data, and ensuring that it’s used in accordance with rules. Perhaps the cost of engagement tools will change because of regulation, and several leading universities have recently started technology policy shops.

We see Apple requiring privacy labels on apps. These aspects are grafted on today — a bit like code review was grafted on 25 years ago. We can expect that “we had a mature threat analysis approach” will sound better to regulators than “a lawyer interviewed the team once a year.” But it has also been 20 years since Lessig said “code is law,”2 and for good and bad, we’ll see more algorithmic implementation of laws, and algorithmic attacks on the inconsistencies between such things.

Tools are going to get more capable. They will replace our artisanal coding activity and our artisanal analysis techniques. But AI will still be 25 years from capturing the breadth of human emotion and expectations, and the security issues that emerge in those gaps will require smart humans to predict and manage.

Looking more broadly across security, the era of sweeping security issues under the rug has already ended. Around the world, data breach notifications must be sent to regulators and customers. Those rules are getting tighter: the list of incident types that must be reported is growing, the timelines for reporting are shrinking, and the detail that must be revealed is expanding. Reports like the Verizon DBIR or Cyentia’s IRIS series are setting new standards for understanding the causes and impacts of breaches. We will see a dramatic expansion of how we learn from incidents, and that will inform what defenses we develop and deploy.3


Acknowledgments

This retrospective would have been impossible without Steve MacLellan of Fidelity giving approval to share the code review guidelines, and I want to also thank the entire team I worked with there. More important than giving me the chance to work on AppSec or release the material, they were really quite ahead of their time in doing this work. I’m grateful to Steve Lipner and Mary Ellen Zurko for comments on drafts of this article.

References
  1. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Commun. ACM, vol. 64, no. 4, pp. 136–143, Apr. 2021, doi: 10.1145/3433949.
  2. L. Lessig, Code: And Other Laws of Cyberspace. New York: Basic Books, 1999.
  3. R. Knake, A. Shostack, and T. Wheeler, “Learning from cyber incidents: Adapting aviation safety models to cybersecurity,” Belfer Center for Science and International Affairs, Harvard Kennedy School, Nov. 12, 2021.
  4. S. Lipner, T. Jaeger, and M. E. Zurko, “Lessons from VAX/SVS for high-assurance VM systems,” IEEE Security Privacy, vol. 10, no. 6, pp. 26–35, 2012.
  5. M. Miller, Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control, Ph.D. dissertation, Johns Hopkins University, Baltimore, MD, 2006, p. 302.
  6. B. Schneier and A. Shostack, Results, Not Resolutions. Security Focus, 2001.
  7. A. Shostack, “Source code review guidelines,” Shostack, Aug. 1996. [Online]. Available: https://shostack.org/files/essays/review
  8. A. Shostack, “The Jenga model of threat modeling,” Shostack, Corp. White Paper, 2020. [Online]. Available: https://shostack.org/files/papers/The_Jenga_View_of_Threat_Modeling.pdf
  9. J. Slupska and L. M. Tanczer, “Threat modeling intimate partner violence: Tech abuse as a cybersecurity challenge in the Internet of Things,” in The Emerald International Handbook of Technology Facilitated Violence and Abuse. Bingley, U.K.: Emerald Publishing Limited, 2021.
  10. R. N. M. Watson, B. Laurie, and A. Richardson, “Assessing the viability of an open-source CHERI desktop software ecosystem,” Capabilities Ltd, Sep. 17, 2021. [Online]. Available: https://www.capabilitieslimited.co.uk/pdfs/20210917-capltd-cheri-desktop-report-version1-FINAL.pdf and summary at https://www.lightbluetouchpaper.org/2021/11/11/report-assessing-the-viability-of-an-open-source-cheri-desktop-software-ecosystem/
  11. C. Newcombe et al., “How Amazon Web Services uses formal methods,” Commun. ACM, vol. 58, no. 4, pp. 66–73, Apr. 2015, doi: 10.1145/2699417.
  12. Z. Braiterman et al., “The Threat Modeling Manifesto,” 2020. [Online]. Available: https://www.threatmodelingmanifesto.org