15 Years of Software Security: Looking Back and Looking Forward

Fifteen years ago, I posted a copy of “Source Code Review Guidelines” to the web. I’d created them for a large bank, because at the time, there was no single document on writing or reviewing code for security that was broadly available. (This was about four years before Michael Howard and Dave LeBlanc published Writing Secure Code, or Gary McGraw and John Viega published Building Secure Software.)

So I assembled what we knew, and shared it to get feedback and help others. In looking back, the document describes what we can now recognize as an early approach to security development lifecycles, covering design, development, testing and deployment. It even contains a link to the first paper on fuzzing!

Over the past fifteen years, I’ve been involved in software security as a consultant, as the CTO of Reflective, a startup that delivered software security as a service, and as a member of Microsoft’s Security Development Lifecycle team where I focused on improving the way people threat model. I’m now working on usable security and how we integrate it into large-scale software development.

So after 15 years, I wanted to look forward a little at what we’ve learned and deployed, and what the next 15 years might bring. I should be clear that (as always) these are my personal opinions, not those of my employer.

Looking Back

Filling the Buffer for Fun and Profit
I released my guidelines 4 days before Phrack 49 [link to http://phrack.org/issues.html?issue=49 no longer works] came out with a short article called “Smashing The Stack For Fun And Profit.” [link to http://phrack.org/issues.html?issue=49&id=14#article no longer works] Stack smashing wasn’t new. It had been described clearly in 1972 by James P. Anderson in the “Computer Security Technology Planning Study,” and publicly and dramatically demonstrated by the 1988 Morris Worm’s exploitation of fingerd. But Aleph1’s article made the technique accessible and understandable. The last 15 years have been dominated by important bugs that share two characteristics: they are easily demonstrated as “undesired functionality,” and they are relatively easy to fix, since nothing should really depend on them.
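To make the technique concrete, here’s a minimal sketch (in C, with names I’ve invented for illustration) of the kind of code Aleph1’s readers learned to exploit: a fixed-size stack buffer filled from untrusted input with no bounds check. Modern mitigations (stack canaries, non-executable stacks, address randomization) make this exact pattern harder to exploit, but the underlying bug is unchanged.

    /* Illustrative stack-smashing bug; names are invented for this example. */
    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        char buf[16];           /* fixed-size buffer on the stack */
        strcpy(buf, name);      /* no length check: input longer than 16 bytes
                                   overwrites adjacent stack memory, including
                                   the saved return address (the "smash") */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);     /* argv[1] is attacker-controlled input */
        return 0;
    }

Feeding this program an argument longer than the buffer corrupts the stack frame; a carefully crafted argument can redirect execution, which is exactly what the Phrack article walked readers through.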

The vuln and patch cycle
As a side effect of easily demonstrated memory corruption, we became accustomed to a cycle of proof-of-concept, sometimes a media cycle, and usually a vendor response that fixed the issue. Early on, vendors ignored bug reports or threatened vulnerability finders (who sometimes sounded like they were trying to blackmail vendors), and so we developed a culture of full disclosure, where researchers simply made their discoveries available to the public. Some vendors set up processes for accepting security bug reports, with a few now offering money for such vulnerabilities, and we have a range of ways to describe various approaches to disclosure. Along the way, we created the CVE to help us talk about these vulnerabilities.

In some recent work, we discovered that the phrase “CVE-style vulnerability” was a clear descriptor that cut through a lot of discussion about what was meant by “vulnerability.” The need for terms to describe types of disclosure and vulnerabilities is an interesting window into how often we talk about them.

The industrialization of security
One effect of memory corruption vulnerabilities was that it was easy to see that the unspecified functionality was a bug. Those bugs were things that developers would fix. There’s a longstanding, formalist perspective that “A program that has not been specified cannot be incorrect; it can only be surprising.” (“Proving a Computer System Secure”) That “formalist” perspective held us back from fixing a great many security issues. Sometimes the right behavior was hard to specify in advance. Good specifications are always tremendously expensive (although that’s sometimes still cheaper than not having them). When we started calling those things bugs, we started to fix them. And when we started to fix bugs, we got people interested in practical ways to reduce the number of those bugs. We had to organize our approaches, and discover which ones worked. Microsoft started sharing lots of its experience before I joined up, and that’s helped a great many organizations get started doing software security “at scale.”

Another aspect of the industrialization of security is the massive growth of security conferences. There are, again, many types. There are hacker cons, there are vulnerability researcher cons, and there are industry events like RSA. There’s also a rise in academic conferences. All of these (except BlackHat-style conferences) existed in 1996, but their growth has been spectacular.

Looking forward in software security

Memory corruption
The first thing that I expect will change is our focus on memory corruption vulnerabilities. We’re getting better at finding these early in the development process in weakly typed languages, and better at building platforms with randomization built in to make the remainder harder to exploit. We’ll see a resurgence of command injection, design flaws, and a set of things that I’m starting to think of as feature abuse. That includes things like Autorun, JavaScript in PDFs (and heck, maybe JavaScript in web pages), and also things like spam.
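Since command injection is less visceral than a crash, here’s a hedged sketch (again in C, with invented names) of the pattern I mean: untrusted input spliced into a shell command, so that shell metacharacters in the input become commands of their own.

    /* Illustrative command injection; names are invented for this example. */
    #include <stdio.h>
    #include <stdlib.h>

    void print_file(const char *filename) {
        char cmd[256];
        /* Unsafe: the filename is pasted into a shell command line, so an
           input like "notes.txt; rm -rf ~" runs the attacker's command too. */
        snprintf(cmd, sizeof(cmd), "cat %s", filename);
        system(cmd);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            print_file(argv[1]);   /* argv[1] is attacker-controlled */
        return 0;
    }

Nothing here corrupts memory, which is part of why bugs like this survive the move to safer languages and platforms.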

Human factors
Human factors in security will become even more obviously important, as more and more decisions will be required of the person because the computer just doesn’t know. Making good decisions is hard, and most of the people we’ll ask to make decisions are not experts and reasonably prefer to just get their job done. We’re starting to see patterns like the “gold bars” and advice like “NEAT.” I expect we’ll learn a lot about how attacks work and how to build defenses, and coalesce around a set of reasonable expectations of someone using a computer. Those expectations will be slimmer than security experts would prefer, but good science and data will help make reasonable expectations clear.

Embedded systems
As software gets embedded in everything, so will flaws. Embedded systems will come with embedded flaws. The problems will hit not just Apple or Android, but cars, centrifuges, medical devices, and everything with code in it. Which will be a good approximation of everything. One thing we’ve seen is that applying modern vulnerability finding techniques to software released without any security testing is like kicking puppies. They’re just not ready for it. Actually, that’s a little unfair. It’s more like throwing puppies into a tank full of laser-equipped sharks. Most things will not have update mechanisms for a while, and when they do, updates will increasingly be a battleground.

Patch Trouble
Apple already forces you to upgrade to the “latest and greatest,” and to agree to the new EULA, before you get a security update. DRM schemes will block access to content if you haven’t updated. The pressure to accept updates will be intense. Consumer protection issues will start to come up, and things like the Etisalat update for BlackBerry will become more common. These combined updates will affect people’s willingness to accept updates and close windows of software vulnerability.

EULAs
EULA wars will heat up as bad guys get users to click through contracts forbidding them from removing the software. Those bad guys will include actual malware distributors, Middle Eastern telecoms companies, and a lot of organizations that fall into a grey area.

Privacy
The interplay between privacy and security will get a lot more complex and nuanced as our public discourse gets less so. Our software will increasingly be able to extract all sorts of data, but also to act on our behalf in all sorts of ways. Compromised software will scribble racist comments on your Facebook wall, and companies like Social Intelligence will store those comments for you to justify ever after.

Careers
Careers in software security will become increasingly diverse. It’s already possible to specialize in fuzzing, in static or dynamic analysis, in threat modeling, in security testing, training, etc. We’ll see lots more emerge over the next fifteen years.

Things we won’t see

We won’t see substantially better languages make the problem go away. We may move it around, and we may eliminate some of it, but PHP is the new C because it’s easy to write quick and dirty code. We’ll have cool new languages with nifty features, running on top of resilient platforms. Clever attackers will continue to find ways to make things behave unexpectedly.

A lack of interesting controversies.

(Not) looking into the abyss

There are a couple of issues I’m not touching at all. They include cloud, because I don’t know what I want to say, and cyberwar, because I know what I don’t want to say. I expect both to be interesting.

Your thoughts?

So, that’s what I think I’ve seen, and that’s what I think I see coming. Did I miss important stories along the way? Are there important trends that will matter that I missed?

[Editor’s note: Updated to clarify the kicking puppy analogy.]
