
Sutter on Safety

What do we need to assess if memory safe languages are 'sufficient'? [AI-generated image: a person typing at a computer, while the words are changed on the monitor of a different person's screen.]

Herb Sutter has a long article, Safety in Context, in which he presents a case that a wholesale move to memory safe languages (MSLs) is too expensive, and that “a 10-50x improvement (90-98% reduction) is sufficient.” I started to write this up for a March appsec roundup, and had too much to say.

He’s explicitly making an economic argument and uses terms like “sufficient,” but he’s not explicit about why that’s sufficient or who benefits from that argument. Herb works for Microsoft (and has a disclaimer that he’s not presenting their opinion). I don’t recall meeting him; he seems like a smart person, and his perspective is very good for Microsoft and its shareholders. I don’t say that to cast aspersions, but because words like “sufficient” are doing a lot of work that needs to be made explicit.

First, we agree on several things, including that “it’s too easy to write security vulns that can automatically be caught,” and that moving to MSLs will not solve all our problems. In fact, a great many of the problems that threat modeling helps us find, such as authentication or business logic flaws, will become more important and more visible as memory safety vulns decline, for whatever reason.

But then he says “In those four buckets, a 10-50x improvement (90-98% reduction) is sufficient.”

He writes “If we can get a 98% improvement and still have fully compatible interop with existing C++, that would be a holy grail worth serious investment.” Let me rewrite that: “holy grail worth serious investment by Microsoft.” Now we fully agree. And here’s the rub at the heart of the last section: Microsoft has not successfully made that investment.

What are we measuring?

“An oft-quoted number is that “70%” of programming language-caused CVEs (reported security vulnerabilities) in C and C++ code are due to language safety problems. That number is true and repeatable, but has been badly misinterpreted in the press: No security expert I know believes that if we could wave a magic wand and instantly transform all the world’s code to MSLs, that we’d have 70% fewer CVEs, data breaches, and ransomware attacks.”
He’s right that the number is oft-quoted; I quoted it in Threats. I do think that magic wand would dramatically reduce CVEs for a few years, but probably not by 70%. And I do have to point out that while we don't have a magic wand, Magic Security Dust™ can solve this.

He goes on to talk about attacker behavior, and I’ll come back to that. Before I do, note that the number of CVEs associated with string management has fallen dramatically over the last 20 years. Decent string copy and concatenation is now easier than not. Type/data confusion, including SQL injection and XSS, seems to be dropping when expressed as a fraction of CVEs. That’s an impression; it may be that the absolute numbers are flat while other vuln types grow, or those vulns may be falling in both absolute and relative terms.
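To make the string point concrete, here’s a minimal C++ sketch (my illustration, not from Sutter’s article) of why the safe path is now also the easy path: the std::string version needs no buffer sizing at all, while the C-style version quietly depends on the programmer getting the length math right.

```cpp
#include <cstring>
#include <iostream>
#include <string>

int main() {
    // Old C-style pattern: a fixed buffer the programmer must size correctly.
    char greeting[16];
    std::strcpy(greeting, "Hello, ");
    std::strcat(greeting, "world");  // a slightly longer name would overflow the buffer

    // Modern default: std::string manages its own memory, so there is no length math.
    std::string safe_greeting = std::string("Hello, ") + "world";

    std::cout << greeting << "\n" << safe_greeting << "\n";
    return 0;
}
```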

Further, we created CVE to track vulns, not intrusions, and a small fraction of CVEs seem to be involved in a large fraction of attacks. But every “CVE-worthy” vuln has a patch associated with it that needs to be tracked and managed. Reducing the number is a worthwhile goal.

Cui bono?

The argument the White House is putting forward on memory safety, and that CISA is making about secure by design, are arguments about economics. The essential argument is that the software industry as a whole, including Microsoft, is shipping products with a huge cost of ownership, and there’s relatively little choice about buying those products. The software companies make huge profits while imposing massive costs on their customers, and there’s dysfunction in the market because claims that software has lower maintenance costs are hard to assess and rarely incorporated into buyer decision models.

To put some numbers on it: last year, Microsoft paid 38.8 billion dollars in dividends. (I appreciate the small fraction that came to me.) According to Microsoft, they “invest about $1 billion in cloud security each year.” (That’s in the “By the numbers” images, image 3, so click > twice.)

Above, I wrote “Reducing the number [of CVE-worthy vulns] is a worthwhile goal.” Now that we’re in the economics part of the post, let me add that the reason the crisis at NVD is such a big deal is that the US government has stopped spending money on a service that helped people with that problem.

Herb writes:

“Any code change to conform to safety rules carries a cost; worse, not all code can be easily updated to conform to safety rules (e.g., it’s old and not understood, it belongs to a third party that won’t allow updates, it belongs to a shared project that won’t take upstream changes and can’t easily be forked).”
These are all true, but only the first is true in the sense that F=ma. We (as societies or as companies) can invest in understanding or replacing that old code. We (as a society) can require that third party to allow updates, or allow a fork of their code to be updated for modern use. The latter two are matters of contract, and we should be very, very cautious about the government changing the rules which underlie contract, recognizing that attempts to do so have winners and losers who will spend massively to influence where a thumb lands on the scale.

A data framework: Cyber Public Health

We agree that CVEs are a poor measure, and it turns out that much of my career has involved asking questions about how we prioritize, including lesson-learning and cyber public health. The question of ‘what is sufficient’ is one that a public health approach could help us answer. At what rate are computers getting sick or dying? Are the rates different for Microsoft’s software versus other providers’? (If they are, that would be an argument that vulns matter more than social engineering, a question a lot of people have strong opinions about, while very few bring data that’s oriented around ‘first gather data on the problems.’)

Which vulnerabilities matter? Are those rising or falling? What investments or interventions change the numbers?

Sadly, today, we don’t have such numbers, but if we did, the question that Herb is bringing up could be answered much more easily.

Postscript: After I wrote this, John Viega went deep into the question of memory safety in C++ from a technical perspective in C isn’t a Hangover; Rust isn’t a Hangover Cure. He argues many things, but one that sticks out is that memory safety is less important than a good core library and dependency managers.

Disclaimer: Microsoft’s investments in security included my salary for a decade, and letting me publish a book on threat modeling. I remain a shareholder, and so some of their dividends have come to me. Image by Midjourney: “A person typing at a computer in a text on screen, but the words are changed on the monitor of a different person's screen.”