Secure Boot and Liability

Secure boot presents questions that should inform the liability conversation

Recently I wrote about some technical aspects of Secure Boot and Secure by Design. In that post, I promised to talk about the market issues: the 100+ signers of the pledge have now made public commitments to be ‘secure by design,’ and they have to live up to that commitment for a couple of reasons, including the FTC and, maybe at some point, the Federal acquisition rules... or more. And by more, I meant lawsuits and liability. I didn’t mean that the Secure Boot-neutering PKfail would turn out to be more prevalent than anyone knew, but Dan Goodin documents that it impacts “medical devices, desktops, laptops, gaming consoles, enterprise servers, ATMs, POS terminals, and some weird places like voting machines.”

This post is far more speculative than the technical one. It’s going to start with today and then get increasingly speculative.

So first, a public commitment is a public commitment, and one that a customer might rely on. Failing to live up to it might be a deceptive trade practice. As I read the SolarWinds complaints that have survived the motion to dismiss, the issue is that the company (and Mr. Brown) said things about their product security, knowing they weren’t true. That’s different from saying we’re going to do things (within a year), but those companies that signed the pledge are now on a clock. Will the government slap them? “Teh government” is a big set of organizations, sometimes actively at odds with itself. (See, for example, the Air Force vs. the EPA. My tax dollars, hard at work!) So it’s not inconceivable that CISA’s encouragement could be at odds with FTC or SEC enforcement goals. There are also the Federal acquisition rules, with their secure software development attestation requirements. The CISA page says “The release of the secure software development attestation form reinforces secure by design principles advanced by CISA...” I don’t know anyone who thinks we’ve heard the final word from them.

There’s a very fundamental choice that companies are making right now, which is how to engage with these changes. Many companies seem not to be engaging at all. Others are exploring how they can update their development practices, with a focus on the demands of emergent regulation. And a third group is looking at how they can shift practices to focus on new definitions of fitness for purpose. This group is betting that shifting requirements will be existential for some of their competitors, and wants to be ready. The readiest example of a company facing existential risk is probably CrowdStrike. Lawsuits are here. It may be expensive to fix the fact that sensor rules are processed in-kernel.
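To make that last architectural point concrete, here’s a minimal, hypothetical sketch of what “don’t parse untrusted content in the kernel” looks like in practice. The file format, field names, and checks below are invented for illustration (this is not CrowdStrike’s format or code): a user-mode helper validates a rules/content update and refuses to pass anything malformed along, so a bad update fails safely instead of faulting at ring 0.

```python
# Hypothetical "content update" validator, run in user space before any
# kernel component ever sees the data. The format is invented for
# illustration: 4-byte magic, 2-byte version, 2-byte entry count, then
# fixed-size 16-byte entries.
import struct
import sys

MAGIC = b"RULE"
ENTRY_SIZE = 16
HEADER = struct.Struct("<4sHH")  # magic, version, entry_count

def validate(blob: bytes) -> list[bytes]:
    """Return the parsed entries, or raise ValueError on any malformation."""
    if len(blob) < HEADER.size:
        raise ValueError("truncated header")
    magic, version, count = HEADER.unpack_from(blob)
    if magic != MAGIC:
        raise ValueError("bad magic")
    if version != 1:
        raise ValueError(f"unsupported version {version}")
    expected = HEADER.size + count * ENTRY_SIZE
    if len(blob) != expected:
        raise ValueError(f"size mismatch: got {len(blob)}, expected {expected}")
    return [blob[HEADER.size + i * ENTRY_SIZE:][:ENTRY_SIZE] for i in range(count)]

if __name__ == "__main__":
    try:
        entries = validate(open(sys.argv[1], "rb").read())
    except (OSError, ValueError) as err:
        # Reject the update; nothing reaches the kernel component.
        sys.exit(f"refusing to load update: {err}")
    print(f"update looks well-formed: {len(entries)} entries")
```

The specific checks matter less than where they run: a validator that can reject (or even crash on) bad input without taking the machine down with it.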

According to The Verge, an upcoming “Endpoint Security Ecosystem Summit” in Redmond will include “government representatives [...] ‘to ensure the highest level of transparency to the community’s collaboration to deliver more secure and reliable technology for all.’” One way to read that is that Microsoft will throw CrowdStrike to the government wolves to save Windows. (The article also mentions that Microsoft tried to fix this during Vista. The amazing security work Jim Allchin led has been unfairly overshadowed by his fondness for UAC and a failure to get the ecosystem to ship drivers.)

If your systems are expensive to fix, being able to say “we started on this several years ago” may be a worthwhile bet to start making now. It may convince regulators or juries of your goodwill.

Shifting back to the overarching theme of “more,” what I really mean is liability. It’s hard to imagine a liability regime that doesn’t impose penalties for using the thing labeled “do not use.” That may not require a new law; it may be something that a local judge decides.
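Which brings us back to PKfail. The leaked test Platform Keys literally carry “DO NOT TRUST” in their certificate subject, so on a Linux machine a rough first-pass check is to look for that string in the PK firmware variable. This is a heuristic sketch, not a substitute for the vendor detection tools; the GUID is the standard EFI global variable GUID, and reading efivars may require root.

```python
# Rough first-pass PKfail check on Linux: read the firmware's Platform Key
# (PK) EFI variable and look for the "DO NOT TRUST" marker that the leaked
# test certificates carry in their subject. A clean result here does not
# prove you're unaffected; it's a heuristic, not a vendor detection tool.
from pathlib import Path

# EFI_GLOBAL_VARIABLE GUID; the first 4 bytes of the efivars file are the
# variable attributes, not certificate data.
PK_VAR = Path("/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c")

def check_pk() -> None:
    if not PK_VAR.exists():
        print("No PK variable found (legacy boot, or Secure Boot in setup mode?)")
        return
    data = PK_VAR.read_bytes()[4:]  # skip the attribute dword
    if b"DO NOT TRUST" in data:
        print("Platform Key contains a test-key marker -- likely PKfail-affected.")
    else:
        print("No obvious test-key marker in the Platform Key.")

if __name__ == "__main__":
    check_pk()
```

It only looks at the PK on the machine you run it on; a clean result says nothing about KEK or db entries, or about the ATMs, medical devices, and voting machines you can’t shell into.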

To put a spiky point on it: if using a component labeled “Do not trust” doesn’t qualify you for liability, what does?