BitLocker, the FBI, and Risk
What can the BitLocker story tell us about risk?
There’s a story out that Microsoft gave the FBI a set of BitLocker encryption keys to unlock suspects’ laptops.
Security is classically described as ensuring confidentiality, integrity, and availability. These are often portrayed as a triangle, with the properties in tension, and this story illustrates why. When you encrypt data, you strengthen confidentiality, and if you lose your key, the data is no longer available.
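To make the tradeoff concrete, here’s a minimal sketch in Python using the third-party cryptography package’s Fernet recipe (standing in for BitLocker, which of course works differently): encrypting the data protects its confidentiality, and losing the key destroys its availability.

```python
# A minimal sketch of the confidentiality/availability tension, using the
# Python "cryptography" package's Fernet recipe (not BitLocker itself).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # confidentiality: only the key holder can read
ciphertext = Fernet(key).encrypt(b"my tax records")

# Normal case: the key is at hand, so the data is available.
assert Fernet(key).decrypt(ciphertext) == b"my tax records"

# Loss case: the key is gone (forgotten PIN, deceased owner, failed hardware).
# A wrong or missing key doesn't degrade gracefully; decryption simply fails,
# and the data is as unavailable as if the disk had been destroyed.
wrong_key = Fernet.generate_key()
try:
    Fernet(wrong_key).decrypt(ciphertext)
except InvalidToken:
    print("ciphertext intact, data unrecoverable: confidentiality won, availability lost")
```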
While I was at Microsoft, I had a lot of conversations about this tradeoff, fearing outcomes like these. I often heard the argument that the risk of data loss, as people forget their keys, pass away without sharing them, or otherwise lose access to their data, was bigger than the risk of warrants. To be frank, I never had a great counter-argument. We had plenty of data on support calls for lost keys, plenty of enterprise customer conversations about key loss and helpdesks, and few lawful access cases, especially as BitLocker rolled out.
The argument that key backup should not have been automatic is stronger, but it runs into usability complexity. (Do regular people understand what encryption is? What a key is? What UI options they clicked a few minutes ago?) All of this is happening as part of a very rapid transition in how Microsoft wants you to think about your computer: access to it via Microsoft Accounts, access to your data via OneDrive, and other elements of changing security boundaries.
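To show what automatic key backup buys and costs, here’s a sketch under assumed names (escrow_db, provision, and laptop-1 are all hypothetical, and this is not Microsoft’s actual recovery-key design): escrowing a copy of the key restores availability after a loss, and the very same lookup is what a warrant can reach.

```python
# A sketch of automatic key escrow under assumed names (escrow_db and
# provision are hypothetical; this is not Microsoft's recovery-key design).
from cryptography.fernet import Fernet

escrow_db = {}                       # stand-in for a helpdesk or cloud key store

def provision(device_id: str) -> bytes:
    """Generate a key and, as the automatic default, escrow a copy."""
    key = Fernet.generate_key()
    escrow_db[device_id] = key       # the availability fix *is* the access path
    return key

key = provision("laptop-1")
ciphertext = Fernet(key).encrypt(b"personal files")

# Owner forgets the key: the escrowed copy recovers the data...
recovered = Fernet(escrow_db["laptop-1"]).decrypt(ciphertext)
assert recovered == b"personal files"

# ...and exactly the same lookup serves a warrant. One mechanism, two risks.
```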
I want to talk about this relative to the word “risk,” which we use pretty naturally here. The first thing to ask is risk to whom? Data loss was Microsoft’s “fault,” and we listened to people who’d lost their digital lives. Everyone had stories of meeting people who were justifiably distraught after a failure of some sort. Many of the stories I heard of people losing access to their PCs were really heart-wrenching, making the risk of data loss both available and salient.
The second thing to note is that, in a functioning liberal democracy, the police investigate criminals, and have to go through a process to meet rules like “No warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” And in a functioning democracy, we expect checks and balances around such uses of government power. It’s easy to discount the rights of suspects or criminals, or to minimize your obligation to protect them as you would any other customer. This ties back to the question of who bears the risk.
The third thing to note is that it was easier, for most Microsoft employees, to use a mental model of the police as the friendly Redmond police, rather than as a secret police force or goons busting down doors.
All of this leads to a mental model where the risk to Microsoft takes precedence over the risk to people trying to keep their data secret on their PCs.
If you’re paying very close attention, you’ll have noticed that I did not use a single number in this discussion of risk. Had I included numbers, especially for the police risks, they would have been magnets for debate.
What I hope you take away from this is, first, that this is not a flaw in BitLocker, but rather an unavoidable security-security tradeoff between availability and confidentiality. Second, this is not about “Microsoft good” or “Microsoft bad,” but about the fact that risk management processes need to be explicit about where the risk falls, that our perceptions often play a role, and that standards that obligate a company to “make risk decisions” may not have the results their authors want if those standards don’t specify risk to whom.
To be clear, I've been gone from Microsoft for a long time, and do not speak for them.