Threat Model Thursday: Chromium Post-Spectre
Today's Threat Model Thursday is a look at "Post-Spectre Threat Model Re-Think," from a dozen or so folks at Google. As always, I'm looking at this from a perspective of what can we learn and to encourage dialogue around what makes for a good threat model.
What are we working on?
From the title, I'd assume Chromium, but there's a fascinating comment in the introduction that this is wider: "any software that both (a) runs (native or interpreted) code from more than one source; and (b) attempts to create a security boundary inside a single address space, is potentially affected." This is important, and in fact why I decided to highlight the model. The intro also states, "we needed to re-think our threat model and defenses for Chrome renderer processes." In the problem statement, they mention other, out-of-scope variants, such as "a renderer reading the browser’s memory."
It would be helpful to me, and probably others, to diagram this, both for the Chrome case (the relationship between browser and renderer) and the broader case of that other software, because the modern web browser is a complex beast, as James Mickens has memorably observed.
What can go wrong?
There is a detailed set of ways that confidentiality breaks current boundaries. Most surprising to me is the claim that clock jitter is not as useful as we'd expect, and even enumerating all the clocks is tricky! (WebKit seems to have a different perspective: that reducing timer precision is meaningful.)
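To make the WebKit-style mitigation concrete: reducing timer precision means clamping timestamps to a coarser granularity, so two events that land in the same bucket look simultaneous to an attacker trying to time a speculative read. Here's a minimal sketch of the idea; the function name and the granularity value are mine for illustration, not Chromium's or WebKit's actual implementation:

```typescript
// Sketch: clamp a high-resolution timestamp to a fixed granularity.
// With a 1 ms granularity, events 0.1 ms apart become indistinguishable,
// which is the property timer-precision reduction is after.
function coarsen(timestampMs: number, granularityMs: number = 1.0): number {
  // Round down to the nearest multiple of the granularity.
  return Math.floor(timestampMs / granularityMs) * granularityMs;
}
```

The threat model's skepticism is that this only raises the bar: an attacker can repeat a measurement many times and average out the lost precision, and there are many implicit clocks beyond the obvious timer APIs.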
There is also an issue of when to pass autofilled data to a renderer, and a goal of "Ensure User Intent When Sending Data To A Renderer." This is good, but usability may depend on normal people understanding that their renderer and browser are different. That's mitigated by taking user gestures as evidence of intent. That seems like a decent balancing of usability and security, but as I watch people using devices, I see a lot of gesturing to explore and discover the rapidly changing meanings of gestures, both within applications and across them.
What are we going to do about it?
As a non-expert in browser design, I'm not going to attempt to restate the mitigations. Each of the defensive approaches is presented with clear discussion of its limitations and the current intent. This is both great to see and hard to follow for those not deep in browser design. That form of writing is probably appropriate, because otherwise the meaning gets lost in verbosity that's not useful to the people most impacted. I would like to see more out-linking as an aid to those trying to follow along.
Did we do a good job?
I'm very glad to see Google sharing this, because we can see inside the planning of the architects, the known limits, and the demands on the supply chain (changes to compilers to reduce gadgets, changes to platforms to increase inter-process isolation), and in the end, "we now assume any active code can read any data in the same address space. The plan going forward must be to keep sensitive cross-origin data out of address spaces that run untrustworthy code." Again, that's more than just browsers. If your defensive approaches, mitigations, or similar sections are this clear, you're doing a good job.