Amazon's 'Alexa Built-in' Threat Model
Exploring supply chain threat modeling with Alexa

Amazon has released a set of documents, "Updates to Device Security Requirements for Alexa Built-in Products." I want to look at these as a specific way to express a threat model: threat modeling along the supply chain. I'll talk about the proliferation of this kind of model, and what it means for engineering. (More precisely, since I don't have an Amazon developer account, I'm going to look at the blog post, and infer some things about the underlying documentation.)
Alexa Built-in is a relatively new space for Amazon: offering APIs and a platform for consumers to access via someone else's devices. Amazon is clearly thinking about what it means for their Alexa service to be accessed via, say, Sonos speakers, and they call out a set of seven required capabilities:
- Secure Boot
- Secure Key Storage
- Hardware-Based Cryptographic Engines
- Up-to-Date Software and Operating Systems with Long-Term Support (LTS)
- Host Hardening
- Separation of Account Privileges
- Threat Surface Reduction
What they're saying is "we've thought about a set of threats, including someone replacing your boot code or stealing your keys, and we need you to act on those." The list includes a set of answers to 'what can go wrong,' and tells you what you need to do about it. For example, the full line reads "Secure Boot can be used to reduce the risk that a hacker can tamper with and gain a persistent foothold on their device."
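To make that first requirement concrete, here's a minimal sketch of the secure boot idea in Python. It's an illustration of the concept, not Amazon's implementation: real secure boot runs in hardware and firmware, and the key handling here is simplified. It assumes the `cryptography` package.

```python
# Sketch of secure boot: the device's ROM holds a public key, and the
# device refuses to run a boot image whose signature doesn't verify.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At manufacture: the vendor signs the boot image, and the public key
# is fused into the device ROM.
vendor_key = Ed25519PrivateKey.generate()
rom_public_key = vendor_key.public_key()

boot_image = b"...bootloader bytes..."
signature = vendor_key.sign(boot_image)

def boot(image: bytes, sig: bytes) -> None:
    """Run the image only if its signature verifies against the ROM key."""
    try:
        rom_public_key.verify(sig, image)
    except InvalidSignature:
        raise SystemExit("refusing to boot: image fails verification")
    print("booting verified image")

boot(boot_image, signature)            # boots
boot(boot_image + b"evil", signature)  # refuses: tampering detected
```

A tampered image (or one signed with the wrong key) fails verification, which is exactly the "persistent foothold" threat the requirement addresses.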
We can frame Amazon's requirement as a threat model without straining:
- What are you* working on? Alexa Built-in devices.
- What can go wrong? A hacker can tamper with and gain a persistent foothold.
- What are you going to do about it? Secure Boot.
- Did you do a good job? "We require device makers to submit a security assessment report before launch..."
(*I'm swapping the form of the four questions from "we" to "you", which has all sorts of consequences I'm going to ignore for this post.)
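For concreteness, we can write that frame down as a tiny data structure. The field names here are mine (hypothetical), and the values are lifted from the list above:

```python
# The four-question frame as data, filled in with the secure boot example.
from dataclasses import dataclass

@dataclass
class ThreatModelEntry:
    working_on: str         # What are you working on?
    what_can_go_wrong: str  # What can go wrong?
    mitigation: str         # What are you going to do about it?
    validation: str         # Did you do a good job?

secure_boot = ThreatModelEntry(
    working_on="Alexa Built-in devices",
    what_can_go_wrong="A hacker can tamper with and gain a persistent foothold",
    mitigation="Secure Boot",
    validation="Security assessment report submitted before launch",
)
```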
If our analysis is more structured than brainstorming, then there's value in having skilled engineers analyze an idealized version of a system. They can catalog the threats that impact the high-level design. (If we're just brainstorming, then it's hard to know if the analysis is worthwhile.) If they publish their high-level design, then I can compare my high-level design to theirs, and if they match, expect that my design inherits those threats. Better yet, here Amazon has said what they expect to be done about each.
But these lists of what you should do are not unique to Amazon. There's a tremendous amount of guidance for IoT makers, and the lists are not well aligned. For example, let's compare to the UK's "Code of Practice for consumer IoT security." That has 13 guidelines. UK #3 roughly matches Alexa's #4, and at first blush, numbers 4, 6, and 7 correspond directly. Nine UK guidelines and three Alexa guidelines don't obviously line up. So someone making an Alexa device for sale in the UK has to deal with roughly 17 guidelines (the sketch after the list works through the arithmetic). The UK Code of Practice lists:
- No default passwords
- Implement a vulnerability disclosure policy
- Keep software updated (~A4)
- Securely store credentials and security-sensitive data (=A2)
- Communicate securely
- Minimise exposed attack surfaces (=A7)
- Ensure software integrity (=A1)
- Ensure that personal data is protected
- Make systems resilient to outages
- Monitor system telemetry data
- Make it easy for consumers to delete personal data
- Make installation and maintenance of devices easy
- Validate input data
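Here's a small Python sketch of that rough crosswalk and the arithmetic behind "roughly 17." The mapping is my reading of the two lists, not an official one:

```python
# My rough crosswalk of UK Code of Practice guidelines to Alexa
# requirements (not an official mapping).
direct_matches = {4: 2, 6: 7, 7: 1}  # UK #4 = A2, UK #6 = A7, UK #7 = A1
rough_matches = {3: 4}               # UK #3 ~ A4 (updates / LTS)

UK_GUIDELINES = 13
ALEXA_REQUIREMENTS = 7

# Dedupe only the direct matches: 13 + 7 - 3 = 17 distinct requirements.
print(UK_GUIDELINES + ALEXA_REQUIREMENTS - len(direct_matches))  # 17

# Dedupe the rough match too and it drops to 16.
print(UK_GUIDELINES + ALEXA_REQUIREMENTS
      - len(direct_matches) - len(rough_matches))                # 16
```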
Of course, these are not the only two sets of rules. Underwriters Laboratories has the 2900 series for Cybersecurity Assurance, DHS has IoT Security Guidance, and the FDA has draft premarket guidance for cybersecurity, which, importantly, requires updatability, something on neither Amazon's Alexa requirements nor the NCSC list. Other lists, such as AWS's "Ten security golden rules for IoT solutions," are also different.
The differences in "what to do" indicate differences in one or more of: implied architecture, analytic technique, and mitigative action. It would be helpful to both device makers and those creating new regulations if the threat model work product were more concretely revealed. (That is: what do you think these devices look like? What analysis techniques did you use?)
Some of these differences in the lists may reflect power differences: Amazon can say that you must do these things to be Alexa powered. The FDA can say "you must do these things to sell your device," and perhaps the UK has a harder time demanding that devices meet its code of practice.
Our security engineering practices are just not that mature yet, and so some of this diversity may result in better security. Other parts of the diversity just add work. At each device maker, someone has to assess the requirements, find the commonalities, and decide what to do. (There may be a mapping document, but I was unable to find it.)
So with that, let me compare briefly to the BIML Risk Analysis, which I talked about last week. That document shows its work much more deeply, but the application of that thinking is harder to see. There's a real tension between showing your work and giving actionable requirements, and I hope we see more documents that help us see what our choices look like. (Nominations welcome!)