CSRB Report on Microsoft
The Cyber Safety Review Board has released its report into an intrusion at Microsoft, and ... it’s a doozy. It opens: “The Board concludes that this intrusion should never have happened. Storm-0558 was able to succeed because of a cascade of security failures at Microsoft.”
With some time to reflect on the findings, I think the report is best characterized as a well-earned rebuke to Microsoft. It’s tempting to use terms like “blistering,” but I think those are unfair. The board has clearly given Microsoft the benefit of the doubt, pushed them to correct visible failings like the blog post, and (reading between the lines) pushed them to reach a conclusion about the break-in. Regardless, the facts are now submitted to a quite partial world, which will draw its own conclusions.
It’s worth emphasizing that the board did not reach these conclusions from “an ivory tower.” They spoke with other leading cloud service providers and recommend specific practices, implemented by those providers, that would have impacted the attack.
This post is split into sections on the board’s statements about Microsoft’s actions, information about other defenders, and comments on the report’s content, including some choices about transparency. Quotes from the report are in “quotes” while [my commentary is bracketed].
Microsoft’s actions
- “Microsoft’s decision not to correct, in a timely manner, its inaccurate public statements about this incident, including a corporate statement that Microsoft believed it had determined the likely root cause of the intrusion when in fact, it still has not; even though Microsoft acknowledged to the Board in November 2023 that its September 6, 2023 blog post about the root cause was inaccurate, it did not update that post until March 12, 2024, as the Board was concluding its review and only after the Board’s repeated questioning about Microsoft’s plans to issue a correction” (page 17)
[Nice of the board to push rather than say "still not corrected," and probably wise as they build their culture and MO.]
- “Microsoft leadership should consider directing internal Microsoft teams to deprioritize feature developments across the company’s cloud infrastructure and product suite until substantial security improvements have been made in order to preclude competition for resources.” (page 19)
[Last time someone said this, it was Bill Gates. 😇]
- [In many ways Microsoft’s internal processes ran well - the response and escalation notes on pages 1 and 2 of the report are welcome examples of the fairness the board is developing.]
- “As a result, Microsoft developed 46 hypotheses to investigate, including some scenarios as wide-ranging as the adversary possessing a theoretical quantum computing capability to break public-key cryptography or an insider who stole the key during its creation. Microsoft then assigned teams for each hypothesis to try to: prove how the theft occurred; prove it could no longer occur in the same way now; and to prove Microsoft would detect it if it happened today. Nine months after the discovery of the intrusion, Microsoft says that its investigation into these hypotheses remains ongoing.”
[Running 46 parallel teams is an interesting approach, with some risks of silos. It’s not something I’ve seen in other incident responses. It both speaks to the seriousness of the response process, and raises a question of “was this the most effective way to go?”]
- [It took until June 26 to launch a SSIRP (pronounced “surp”), which I found very surprising, but it makes sense in light of the June 24 invalidation of the MSA key, and then seeing that the key was still issuing tokens.]
Other defenders
- “The next day, State observed multiple security alerts from a custom rule it had created, known internally as “Big Yellow Taxi,” that analyzes data from a log known as MailItemsAccessed, which tracks access to Microsoft Exchange Online mailboxes. State was able to access the MailItemsAccessed log to set up these particular Big Yellow Taxi alerts because it had purchased Microsoft’s government agency-focused G5 license that includes enhanced logging capabilities through a product called Microsoft Purview Audit (Premium). The MailItemsAccessed log was not accessible without that “premium” service.”
[This sort of very actionable detail about what we get from a control is a crucial value of a report like this; a hypothetical sketch of what such a rule might look like is below.]
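The report doesn’t describe the rule’s logic, so purely to make that value concrete, here is a minimal sketch of what an alert over exported MailItemsAccessed records might look like. Everything in it is my assumption rather than State’s actual rule: the JSONL export format, the field names (Operation, AppId, MailboxOwnerUPN), and the “unexpected application” heuristic.

```python
# Hypothetical sketch only: the report does not disclose the "Big Yellow Taxi"
# logic. This flags MailItemsAccessed events generated by applications we
# don't expect to read mailboxes, over a JSONL export of audit records.
import json
from collections import defaultdict

# Application IDs we expect to access mail (illustrative value only).
EXPECTED_APP_IDS = {"00000002-0000-0ff1-ce00-000000000000"}

def load_audit_records(path):
    """Yield audit records from a file containing one JSON object per line."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def unexpected_mail_access(records):
    """Group MailItemsAccessed events by application IDs we don't recognize."""
    hits = defaultdict(list)
    for rec in records:
        if rec.get("Operation") != "MailItemsAccessed":
            continue
        app_id = rec.get("AppId", "unknown")
        if app_id not in EXPECTED_APP_IDS:
            hits[app_id].append(rec)
    return hits

if __name__ == "__main__":
    alerts = unexpected_mail_access(load_audit_records("audit.jsonl"))
    for app_id, recs in alerts.items():
        mailboxes = sorted({r.get("MailboxOwnerUPN", "?") for r in recs})
        print(f"ALERT: app {app_id} read {len(recs)} items across {mailboxes}")
```

The point isn’t the specific heuristic; it’s that none of this is possible if MailItemsAccessed events aren’t logged in the first place.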
- Recommendation 17: “The Board believes that incorporating all known vulnerabilities across the entire technology stack in CVE’s comprehensive repository would be a public benefit for industry and government customers, as well as security researchers.”
[CVE didn’t track unique vulns in specific cloud products for capacity reasons, and in light of the recent issues at NVD, I think that’s probably still a good choice. But I like that the Board has borrowed the NTSB practice of not assessing the cost or downsides of its recommendations: it simply makes the recommendation, which MITRE and other parts of DHS can evaluate. I hope they institutionalize this practice.]
- “Cloud Service Provider Cybersecurity Practices: Cloud service providers should implement modern control mechanisms and baseline practices, informed by a rigorous threat model, across their digital identity and credential systems to substantially reduce the risk of system-level compromise.” (page iv)
[Happy to help all y'all!]
Report Content
- “Microsoft also said that Storm-0558 had, in the past, used more sophisticated covert networks, but Microsoft believes that a previous disruption of the threat actor’s infrastructure forced it to use a less sophisticated infrastructure for this intrusion that was more readily identifiable once discovered.”
[I’d really like the report to say more about what exactly is meant by ‘less sophisticated’ and ‘more readily identifiable’, even if it gives up a little defender tradecraft.]
- “The flaw was caused by Microsoft’s efforts to address customer requests for a common OpenID Connect (OIDC) endpoint service that listed active signing keys for both enterprise and consumer identity systems.”
[I don’t know enough about OIDC endpoints to understand this, and wish the board had explained it more deeply. In particular, it seems like a common endpoint would make it harder to configure my OpenID instance so that my enterprise is isolated. My suspicion is amplified by Figure 1 saying “The various authentication libraries use the metadata endpoint to determine valid signing keys.” A sketch of how I read the failure mode is below.]
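As I read it, the risk of a common metadata endpoint is that a validator which treats every listed key as valid, without also checking which identity system the key belongs to, will accept a consumer-signed token presented to an enterprise service. Here is a minimal sketch of that failure mode under my assumptions: the key IDs, issuer URLs, and token fields are invented, and signature verification is elided.

```python
# Illustrative only: key IDs, issuers, and token fields are invented, and
# signature checking is elided (assume the attacker holds the consumer key).

# A common metadata endpoint listing signing keys for BOTH identity systems.
MERGED_KEYS = {
    "enterprise-key-1": {"issuer": "https://login.example/enterprise"},
    "consumer-key-7": {"issuer": "https://login.example/consumer"},
}

def naive_validate(token):
    """The failure mode: accept any token whose key appears in the merged list."""
    return token["kid"] in MERGED_KEYS

def scoped_validate(token):
    """Also require that the key belongs to the issuer the token claims."""
    key = MERGED_KEYS.get(token["kid"])
    return key is not None and key["issuer"] == token["iss"]

# A forged token for an enterprise audience, signed with a consumer key.
forged = {
    "kid": "consumer-key-7",
    "iss": "https://login.example/enterprise",
    "aud": "enterprise-exchange",
}

print("naive validator accepts forged token:", naive_validate(forged))    # True
print("scoped validator accepts forged token:", scoped_validate(forged))  # False
```

The scoped check is roughly the isolation I’d want to be able to configure for my own enterprise, which is why a shared endpoint makes me nervous.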
- [The victim notification email in Figure 2 is anodyne and wordy, and we’re well into the second paragraph before we get to ‘government-backed actors.’ I wish the board had commented more, and this should be usability tested.]
Transparency
I have a great deal of respect for the members of the board, big dreams about what the board can achieve, and have worked to keep my critiques private and rare. (If they don’t seem rare, please believe me when I say I delete more emails than I send.) Here I want to comment in public. There are many instances where the board should have been more forthcoming. I understand the board’s practices are still evolving, and that they’re working to thread the needle of not revealing things to attackers while respecting victim sensitivity and cooperation. Still, less anonymization would be a fine norm, and would increase the board’s transparency and thus trust.
- [There are strange privacy choices. For example, the source of this sentence is anonymized in footnote 6: “and additional individuals across 22 organizations.” I’ve written at length about investigatory bodies building credibility through transparency. The example I use in CSRB Senate Hearing is: “It would be fine to say ‘The NSA informed us that a highly capable foreign power did this, and we relied on that information as we made these following assessments.’”]
- “Google’s Threat Analysis Group was able to link at least one entity tied to this threat actor to the group responsible for the 2009 compromise of Google and dozens of other private companies in a campaign known as Operation Aurora, as well as the RSA SecurID incident.(57)”
[Footnote 57 says “anonymized,” yet the start of the sentence identifies that Google TAG did this.]
- [Footnote 155, on “Microsoft stated that it had notified all impacted customers and launched an investigation,” is yet another ‘anonymized’ one, as is 158, which does not attribute “Researchers in the security community scrutinized ... Microsoft’s second blog, and identified gaps and inconsistencies...” Why can’t the board identify the researchers?]
Recommendation 1: The Board should add an internal step of reviewing these anonymizations as the report is readied for release.
Recommendation 2: The Board should create a transparent system for justifying anonymizations and redactions. The one used for FOIA redactions could be a good starting point.
Conclusions
More broadly, the Board has shifted from what we can now see as “warmups” (their log4j and Lapsus$ reports) to single-incident reports. This is a very important element of how the NTSB operates, and I look forward to more.