DigiNotar Quantitative Analysis ("Black Tulip")

Following the DigiNotar breach, Fox-IT released an analysis [link to http://www.rijksoverheid.nl/bestanden/documenten-en-publicaties/rapporten/2011/09/05/diginotar-public-report-version-1/rapport-fox-it-operation-black-tulip-v1-0.pdf no longer works] and a nifty video showing OCSP requests.

As a result, lots of people are quoting one number: "300,000."

Cem Paya has a good analysis of what the OCSP numbers mean and what biases might be introduced, in "DigiNotar: surveying the damage with OCSP."

Cem writes: "To their credit, FoxIt tried to investigate the extent of the damage by monitoring OCSP logs for users checking on the status of the forged Google certificate. There is a neat YouTube video showing the geographic distribution of locations around the world over time. Unfortunately while this half-baked attempt at forensics makes for great visualization, it presents a very limited picture of impacted users."
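To make that limitation concrete, here is a minimal sketch of the kind of counting such an approach implies: tally the distinct client IPs that asked the OCSP responder about the rogue certificate's serial number. This is my own illustration, not Fox-IT's code; the log layout, field positions, and serial value are all assumptions. Everything the responder never sees (clients behind NAT or proxies, browsers that skip or soft-fail OCSP, cached responses) is the kind of bias Cem's analysis explores.

```python
# Illustrative sketch only -- not Fox-IT's method. The log layout, field
# positions, and serial number below are assumptions for the example.
import sys

ROGUE_SERIAL = "00aabbccddeeff"  # placeholder; substitute the rogue certificate's serial

def count_distinct_clients(log_path: str, serial: str) -> int:
    """Count distinct client IPs that queried the responder about `serial`."""
    clients = set()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            fields = line.split()
            # Assumed layout: timestamp, client_ip, requested_serial, status, ...
            if len(fields) >= 3 and fields[2].lower() == serial.lower():
                clients.add(fields[1])
    return len(clients)

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "ocsp_access.log"
    print(count_distinct_clients(path, ROGUE_SERIAL))
```

A tally like this is neither a floor nor a ceiling on affected users: NAT and proxies collapse many users into one address, DHCP churn and roaming split one user across many, and anyone whose browser never sent the query is invisible.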

DigiNotar and Fox-IT released enough that a dedicated secondary analyst like Cem can see methodological flaws in what they did. What else could we learn if we had more of the raw observations? When I read the report, I noticed the claim "A number of malicious/hacker software tools was found. These vary from commonly used tools such a[s] the famous Cain & Abel tool to tailor made software." This claim mixes analysis and observation. The observation is that there was software with which the analyst was not familiar. It may be that it was a Perl script or other code that can be easily skimmed to see that it was "tailor made." It may be that it was just something re-compiled so that it no longer matches a known hash (the sketch below makes this point concrete). We don't know.

Similarly, the report claims (4.1) "In at least one script, fingerprints from the hacker are left on purpose, which were also found in the Comodo breach investigation of March 2011." Really? On purpose? Perhaps the fingerprints were inserted as a matter of disinformation. Perhaps the Fox-IT analyst called the intruder on the phone, and he owned up to it. We don't know.
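To illustrate the hash point: a routine triage step might compare file hashes against a list of known tools, and a trivially recompiled or repacked copy of a familiar tool fails that comparison and shows up as "unknown" without being tailor made in any meaningful sense. A minimal, hypothetical sketch (the digests are made up):

```python
# Hypothetical sketch: why "hash not in our known-tools list" is an
# observation about our list, not proof of custom tooling.
import hashlib
from pathlib import Path

# Made-up placeholder digests standing in for known tool binaries.
KNOWN_TOOL_HASHES = {
    "0" * 64,
    "1" * 64,
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def triage(path: Path) -> str:
    # A recompiled or repacked copy of a well-known tool hashes differently,
    # so "unknown" here does not imply "tailor made".
    return "known tool" if sha256_of(path) in KNOWN_TOOL_HASHES else "unknown"
```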

I want to be clear that I don't mean to be picking on Fox-IT here. My understanding is that the report they prepped came out incredibly quickly, and kudos to them for that. I've cherry-picked two areas where I can ask for better editing, but I'm very aware that such editing comes at a cost in timeliness.

Cem’s article is very much worth reading, as is the Fox-IT report. But Cem’s analysis helps illustrate a theme of the New School, which is that we need diverse perspectives and analysis brought to bear on each report. The more data we see, the more we can learn from it. No single analysis will tell us everything we might learn. (I made a similar point here.)

I am left with a question for Cem, which I would have added to his post, but couldn't comment there. My question is: having given all that thought to the biases, what do you think is the likely true number (or range) of affected people?