Telang and Wattal on Insecurity and Stock Price
At the Workshop on Information Security Economics, Rahul Telang and Sunil Wattal presented “Impact of Software Vulnerability Announcements on the Market Value of Software Vendors – an Empirical Investigation.” I’m pretty busy, so I’ll point to comments by Ed Moyle, and hefty analysis by Tom Ptacek [link to http://www.sockpuppet.org/tqbf/log/2005/06/1-short-csco-2-publish-ios-remotes-3.html no longer works].
[Private to DM: If I say it's a workship, it's a workship. To the oars! The bench that rows hardest gets tenure!]
Some of Ptacek’s criticisms have a prima facie plausibility (LSASS won’t kill you, but an SUV rollover due to tire failure might). However, analyses like Telang et al.’s have an important place as information security research continues to incorporate the methods used by more established disciplines, like economics, actuarial science, and biostatistics.
I had to leave before this particular paper was presented, so I cannot comment on it directly. However, I think it is safe to say that the technical and economic “camps” have much to learn from one another. Sharp criticisms like Thomas’s have their place, but so too do papers like Telang’s. Personally, I think that releases of PII or financial data traceable to technical screw-ups are much more interesting to look at w.r.t. potential valuation impacts — if one can conclusively demonstrate a linkage there, then the technical stuff will tend to take care of itself, since such releases will be shown to hit the bottom line.
Chris, bearing in mind that I WANT this paper to be correct, I think a complete response to your point might be, “wanting it to be true doesn’t make it true”.
I did see the paper, and I have read way too many event analysis papers. I like Thomas’s blog, but think that he is way off base here.
1) Thomas needs to understand that this methodology is, when well executed, one of the most common approaches to business research. His comments on market volatility and sample size simply aren’t effective criticisms. C’mon, that’s what regressions are designed to handle.
2) That said, event analysis is tricky, because it attempts to discretize the world. Nailing the time of disclosure is hard. Getting a good sample is hard. While they didn’t cast as wide a net as they could have, their collection method was designed to minimize bias.
3) The paper wasn’t trying to look at all security flaws. It was contributing to the ridiculously over-hashed vulnerability disclosure policy debate.
It was a small sample. You can argue over their model of harms (Equation (4) in the paper). You can wonder at the results. You can wonder about the benefits of using fixed effects (a dummy for each firm) when MS is 46% of the data. But if you trust Telang not to lie, it’s a solid paper. It’s a methodology accepted by economists for exploring how the market perceives corporate events.
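For readers coming from the technical “camp,” the mechanics being argued over may be easier to weigh with an example on the table. Below is a minimal event-study sketch on simulated data — this is the generic textbook approach (market model, abnormal returns, cumulative abnormal return), not the paper’s actual model, data, or Equation (4); all numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: daily returns for the market and one firm over an
# estimation window, plus a short event window around a disclosure.
est_days, event_days = 120, 3
market_est = rng.normal(0.0005, 0.01, est_days)
firm_est = 0.001 + 1.2 * market_est + rng.normal(0, 0.01, est_days)

# 1. Fit the market model r_firm = alpha + beta * r_mkt + eps over the
#    estimation window (ordinary least squares via a degree-1 polyfit).
beta, alpha = np.polyfit(market_est, firm_est, 1)

# 2. In the event window, the abnormal return is the firm's actual
#    return minus what the market model predicts. The -0.005 term
#    simulates a small price dip attributed to the event.
market_evt = rng.normal(0.0005, 0.01, event_days)
firm_evt = (0.001 + 1.2 * market_evt
            + rng.normal(0, 0.01, event_days) - 0.005)
abnormal = firm_evt - (alpha + beta * market_evt)

# 3. The cumulative abnormal return (CAR) over the event window is
#    what gets tested for significance across the sample of events --
#    this averaging across many events is what absorbs the day-to-day
#    market volatility Thomas is worried about.
car = abnormal.sum()
print(f"alpha={alpha:.5f} beta={beta:.3f} CAR={car:.4f}")
```

The point of step 3 is why “market volatility” is not by itself a knockdown objection: noise that is uncorrelated with the events averages toward zero across the sample, so a systematic CAR stands out even when any single firm’s daily returns are noisy.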
I’m working on a larger database of events involving security crises in companies, their products, or leaked data, to run a similar model. If there are further criticisms, I am very open to them, but I think understanding market perceptions is very important.
I think it’s important, while bridging disciplines, to provide explanations of what might seem obvious to someone immersed in a particular stream of research.
Perhaps another way to say that is: the fact that a methodology is accepted in field A is one thing, but when talking to experts in field B, your results are more likely to be accepted if you explain the methodology rather than simply assert it.
@Thomas:
I realize that if wishes were horses, I wouldn’t be looking for a new car. Nonetheless, I think that even if this paper *isn’t* true (that is, if its methods are poor, its results not borne out by attempts at replication, etc.), work like this advances the nascent discipline of looking at infosec issues in a more-than-purely-technical way. That is something we all undoubtedly favor, so I guess I am casting myself against type and accentuating the positive.
@Adam:
Not sure whose methodological assertions you’re highlighting here, but if you’re speaking to economists who proffer models w/out spelling out their epistemological assumptions, I drink to your idealism. Pragmatically, however, I would say that finding information about event analysis in a decent academic bookstore is a pretty trivial exercise — this is probably econometrics 201. If, OTOH, you’re saying that economic model-builders could benefit from some of the real-world and CS insights a guy like Thomas could offer, I agree wholeheartedly. This is why I am so in favor of continuing the dialogue.
I followed up here: http://www.sockpuppet.org/tqbf/log/2005/06/ketchup.html. Thanks Allan and Chris. Thtbpbtpbpt, Adam.
Chris,
If an academic is contributing to an interdisciplinary conference like WEIS, then explaining their assumptions will help readers understand their work. Work that is understood gets cited. So while it may be basic, a few paragraphs to a few pages explaining things will help the readers, and thus the authors.
You’d think that academic economists would be eager to optimize their citability.
Killing for Pennies, and is AOL, the “gateway drug”, cause or cure?
News in virtual gaming property continues to madly echo real life, as a man in China was sentenced for killing a friend after the latter sold his sword for a knight’s ransom – 7,200 Yuan (£473). As readers will know,…