Shostack + Friends Blog Archive


Towards an Economic Analysis of Disclosure

In comments [link to http://mt.homeport.org/cgi-bin/mt-leaveone.cgi?__mode=view&blog_id=10&entry_id=854 no longer works] on my post yesterday, “I Am So A Dinosaur”, Ian asks “Has anyone modelled in economics terms why disclosure is better than the alternate(s) ?” I believe that the answer is no, and so will give it a whack. The costs I see associated with a vulnerability discovery and disclosure, in chronological order, are:

  1. The cost borne by a researcher who finds a vulnerability. This may be the time of a student, or it could be a fiscal cost borne by a company like NGS or eEye. Laws such as the DMCA drive these costs up greatly. There is a subset of this cost: good disclosure costs the reporter more. Good disclosure here includes testing on a variety of platforms, figuring out workarounds, and documenting the attack thoroughly.
  2. The set of costs incurred by the software maintainers. Every time a vulnerability is discovered, someone needs to evaluate it, and decide if it is worth the expense of writing, testing, and distributing a patch.
  3. Costs distributed amongst a great many users include learning about the vulnerability (and perhaps the associated patch), deciding if it matters to them, testing the patch, and rating its urgency against the business risks of a change to an operational system. [If users don’t make that investment, or make poor decisions, there’s a cost of being broken into, and of recovering from the problem. (Thanks, Eric!)] Highly skilled end users may want to test a vulnerability in their environment. Full disclosure helps this testing. Good disclosure from the researcher also helps hold down costs here. Since there are lots of users of most such software, savings multiply greatly.
  4. Costs distributed amongst a smaller group of security software authors include understanding the vulnerability, building or getting exploit code, and adding functionality to their products to “handle” the vulnerability, either by scanning for it or by detecting the attack signature. Where these vendors have to write their own exploit code, they will be slower to get their tool to customers. These costs are sometimes lower for vendors of closed-source toolsets, who can encode the information in a binary and thus get it under NDA.
  5. Costs to one or more attackers to learn about the vulnerability; decide if they want to use it in an attack; code or improve the code for the attack; deploy the attack.
  6. Costs to academic researchers are separated out here because academics are less time-sensitive than security vendors. I can invent and test a tool to block buffer overflows with a 30-day-old exploit as well as with a completely fresh one. Academic researchers need high-quality exploit code, but they don’t need it quickly.
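The six-way cost breakdown above can be sketched as a toy model. Everything below — the actor groupings, the dollar figures, and the two policy scenarios — is a hypothetical illustration I've invented to make the structure concrete, not data from any real disclosure:

```python
# Toy model of the per-actor costs of a single vulnerability disclosure.
# All figures are hypothetical illustrations; only the structure matters.
from dataclasses import dataclass

@dataclass
class DisclosureCosts:
    researcher: float        # finding, testing, documenting (item 1)
    maintainer: float        # triage, patch, test, distribute (item 2)
    users: float             # learn, test, patch, recover from break-ins (item 3)
    security_vendors: float  # exploit code, scanner/signature updates (item 4)
    attacker: float          # learn, weaponize, deploy (item 5)
    academics: float         # reproduce with high-quality exploit code (item 6)

    def defender_total(self) -> float:
        """Sum of costs borne by everyone except the attacker."""
        return (self.researcher + self.maintainer + self.users
                + self.security_vendors + self.academics)

# Hypothetical comparison: full disclosure concentrates cost on the
# maintainer and attacker; no disclosure shifts it onto users (via
# unpatched break-ins), security vendors, and academics.
full_disclosure = DisclosureCosts(researcher=10_000, maintainer=20_000,
                                  users=50_000, security_vendors=5_000,
                                  attacker=30_000, academics=1_000)
no_disclosure = DisclosureCosts(researcher=10_000, maintainer=0,
                                users=200_000, security_vendors=15_000,
                                attacker=5_000, academics=20_000)

for name, c in [("full disclosure", full_disclosure),
                ("no disclosure", no_disclosure)]:
    print(f"{name}: defenders pay {c.defender_total():,.0f}, "
          f"attacker pays {c.attacker:,.0f}")
```

The point of the exercise is that "which policy is cheaper" depends entirely on whose column you sum, which is why the balancing question below matters.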

I think that all responsible disclosure policies attempt to balance these costs. Some attackers don’t disclose at all: they invest in finding and using exploits, and hope those exploits have a long shelf life. (I started to say malicious attackers, but both government researchers and criminals fail to disclose.)

Ideally, we’d drive up attacker costs while holding down end-user, security-vendor, and academic costs. (One of my issues with the OIS guidelines is that they give too little to the academic world. They could easily have said ‘responsible disclosure ends 90 days after a patch release with the release of exploit code and test cases.’)
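That goal — raise attacker costs while holding down everyone else's — can be written as a simple scoring function over candidate policies. A hedged sketch, with policy names and numbers invented purely for illustration:

```python
# Sketch: score disclosure policies as (attacker cost imposed) minus
# (weighted defender costs). Policies and all numbers are made up.

POLICIES = {
    # policy: (attacker_cost, user_cost, vendor_cost, academic_cost)
    "no disclosure":          (5,  40, 15, 20),
    "vendor-only":            (15, 25,  5, 15),
    "responsible disclosure": (25, 15,  5, 10),
    "full disclosure":        (30, 20,  3,  1),
}

def score(policy, w_user=1.0, w_vendor=1.0, w_academic=1.0):
    """Higher is better: attacker pays more, defenders pay less.

    The weights let a non-neutral party (say, an enterprise that only
    cares about user costs) tilt the ranking toward its own interests.
    """
    attacker, user, vendor, academic = POLICIES[policy]
    return attacker - (w_user * user + w_vendor * vendor + w_academic * academic)

best = max(POLICIES, key=score)
print(best, score(best))
```

Changing the weights changes the winner, which is the whole argument in miniature: a party that is not neutral will pick the policy that minimizes its own column.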

So, Ian, I hope you’re happy–you’ve distracted me from the stock market question.

[Update: Reader Chris Walsh points to a paper, “Economic Analysis of the Market for Software Vulnerability Disclosure,” which takes these ideas and does the next step of economic analysis, as well as a presentation that some of the authors gave at Econ & Infosec.]

8 comments on "Towards an Economic Analysis of Disclosure"

  • Towards an Economic Analysis of Disclosure

    Adam says an economic analysis of Disclosure (of security bugs) has never been done, and makes a good start at it (perhaps in order to distract me from the stock market losses…). His list of costs are: 1. researcher, 2….

  • adam says:

    I had missed both of those, thanks!
    Kannan et al. use a model with fewer participants:
    “There are four types of participants in this marketplace – the information intermediary, benign identifier, malign identifier and software users.” I think that the other participants I’ve identified are useful, but I like their characterization of the market models. They also take things much further than I have--their work goes through the next steps that I hadn’t thought through in any depth.

  • EKR says:

    It seems to me you’re missing the cost to the users of having the vulnerability used on their machines.

  • Pete says:

    I think it is extremely important to factor in the point of view here. Costs are borne by at least four constituencies – researchers, software developers, users, and attackers – but they are on both sides of the equation, aren’t they? These are opposing forces, as far as I can tell.
    I would take the point of view of the enterprise by comparing the costs of my supplier plus the cost of my defense and response to the cost of the researcher plus the cost of the attacker. And don’t forget that there are (must be) significant benefits, intangible or not, to researchers and attackers, or they wouldn’t pursue it.

  • Adam says:

    “but they are on both sides of the equation”
    That’s exactly my point — each interested party has the option to invest in certain activities with a variety of payoffs. In manipulating the costs and payoffs, we’re seeking some optimum. Rahul Telang and company, in their papers, point out that there are a number of ways we can attempt to measure whether we’re neutral. If we’re not neutral — if we care much more about our own costs — then we seek to minimize our costs at the expense of the other players. There’s a lot of this going on, all along the spectrum.

  • Security Breach Disclosure is required for the consumer to adjust risk assessment

    I was knowingly guilty of asking an innocent question last week on economics of disclosure. My penance will be forthcoming, no doubt, but in the meantime the question rebounds in the RFID breach post of yesterday. Jim posted:…

  • Full disclosure: for and against

    How to address Internet security in an open source world is a simmering topic. Frank Hecker has documented his view of the Mozilla Full Disclosure debate that led to their current security policy. I haven’t read it yet, but will….

Comments are closed.