Shostack + Friends Blog Archive

Black Swan-Proof InfoSec?

I came across an interesting take on Nassim Taleb’s “Black Swan” article for the Financial Times via JP Rangaswami‘s blog “Confused of Calcutta“. Friends and folks who know me are probably tired of my rants about what I think of Taleb’s work and what I think he’s gotten wrong. But really, I find his FT article interesting because it gives us “principles” for how to “Black Swan-proof” the financial sector.

So, assuming Taleb’s got something here (and I really liked how JP applied the concepts to open source software development), I figured we might find these principles applicable to our early attempts at an information security management science. To borrow JP’s idea:

Taleb’s Black Swan-Proofing Principles Applied to Information Security:

1. What is fragile should break early while it is still small. Nothing should ever become too big to fail.

Taleb is talking directly about AIG and other financial services firms here. I see applicability to network security, and specifically wonder aloud whether this principle wouldn’t be exemplified by a Jericho-esque approach. That is to say, our perimeters are now large and porous, but current security practice treats them as too big to fail. And when they do fail, they fail spectacularly.

2. No socialisation of losses and privatisation of gains. Whatever may need to be bailed out should be nationalised; whatever does not need a bail-out should be free, small and risk-bearing.

In his interpretation for Open Source software, JP says that “‘losses’ are borne by individual contributors, ‘gains’ are shared by all participants.” I’m not sure that losses in Open Source software aren’t borne by all participants (vulnerabilities are distributed to all participants), but…

What I think Taleb is saying here is that, in the current system, he expects the taxpayer to bear all of the risk and reap none of the reward (at least not as directly as our outlay in taxes is taken from us). In InfoSec, maybe we could suggest that the CISO’s office is at times set up to take all of the consequences of risk decisions made by other lines of business, while “gains” in risk reduction are difficult to reward back to IRM without a really, really advanced risk management program.
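To make that last point a little more concrete, here is a minimal sketch of what even a basic version of such a program might track: an annualized loss expectancy (ALE = annual rate of occurrence × single loss expectancy) per line of business, recorded before and after a control decision, so the “gain” can at least be credited to whoever made (and funded) the decision. The scenarios, figures, and class names below are entirely hypothetical illustrations, not anything from the post.

```python
# Hypothetical sketch only: attributing risk-reduction "gains" back to the
# line of business (LOB) that made and funded a risk decision.
# ALE = ARO (annual rate of occurrence) * SLE (single loss expectancy).
# All names and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    lob: str              # line of business that owns the risk decision
    scenario: str         # loss scenario being tracked
    aro_before: float     # expected occurrences per year before the control
    aro_after: float      # expected occurrences per year after the control
    sle: float            # single loss expectancy, in dollars
    control_cost: float   # annual cost of the control, paid by the LOB

    def net_gain(self) -> float:
        """Annual risk reduction, net of control cost, credited to the LOB."""
        ale_before = self.aro_before * self.sle
        ale_after = self.aro_after * self.sle
        return (ale_before - ale_after) - self.control_cost

register = [
    RiskEntry("Retail banking", "Lost laptop with customer PII", 2.0, 0.5, 250_000, 120_000),
    RiskEntry("Card services", "Web app SQL injection", 0.8, 0.2, 900_000, 200_000),
]

for entry in register:
    print(f"{entry.lob}: {entry.scenario} -> net annual gain ${entry.net_gain():,.0f}")
```

Even a toy ledger like this lets you say which line of business’s decision produced which reduction, which is exactly the attribution that’s hard to do without a mature program.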

3. People who were driving a school bus blindfolded (and crashed it) should never be given a new bus. The economics establishment (universities, regulators, central bankers, government officials, various organisations staffed with economists) lost its legitimacy with the failure of the system.

Many of Taleb’s detractors describe his work as ad hominem, almost seething with rage at those who discredited him in the past. Maybe so, and maybe this clause is a little too close to that for direct rational interpretation. But let’s play with it a little and see if there’s anything we might be able to stretch out of it.

If we strip away the political and personal nature of it, we might say Taleb’s point is that we shouldn’t keep doing things that are proven to fail. We might say that InfoSec keeps attempting things that fail (see Gunnar’s post here), or that we keep enforcing draconian policies that reduce very little risk simply because they are “best practice”.

4. Do not let someone making an “incentive” bonus manage a nuclear plant – or your financial risks. … No incentives without disincentives: capitalism is about rewards and punishments, not just rewards.

We might interpret this clause to suggest that executive management should be rewarded *and penalized* for getting risk tolerance (or the resources the CISO needs to achieve that tolerance) wrong.

5. Counter-balance complexity with simplicity. Complexity from globalisation and highly networked economic life needs to be countered by simplicity in financial products.

I think a lot of people would like to take legacy systems, applications, and networks, light them on fire, toast some marshmallows, and start all over from scratch. We build and inherit complex systems that manage simple but valuable information (valuable both to the company and to the attack community). In a sense, we’re all sitting on CNOs (Collateralized Network Obligations): complex information exchange vehicles, and we don’t really have 100% certainty about what’s in them or when they’ll blow up.

Unfortunately, I don’t have any answers here.  Can’t help.  And guess what?  There are “clouds” on the horizon (sorry, I do realize how tiresome these cloud computing metaphors are getting).

6. Do not give children sticks of dynamite, even if they come with a warning. Complex derivatives need to be banned because nobody understands them and few are rational enough to know it.

OK, so given what I wrote above, we can’t take the network away from the business (no matter how hard some ‘cybercop’ types might try).

But maybe we can apply this principle to suggest that we should make certain that risk is explicitly expressed for IT projects. C&A processes with data-owner sign-off might help the business at least understand why we need to test the web app before it goes into production. In other words, use risk expression to remove the “childlike” naivete from the other LOBs.

There’s also an alternative way to interpret this for network security that might make sense. In talking about applicability to Open Source software, JP discusses experience as a means of generating the desired “maturity”. We might say that people making security decisions should have security experience. Maybe all up-and-coming CIO types should spend time as a CISO first?

7. Only Ponzi schemes should depend on confidence. Governments should never need to “restore confidence”.

What I think Taleb is saying here is that an investor who is certain that all real investments carry risk will know there is no “risk-free” return. In InfoSec, I see this as meaning that regular, rational, and real risk communication (sorry for the alliteration) to executive management is required. Otherwise, they’re investing in a Ponzi scheme that promises to return absurd rates of C, I, & A.

8. Do not give an addict more drugs if he has withdrawal pains. Using leverage to cure the problems of too much leverage is not homeopathy, it is denial.

When I read this, I thought immediately of how some organizations, trying to answer problems ultimately created by the complexity of the network environment, tend to throw complex security architectures at that very problem. It’s the “add another control” addiction we’re all familiar with. How many laptop security agents do you need before you’re “secure” enough?

I once saw a smart organization that constantly asked: “If we spend money on this, what do we decommission?”
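To put some (entirely made-up) numbers behind that addiction, here is a minimal sketch of why stacking agents runs into diminishing returns. It assumes each additional control independently stops a fixed fraction of whatever still gets through, while each one adds its own annual cost; both the catch rate and the dollar figures below are hypothetical.

```python
# Hypothetical illustration of the "add another control" addiction.
# Assumption: each added endpoint agent independently removes 40% of the
# residual expected loss, and each agent costs $50,000/year to license and run.
# All numbers are invented for illustration.

baseline_annual_loss = 1_000_000.0   # expected annual loss with no endpoint agents
catch_rate_per_agent = 0.40          # fraction of residual risk each new agent removes
cost_per_agent = 50_000.0            # annual cost of each additional agent

residual = baseline_annual_loss
print(f"{'agents':>6} {'residual loss':>14} {'marginal benefit':>17} {'marginal cost':>14}")
for agents in range(1, 7):
    new_residual = residual * (1 - catch_rate_per_agent)
    marginal_benefit = residual - new_residual
    print(f"{agents:>6} {new_residual:>14,.0f} {marginal_benefit:>17,.0f} {cost_per_agent:>14,.0f}")
    residual = new_residual
```

With these made-up numbers, around the sixth agent the marginal benefit drops below the marginal cost, which is roughly the point where “what do we decommission?” becomes a better question than “what do we add?”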

9. Citizens should not depend on financial assets or fallible “expert” advice for their retirement. Economic life should be definancialised.

Yeah, I don’t know about this one (both in terms of his recommendation for the financial crisis and in terms of how we might address it). JP, in talking about applying it to open source software, says, “Rely on something real. Code. Code is King. Not slideware.” If I’m reading this correctly, Taleb is suggesting that, for the things we care about most and cannot stand to lose, we throw away risk analysis and just build total security. Unfortunately, we all have budgets.

10. Make an omelette with the broken eggs. Finally, this crisis cannot be fixed with makeshift repairs, no more than a boat with a rotten hull can be fixed with ad-hoc patches. We need to rebuild the hull with new (stronger) materials; we will have to remake the system before it does so itself.

Applied to network security, that suggests we begin to remake networking & computing in a secure manner (love the allusion to ad-hoc patches; remind you of anything?).

One comment on "Black Swan-Proof InfoSec?"

  • Hi, I would just like to comment on the first point

    “1. What is fragile should break early while it is still small. Nothing should ever become too big to fail.”

    which seems quite relevant to IT, and CII in particular. You mentioned network security, but the problem is more general, as Marcus Ranum has recently posted on. CII is now on the risk landscape of the World Economic Forum as a major risk, in terms of money and loss of life (see the 2008 report). Over time the impact of a worst-case failure has grown, and will keep increasing. I am not sure how to measure the fragility of CII, but as Ranum remarks, even a constant failure rate/probability given an increasing impact is bad news. Web 2.0 can’t be a good omen either, with Facebook having 150 million users – something this large cannot afford to fail.

    regards Luke
