RiskIT – Does ISACA Suffer From Dunning-Kruger?
Just to pile on a bit….
You ever hear someone say something, and all of a sudden you realize that you’ve been trying to say exactly that, in exactly that manner, but hadn’t been so succinct or elegant about it? That someone much smarter than you had already thought about the subject a whole lot, and there are actually formal studies and definitions and so forth, and you feel dumb because there’s no way you could have actually googled for the subject in that way, but there it is on Wikipedia in black and white? Happens to me all the time.
I was reading a piece in the NYT this morning on the Dunning-Kruger effect, and had a little bit of synchronicity when I realized that my entire problem with certifying people about risk and controls has to do with exactly this subject. My issue with ISACA and CRISC is this: I know that there’s so much that I don’t know – and indeed, that we, the infosec industry, do not know – so in my mind, if we wanted to rationally, ethically “certify” someone as a domain expert in risk and controls, about the only thing we can do is test that they are aware of their (and our) limitations. To do otherwise seems rather irrational to me.
TAKE ALEX’S “APPARENTLY OK” RISK PROFESSIONAL EXAM:
A dozen years or so ago, I was PM for a firewall product that had to go through certification. The certification involved several different tests, but at the time the key to certification was simply that your firewall would not pass packets when set to a default-deny state. And it cost a lot to certify. This upset me to no end, mainly because I’d rather have spent the $50k certification cost on building a new GUI.
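(For the curious: a check like that boils down to something trivially small. Here’s a minimal, hypothetical sketch in Python – the target host and port are made-up placeholders, not anything from the actual certification – that just verifies no TCP connection gets through while the firewall sits in default-deny.)

    # A minimal, hypothetical sketch (not the actual certification harness):
    # with the firewall in default-deny, no TCP connection should get through.
    import socket

    def passes_traffic(host, port, timeout=3.0):
        """Return True if a TCP connection through the firewall succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # 192.0.2.10 is a placeholder for a host behind the firewall under test.
        # With default-deny in effect, the connection should time out or be refused.
        assert not passes_traffic("192.0.2.10", 80), "firewall passed traffic in default-deny!"
        print("Apparently OK: no packets passed while in default-deny.")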
Knowing my frustration, one of the engineering team printed out Marcus Ranum’s “Apparently OK” firewall certification, taped it to one of our boxes, and congratulated me on getting certified.
With that spirit in mind, and with all apologies to Marcus, let me present Alex Hutton’s Apparently OK Risk Professional Certification Exam! Because frankly, the problem isn’t with “using risk management” – no, I’m still a very big proponent of that. The problem is that risk analysis is steeped in critical thinking, and not identifying uncertainty is, well, less than professional in my opinion.
THE ALEX HUTTON “APPARENTLY OK” RISK PROFESSIONAL EXAM
Dear Prospective Risk Professional,
Congratulations on deciding to enter the exciting field of Information Risk Management! Your journey will be confusing, frustrating, and, if you get off on performing Sisyphean tasks, rewarding.
To achieve a state of “APPARENTLY OK” we ask that you take the following exam. The exam is one question, and you have three (3) minutes to answer it.
For all the risk assessment methodologies inventoried by ENISA (http://rm-inv.enisa.europa.eu/rm_ra_methods.html), and for RiskIT, please tell us how the assessment methodology is fundamentally incapable of delivering the results claimed.
BONUS: Do so in a total of two sentences.
There are multiple right answers.
Good Luck!
Typical risk assessment methodologies assume all threats are known, quantifiable, and relatively static. The dynamic nature of attacks and the vectors they use are underappreciated, which essentially keeps these risk assessment methodologies behind the curve.
So long as your methodology can cope gracefully with a highly dynamic Risk Catalog, this shouldn’t be an issue.
-chandler, AH”AO”RP
Since it’s not multiple choice and grading might be subjective, you may want to consider making the certification a bit more flexible, like “APPARENTLY OK – MEDIUM”.
@Jay,
Instead of “MEDIUM” can I use “AMBER”?
Anyone who can’t deal with the inherent uncertainty of a subjective certification is not qualified to hold it 😉
@Alex, it’s your certification, just be sure to include a descriptive rubric with terms like “generally”, “lots” or “mostly” to avoid confusion.
You can also see the anosognosia phenomenon at work in the “I have nothing to hide” idea, which is well dissected in Daniel Solove’s book of the same name. Put differently, if uncomfortably: now that technology is in a positive feedback loop, our individual and collective ignorance increases, and can only increase at an increasing rate. The machines win, and thus the question is whether you care – whether your knowledge of your mounting ignorance in comparison to the machines is fruitful. Are the folks who say “whatever” going to be the happiest? Is the desire to know, as opposed to the willingness to accept, trending towards being an anomaly (noting that once it is sufficiently anomalous it will be thought a pathology)? Is there a threshold of irreversibility, and is it now in the past?