Is Quantified Security a Weak Hypothesis?
I’ve recently read “Quantified Security is a Weak Hypothesis,” a paper which Vilhelm Verendel published at NSPW09. We’re discussing it in email, and I think it deserves some broader attention. My initial note was along these lines:
I think the paper’s key hypothesis, “security can be correctly represented with quantitative information,” is overly broad. Can you replace the term security with something more precise? For example, I would take issue with the claim “health can be correctly represented…”, but there are lots of usefully measurable aspects of health. I would also argue that there are lots of useful things which are not correct. (Here I take the view that we can disprove hypotheses and thus come closer to correct, but the best we can do is either “wrong” or “well tested and not easily shown false.”) There’s testable/falsifiable, and there’s operational improvement, and neither requires correctness. That would lead to something like “information confidentiality can be made less bad through quantification,” which I think is nearly semantically equivalent (or can be expanded into a set of equivalent statements), and which makes for a stronger Popperian hypothesis. Going a little further afield, I’d like to offer two alternatives:
“Information security is no different than other disciplines which must be measured to be improved.”
“Information security is different from other operational/engineering disciplines in ways which make quantification irrelevant.”
Anyway, it’s a thought-provoking paper, and worth a look.
I’m so glad you posted this, Adam. I found the paper recently and thought it was a really good meta-analysis of the research that has been done to date on security quantification.
What the author has done, which I’ve never seen anywhere else, is to systematically collect and analyze the most prominent published academic research papers on the topic. He defines a taxonomy to compare and contrast the papers, which is a great contribution in itself. He also identifies their common assumptions and critiques them.
Yes, the author is focusing on research that attempts to quantify security in the large, rather than the more specific and narrow things you mention above. So he is not saying that research fails to justify quantification for every aspect of security, only that it has not done so for the aggregate concept of security.
As someone who has been on a quest for such aggregate security metrics (framed as risk), I share the author’s conclusions about extant research. I take all of his critiques as being very constructive because he points to ways that research can be done better or differently.
Furthermore, he’s careful to say that his meta-analysis does not prove or even support the hypothesis that the quantification of security is hopeless or impossible — only that it hasn’t been accomplished so far:
“It should also be apparent that in this paper, we are clearly not attempting to reject modeling or quantification as a fundamentally good idea. However, the effort of the survey allows one to observe limitations in much of the work on quantification so far.”
It may turn out that someone will prove the impossibility, or infeasibility, of such an effort. So be it. The research community needs this sort of critical analysis to point us in the right directions.
So glad to see you bring up this paper. I was at NSPW and it was one of my favorites. A great piece of work. Vilhelm does an extremely comprehensive job. And it’s just staggering when you see all the work that’s been done and the fact that we still can’t draw a solid conclusion one way or the other. That’s a slim return on effort. And worse, unless something changes, if Vilhelm repeated the exercise in 2019, there’d be many more papers to survey, but the lack of any firm conclusion looks like a pretty safe prediction.
There’s a nice paper on Strong Inference, from Science in 1964 (John Platt).
It looks at why some branches of science advance much faster than others, and what Vilhelm targets is really at the heart of it. When people constantly ask
1. What experiment would falsify this hypothesis? or
2. What hypothesis does this experiment falsify?
things move rapidly.
If we can’t construct an experiment to falsify Quantified Security, then it’ll just meander forever.
How many other areas in security share this failure? Training ourselves to constantly ask those two questions might be a good exercise.
Link to the Science article:
http://pages.cs.wisc.edu/~markhill/science64_strong_inference.pdf
@Cormac Thanks! This is a great paper. I’m going to start using the methods suggested immediately, and do what I can to influence others to use “strong inference”.
I’ve asked a lot of researchers (academic and industry) why they do research the way they do, in ways that lead to “weak hypotheses”.
First, they all acknowledge that their research isn’t as strong as it could be.
But they say, “I had to pick a method that was already accepted in my research community”, or “I had to simplify the model so it is analytically tractable”, or “I leave it to someone else to develop empirical tests”, or “It’s a waste of our time (industry) to test academic theories”, and so on.
Another strong force is the need to attack a narrow problem in order to get results. And anyone who does research on the Big Problems usually does so in a way that is disconnected from empirical reality, not just abstracted. (My favorite example is the Gordon-Loeb model of optimal enterprise spending on security; see the sketch below.)
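For readers who haven’t run into it, here is a minimal sketch of the Gordon-Loeb (2002) setup as I recall it; the notation is mine and the details are simplified, so treat it as illustrative rather than the paper’s exact formulation:

% Gordon-Loeb sketch, simplified; notation is illustrative.
% v      : vulnerability, the probability of a breach with zero security investment
% L      : monetary loss if a breach occurs
% z      : amount invested in security
% S(z,v) : breach probability after investing z, with S(0,v) = v and S decreasing in z
\[
  z^{*} \;=\; \arg\max_{z \ge 0} \; \bigl[\, v - S(z,v) \,\bigr] L \;-\; z
\]
% The headline result: for the classes of S(z,v) analyzed, the optimal investment
% never exceeds 1/e (roughly 37%) of the expected loss vL:
\[
  z^{*} \;\le\; \tfrac{1}{e}\, v L \;\approx\; 0.37\, v L
\]

The elegance of a clean bound like that is exactly what makes the model analytically attractive, and also what makes it hard to connect to any enterprise’s actual spending data.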
Another factor is research funding. We just don’t have enough funding support for the right interdisciplinary teams. I don’t think we need billions, but we do need something more than the few million dollars currently spent worldwide. Otherwise, you don’t get a critical mass of researchers who see it as being in their professional interest to do the “strong inference” work.
Lastly, there is a lack of role models. Once a few brave, smart scientists do “strong inference” work, they can set a standard and an example for others to follow.