Getting the time dimension right
If you are developing or using security metrics, it’s inevitable that you’ll have to deal with the dimension of time. It’s harder than it looks, and I’ve seen many people make mistakes with it and, in doing so, render their overall metrics faulty or worse. The problems often start with our basic concepts and how we use words.
“Data” tells you about the past
“Data” is the output of some observation or measurement process. If your data is about some states of the world, then by definition your data lives in the past. You did your measurements or your experiments, generated your data, and then time passes while you assess it, report it, and act on it. Thus, your data is reporting on history. Only by acts of inference can you connect your data with the present state of the world or the future state.
In the physical sciences and engineering, it is usually safe to assume that the system under study is the same over time — past, present, and future. This is called the ergodic hypothesis. In statistics, the underlying stochastic process is treated as stationary. This makes it possible to extrapolate from the past into the present and future using regression and other techniques.
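To make that contrast concrete, here is a minimal sketch (the monthly failure counts and the linear-trend fit below are invented for illustration, not real measurements) of the kind of extrapolation that stationarity licenses:

    import numpy as np

    # Hypothetical history: monthly failure counts from a process we assume is
    # stationary (the same underlying distribution, month after month).
    months = np.arange(24)
    failures = np.array([5, 7, 6, 5, 8, 6, 7, 5, 6, 7, 6, 5,
                         7, 6, 8, 5, 6, 7, 5, 6, 7, 6, 5, 7])

    # Ordinary least-squares fit: failures ~ a * month + b
    a, b = np.polyfit(months, failures, deg=1)

    # Extrapolating to month 30 is only defensible if the process that generated
    # the past keeps operating unchanged, i.e. the stationarity assumption holds.
    forecast_month_30 = a * 30 + b
    print(f"forecast for month 30: {forecast_month_30:.1f} failures")

The forecast is only as good as the assumption that the same process keeps running, which is exactly the assumption we cannot make in security.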
There are people in the security metrics community who only want to operate on data. They view anything that is not the result of empirical measurement as pure speculation or a dangerously seductive “model”. (See Models are Distracting, and Measurement over Models.) Being an engineer myself, I’m all in favor of empirical data, measurement, and experiments. But I contend that we will never get to measures of “security” or “risk” through empirical data alone. Our systems are non-stationary and non-ergodic.
“Security” is a judgement about the present
If we start with the simple high-level question: “Am I secure?”, it becomes clear that any measurement of security must relate to the present time (or possibly a retrospective view on a previous time, i.e. past perfect tense, or a prospective view on a future time, i.e. “will I be secure?”). I call it a “judgement” because security depends on the threats you are facing. (I play a historically realistic computer game with my son, called Total War, that includes features that allow you to invest in offensive and defensive capabilities. How much to invest and how fast to invest depends on who you are facing. A wooden palisade will be an adequate defense against peasants and spear militia, but hopelessly inadequate against onagers and trebuchets backed by armored cavalry!)
Thus, you can measure anything and everything you want about security, generating tons of data, and in the end you will have to make a judgement: “Am I secure?” — or are my security provisions adequate given the threats we face? Seen this way, your data is really just evidence that is used in this judgement (and inference) process. What I mean by this is that I don’t think you can simply calculate your way from ground-truth data to any overall security metrics. There will always be one or more judgement or inference steps.
Why? Because we must account for events, circumstances, and scenarios that haven’t happened yet, or happen so rarely that we have no relevant data, or are beyond the reach of measurements. (After all, the miscreants often do their best to hide their actions.) On top of this, the security landscape changes rapidly and occasionally dramatically. Our judgement about security must factor in these changes, to the best of our knowledge. Finally, our judgement about “are we secure?” is predicated on our risk tolerance. But what is “risk”?
“Risk” is a cost of the future, brought to the present
This is the economist’s definition of risk, where “cost” here means downside cash flows that are beyond some threshold of expectation or variability. Those costs become “risk” when you can account for them in present dollars using some discounting and insurance method. (This says nothing about the “insurability” of the risk, only about the theoretical possibility of accounting for risk in present dollars by some reasonable method. The “insurance method” might be diversification, hedging, self-insurance, risk pooling, contingent contracts, or traditional insurance.)
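As a purely illustrative sketch (the probability, loss amount, discount rate, and horizon below are made-up numbers, not estimates of anything real), the “bring it to the present” step is just a discounted expected cost:

    # Hypothetical figures, for illustration only.
    p_loss_event = 0.05         # assumed annual probability of a loss event
    loss_if_event = 2_000_000   # assumed downside cash flow if it happens, in dollars
    discount_rate = 0.08        # assumed rate used to discount future cash flows
    years_out = 3               # assumed horizon at which the exposure materializes

    # Expected future cost, discounted back to present dollars.
    expected_loss = p_loss_event * loss_if_event
    risk_in_present_dollars = expected_loss / (1 + discount_rate) ** years_out
    print(f"risk, in present dollars: ${risk_in_present_dollars:,.0f}")

Whatever insurance method is used, the point is only that a future downside can, in principle, be expressed in present dollars.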
This parallels Peter Drucker’s characterization of profit: “Profit is … needed to pay for attainment of the objectives of the business. Profit is a condition of survival. It is the cost of the future. The cost of staying in business.” [emphasis added] Ontologically, “profit” and “risk” are in the same category, which is why it makes sense to measure “risk-adjusted return” and the like.
From the viewpoint of risk, what you have spent in the past is irrelevant (“sunk costs”). All rational decisions are based on future cash flows and options. The only value of the past is if it helps you predict or forecast the future. Thus, you can’t reach a final judgement about security in the present if you don’t also have some useful estimate of risk in the future. If the answer to “Am I secure?” is “Yes”, then the implication is that you can live with the risk associated with this level of security. By “useful”, I mean sufficiently discriminating to inform the judgement — “bigger than a breadbox, smaller than a house”.
This is where information security deviates from reliability engineering. In the latter, the ergodic hypothesis holds and the dynamics are sufficiently “tame” to permit statistical data analysis for inference and forecasting. Even when there are “humans in the loop”, their behavioral tendencies can often be characterized by stable probability distributions. In information security, we are dealing with adaptive, intelligent, strategic players — not only miscreants, but also “ancillary players” like end-users, auditors, supply chain partners, and so on. This makes risk estimation a “wicked problem”. But is it hopeless?
Estimating risk may be hard, but not impossible
Plenty of smart security people contend that quantitative risk estimation is impossible or infeasible in principle. Proving or disproving this assertion would take heavy-duty theoretical analysis (and I may do it some day). But for now consider two extreme situations.
Think of security and risk as a black-box process that generates a continuous stream of cash flows in time (i.e. total spending on security and losses in that time period). At one extreme, the output is a stationary function or stochastic process. This is the realm that Nassim Nicholas Taleb called “Mediocristan”, since the data stream is well-behaved enough that nothing very surprising happens. With enough historical data and enough data analysis, I think we’d all agree that risk estimation is feasible with current methods.
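Here is a toy version of that well-behaved extreme (simulated data with arbitrary parameters, just to show the shape of the calculation):

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Pretend the black box has emitted 20 years of annual security losses,
    # all drawn from one fixed (stationary) distribution.
    annual_losses = rng.lognormal(mean=12.0, sigma=0.4, size=20)

    # With a stationary process, plain sample statistics are a defensible
    # risk estimate for next year.
    est_mean = annual_losses.mean()
    est_std_err = annual_losses.std(ddof=1) / np.sqrt(len(annual_losses))
    print(f"estimated annual loss: ${est_mean:,.0f} "
          f"(+/- ${1.96 * est_std_err:,.0f} at roughly 95% confidence)")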
At the other extreme, the output is generated by a strategic agent (inside the box) whose sole purpose is to screw up our risk estimation process. Let’s call this Descartes’ Demon, after Rene Descartes, who introduced a skeptical scenario called the deceiving demon argument [link to http://anemptybasket.wordpress.com/2008/01/22/descartes-and-the-deceiving-demon-argument/ no longer works] to challenge our belief that an external world exists; in particular, it raises the possibility that some sort of malicious, demonic non-God has “employed all his energies in order to deceive me”. If Descartes’ Demon can maintain a history of the output and also has information about our risk estimation process, he can mimic any output pattern and change those patterns arbitrarily to defeat any estimation process we might apply. (This is more extreme than Taleb’s “Extremistan” in terms of defying estimation or prediction.) In this case, I believe it could be proved that estimation is impossible (or undecidable or infeasible from a computational point of view).
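Here is a toy sketch of why no fixed estimator survives that extreme. The “demon” below is a stand-in I invented, not a model of any real adversary: it sees our prediction rule and simply emits something far from whatever we are about to predict.

    # Our estimator: predict the next value as the historical mean.
    def predict(history):
        return sum(history) / len(history) if history else 0.0

    # Descartes' Demon: sees the same history (and knows our rule) and emits
    # an output far away from whatever we are about to forecast.
    def demon_output(history):
        return predict(history) + 100.0

    history, errors = [], []
    for _ in range(50):
        guess = predict(history)
        actual = demon_output(history)
        errors.append(abs(actual - guess))
        history.append(actual)

    # The error never shrinks, no matter how much "data" we accumulate.
    print(f"average absolute error over 50 rounds: {sum(errors) / len(errors):.1f}")

Accumulating more history never helps, because the generator adapts to the estimator rather than the other way around.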
Some people might argue that information security is exactly in this latter extreme situation, but I don’t think so. The reason is that all the players have much stronger motives and forcing functions than to subvert the risk estimation processes. Bad guys want to make money or cause harm. End users want to avoid hassles and minimize effort and get their job done. Managers want to manage their business while avoiding negative repercussions. All of these factors add some elements of predictability and understandability.
But it may only be possible to factor all of these in through the use of models and simulations that represent our best knowledge, our best estimates, and our best beliefs about how they all relate to each other and the overall results.
The marriage of data, security, and risk = social learning processes
Putting this all together: we need to gather a lot of empirical data to understand relationships, patterns, and dependencies. To measure security, we need to add inference and judgement processes that extend our data into the present, given the threat landscape we believe we are facing. To make a judgement about security and to decide among alternative security postures, we need a useful estimate of risk that tells us how much security is enough. Tying these all together over time requires effective social learning processes, including model validation through experiments and data analysis. Likewise, risk estimation and security judgement processes tell us what data we need to collect and how to analyze it.
Whether you agree with this framework or not, you should make explicit and consistent definitions of the time dimension relative to your metrics.
Great post! I find it encouraging that others feel the same way about risk management that I do. I am shocked, and remain shocked, whenever I hear someone assert that it is neither possible nor worthwhile to endeavor to manage risk. It kills me. Also, I love the Taleb and Descartes references…for that alone this post is righteous 🙂
This is a nice post, and something that needs to be read a couple of times to really get all that I believe you’re trying to say.
But this line…“it raises the possibility that some sort of malicious, demonic non-God has ‘employed all his energies in order to deceive me’”…suggests, by the very nature of the article, that you or we may be deceived right now, even as we read the article.
Interesting thoughts.
Thanks, Mike.
Regarding: “…by the very nature of the article that you or we may be deceived right now, even as we read the article”, many a philosopher has been driven to drink by that very conundrum!
Side comment: Now that I’m an academic 🙂 I might do some analysis on the computational complexity and informational requirements to be successful as “Descartes’ Demon”. It seems clear that no real-world adversary could accomplish those goals for any length of time while also pursuing other goals, all the while operating with some sense of economy. This includes the much-heralded “Advanced Persistent Adversaries” (APA).