What is Risk (again)?
The thread “What is Risk?” came up in a LinkedIn group. I thought you might enjoy my answer:
Risk != uncertainty (unless you’re a Knightian frequentist, and then you don’t believe in measurement anyway), though if you were to account for risk in an equation, the amount of uncertainty would be a factor.
Risk != “likelihood” (to a statistician or probabilist, anyway). Like uncertainty, likelihood has a specific meaning.
What is risk? It’s a hypothetical construct, something we synthesize in our brains to describe the danger inherent in the various inputs we’re processing around a certain situation. Depending on the situation, it can be very difficult, and in many cases impossible, to create an “immaculate” equation for risk (such as R = T x V x I).
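To make the point concrete, here is a minimal sketch of that kind of multiplicative model, reading R = T x V x I as threat x vulnerability x impact (a common expansion, though the post doesn’t spell one out). All scores are invented for illustration; the point is that two analysts scoring the same situation from different perspectives produce different “risk” numbers from the same immaculate-looking equation.

```python
def risk(threat: float, vulnerability: float, impact: float) -> float:
    """Naive point-estimate risk model: R = T x V x I."""
    return threat * vulnerability * impact

# Two hypothetical analysts scoring the *same* situation.
# The inputs are subjective judgments, so the outputs differ.
analyst_a = risk(threat=0.9, vulnerability=0.3, impact=7.0)
analyst_b = risk(threat=0.4, vulnerability=0.8, impact=9.0)

print(analyst_a)  # 1.89
print(analyst_b)  # 2.88
```

The arithmetic is trivial; the hard part, and the part the equation hides, is where the input numbers come from.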
As an example, in IT, and especially for a large enterprise, we may have a complex adaptive system with characteristics of strong emergence. We see the same thing in medicine and various fields of biology. As such, point probabilities are pretty much impossible, or require so much simulation effort as to be difficult to produce.
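A hedged toy example of the simulation-effort problem: in a system where component failures interact (a crude stand-in for the dependence you get in a complex adaptive system), there is no closed-form point probability to look up, so you estimate by Monte Carlo, and the estimate only stabilizes after many runs. Every parameter here is invented.

```python
import random

def incident_occurs(rng: random.Random) -> bool:
    """Toy coupled system: three controls, where each failure
    stresses (and so raises the failure odds of) the next control."""
    p = 0.05
    for _ in range(3):
        if rng.random() >= p:
            return False          # this control held; no incident
        p = min(1.0, p * 4.0)     # failure amplifies the next failure
    return True                   # all three controls failed

def estimate(n: int, seed: int = 0) -> float:
    """Monte Carlo estimate of incident probability over n trials."""
    rng = random.Random(seed)
    return sum(incident_occurs(rng) for _ in range(n)) / n

# Small samples bounce around; only large ones settle near the truth.
for n in (100, 10_000, 1_000_000):
    print(n, estimate(n))
```

With a handful of trials the “point probability” is mostly noise; pinning it down takes orders of magnitude more simulation, which is the effort cost the paragraph above alludes to.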
Also, because it is a hypothetical construct we create in our brains, risk is subject to the perspective of the observer. It is poly-centric. And because humans have a very difficult time divorcing their own risk tolerance (derived from their own internal, ad hoc assessment) from the information they have and the way they’re required to use that information, the true nature of the inherent danger, the majority of the “risk,” is left unexpressed.
In my opinion, it’s important to dwell on these two pieces of information (risk may apply to a CAS with strong emergence, risk is subjective to the viewer) because they explain why the information security bureaucracies (ISACA, the ISO, NIST, most standards bodies, in fact) do us a huge disservice.
First, what our standards bodies typically do is enable us to justify our perspective by manipulating the inputs to a completely false model (jet engine x peanut butter = shiny!). This is the first significant way we give decision makers false information (or, at best, information so poor it is incapable of creating a state of knowledge).
Second, standards bodies, in the rush to provide value through “certification,” have prematurely standardized processes for “risk management.” This is the second way we are left giving false information to decision makers. Standardization that does not acknowledge the nature of risk (CAS, emergence, poly-centrism) leaves the analyst ignoring critical pieces of the complex system that certainly contribute (sometimes significantly) to a full understanding of the situation.
Bottom line: IT risk is something created without being understood. It is the most important concept in information security, and the most abused. Until we have data, evidence of significant quality (see evidence-based practices), we cannot derive sane models, and we cannot begin to understand the problem space.
As such, “risk” probably encompasses all of the above statements made in this thread, while in truth not resembling them at all (1).
1.) The thread was full of people explaining their “likelihood x impact” models. Variations on a theme, mainly.