Oct 16, 2015

The Makings of Quantitative Risk Assessment

by admin


This is the third post in our ongoing series on IT risk assessments. In our first post we established critical foundational concepts and considerations. In our previous post we discussed different frameworks and how to best make use of them. In today's post we will delve into the topic of qualitative versus quantitative risk assessment methods. This topic is important because there is much quackery in the industry that claims to be quantitative but is really just bad mathematics in disguise. We will get into some of the dos and don'ts of quant, including how you can start applying quantitative techniques now, regardless of program maturity.

Using Numbers Is Not Inherently Quant

Many tools, models, and methodologies like to claim that they provide a quantitative risk analysis capability, but there is a great deal of misunderstanding and misperception around what is and is not “quantitative analysis.” In fact, it is quite common to find that there isn’t anything truly quantitative happening, despite some rather complex calculations, all because the creators of the method or formula have failed to take into consideration foundational principles of statistical mathematics.

Just because your “assessment” (or, more often, your data collection tool) makes use of numbers does not mean that you are doing quantitative analysis. In fact, depending on the type and nature of the numbers being used, and the subsequent manipulation of those numbers, you might be breaking mathematical laws in addition to not doing quant analysis.

Specifically, an understanding of this topic must start from foundational concepts, such as the difference between categorical data (labels like high, medium, and low), ordinal data (used in ranking and prioritization, as in first, second, third), and real number data (actual measured values or estimates of measurable values). Only the last of these (real number, or numerical, data) generally provides a basis for quantitative analysis. As a general rule, only numerical data can be acted upon using standard arithmetic.

For example, if I ask you to take a list of five attributes (categorical data) and rank them in order of importance from 1 to 5 (ordinal data), then we are most definitely not doing a quantitative analysis. We’re doing a simple ranking exercise. You can take all the ranked scores for each of the attributes and then average them out to help determine which attribute was deemed “most important” and so on. However, that’s about the extent of the arithmetic that you would be allowed to do on categorical and ordinal data.
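To make this concrete, here is a minimal sketch in Python of that ranking exercise, with invented attribute names and respondent rankings. The attribute names are categorical data, the 1-through-5 rankings are ordinal data, and averaging the ranks is about as far as the arithmetic can legitimately go:

```python
# Categorical data: labels with no inherent order or magnitude.
# All attribute names and rankings below are invented for illustration.

# Ordinal data: each of three hypothetical respondents ranks the five
# attributes from 1 (most important) to 5 (least important).
rankings = {
    "confidentiality": [1, 2, 1],
    "integrity":       [2, 1, 3],
    "availability":    [3, 4, 2],
    "resilience":      [5, 3, 5],
    "privacy":         [4, 5, 4],
}

# Averaging ranks to see which attribute was deemed "most important"
# is roughly the extent of the arithmetic these data support.
for name, ranks in sorted(rankings.items(),
                          key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f"{name}: mean rank {sum(ranks) / len(ranks):.2f}")
```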

Now… here is where things can start to get tricky. You would not take this data and add all the values together, or start multiplying them by arbitrary weighting factors. You wouldn't decide "Today, a 1 ('most important') is worth 100 points, whereas a 5 ('least important') is only worth 10 points." You would not then add and multiply and apply logarithmic transformations. You collected ordinal stack-rankings, not real number data, and treating them otherwise violates important mathematical principles.

Sadly, this is exactly what we see happening time and time again in all manner of "risk assessment" programs. We see categorical ratings like Critical, High, Medium, Low, and Very Low that are then converted into arbitrary numerical values and acted upon arithmetically in violation of statistical rules. While it is ok to associate those labels with ordinal values in order to calculate a straight average (because there's an implied ordinal ranking), you cannot arbitrarily assign real number values to these labels and then start applying arithmetic-based quantitative analysis techniques.

This point is often very confusing to people. We have seen many examples of elaborate spreadsheets that collect variously ranked data and then perform some absolutely confounding arithmetic, resulting in a single arbitrary number that not only has no inherent meaning, but also reflects any number of biases (from unstated assumptions) introduced into the calculations, often without explanation.
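To illustrate just how arbitrary the results can be, consider this small Python sketch (the findings and both scoring schemes are invented): the same categorical ratings, run through two equally defensible-looking numeric mappings, produce opposite priority orderings.

```python
# Two equally arbitrary ways to turn categorical labels into numbers.
# Neither mapping is "correct"; both are invented, which is the problem.
scheme_a = {"High": 3, "Medium": 2, "Low": 1}
scheme_b = {"High": 100, "Medium": 20, "Low": 10}

# Hypothetical findings rated on three categorical dimensions.
findings = {
    "finding-1": ("High", "Low", "Low"),
    "finding-2": ("Medium", "Medium", "Medium"),
}

def score(labels, scheme):
    # The kind of "risk score" arithmetic often seen in spreadsheets:
    # multiply the mapped values together. It is mathematically
    # meaningless for categorical data, as the output below shows.
    total = 1
    for label in labels:
        total *= scheme[label]
    return total

for name, labels in findings.items():
    print(name,
          "scheme A:", score(labels, scheme_a),
          "scheme B:", score(labels, scheme_b))
```

Under scheme A, finding-2 scores higher (8 vs. 3) and gets prioritized; under scheme B, finding-1 scores higher (10,000 vs. 8,000). Nothing about the findings changed; only the arbitrary label-to-number mapping did.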

If there is one thing you take away from today's post, let it be this: Just because you are using numbers does not mean that you can perform standard arithmetic on those numbers. It is incredibly important to understand foundational statistics principles and realize that ordinal rankings are essentially a form of categorical data, which means you cannot rightly add, multiply, etc. After all, you would never say Ford + Chevy + Audi = 79, right? Nor would you take it a step further and say "3*Ford + 2*Chevy + 100*Audi = 79." These statements seem absurd, and yet many "risk assessment" methods in practice today do exactly this, except Ford is High, Chevy is Medium, and Audi is Low (or some such). Beware quantitative analysis claims!

Getting a (Real) Start With Quantitative Analysis

Now that you have been suitably warned about bad math masquerading as quantitative analysis, let's look at ways to apply real, legitimate quantitative methods in a manner that will benefit your program, regardless of program maturity.

First and foremost, a great place to start with quantitative analysis is, in fact, during context setting rather than in the risk assessment itself. Specifically, a key hurdle to clear in any risk management program is establishing a reasonable, rational basis for business impact. What's important to the business? What sort of (financial) losses can the business incur without experiencing "material harm" (a meaningful legal term)? Which lines of business, applications, systems, or services provide the most and least revenue, and what is their tolerance for disruption?

Answering these questions can provide a valuable basis for starting with quantitative risk analysis. Note that we haven't even started to delve into the topic of probability estimates at this juncture. Keep it simple. Start establishing actual, ranged value estimates (ranges are always best; see Douglas Hubbard's book How to Measure Anything). Speak with people in the organization who can authoritatively answer these questions. Do not simply rely on your own best guess, and do not stay within the IT department in hopes that techies can magically intuit actual business sensitivities (it turns out we're not very good at estimating business impact).
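In practice, this first step can be as lightweight as recording ranged estimates from the people who actually own the numbers. A minimal sketch, with entirely hypothetical services and dollar figures:

```python
# Hypothetical ranged impact estimates (e.g., 90% confidence intervals)
# gathered from business owners, not guessed at by the IT department.
# All service names and figures are invented for illustration.
impact_estimates = {
    # service: (low, high) estimated loss in dollars per hour of outage
    "order-processing": (50_000, 200_000),
    "customer-portal": (5_000, 40_000),
    "internal-wiki": (0, 500),
}

for service, (low, high) in impact_estimates.items():
    print(f"{service}: ${low:,} to ${high:,} lost per hour of downtime")
```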

Once you have successfully established an approach for collecting basic impact information, then and only then does it make sense to look at maturing practices into more advanced quantitative topics, including probability estimates. However, in moving on to these more advanced stages, we highly recommend having a good grounding in statistics and/or data science. You may find a method like Open FAIR to be of interest (as discussed in the last post), and the associated Open FAIR training (from The Open Group) may be useful. However, you need not adhere to any single method and are encouraged to thoroughly explore statistics and data science to better understand the correct ways to create, test, and refine quantitative models for your organization.
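As a taste of what those more advanced stages can look like, below is a sketch of a simple Monte Carlo simulation in the general spirit of methods like Open FAIR. To be clear, this is not the Open FAIR model itself; the triangular distributions and every parameter here are assumptions invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRIALS = 10_000
annual_losses = []
for _ in range(TRIALS):
    # Loss event frequency: assumed 0 to 4 events per year, most likely 1.
    events = round(random.triangular(0, 4, 1))
    # Loss magnitude per event: assumed $10k to $500k, most likely $50k.
    annual_losses.append(sum(random.triangular(10_000, 500_000, 50_000)
                             for _ in range(events)))

annual_losses.sort()
print(f"median simulated annual loss: ${annual_losses[TRIALS // 2]:,.0f}")
print(f"95th percentile annual loss:  ${annual_losses[int(TRIALS * 0.95)]:,.0f}")
```

Note that the output is itself a range of outcomes, not a single score, which is exactly the property that makes simulations like this more honest than a spreadsheet that multiplies labels together.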

Right-Sizing Risk Assessment Efforts: Do You Even Need Quant?

One question you might be asking at this point is just how much quantitative analysis is worthwhile, and whether it's worth doing at all. We think the answer is definitely yes, up to a point, though perhaps stopping short of full-fledged decision analysis and management (it's still fairly rare to see decision trees in action in the real world, for those who may have encountered them in academia).

The simple fact is that organizations have been muddling through without quantitative analysis all this time, and they seem to be surviving. In fact, this can be generalized further: despite a lack of reasonable security protections and in the face of massive breaches, nobody is saying "Oh, what a pity seeing all those empty storefronts with the red bullseye logo" or "Remember when we could go buy home improvement products from those large, orange-signed warehouse stores?" Despite the losses piling up, businesses are proving to be remarkably resilient, even if just out of sheer luck.

So… to the question at hand… do we even need quantitative analysis? How do we “right-size” our risk assessment activities?

The answer, simply, is this: You're already doing risk assessment, whether or not it's formalized. You're weighing options. You're roughly considering pros and cons. You're trying to balance tradeoffs and hoping that your decisions are good ones that improve value while decreasing loss potential. You are likely considering business impact, albeit in a vaguely qualitative manner. For that matter, we do risk calculations in our heads every day. "Should I get on this airplane?", "That fish smells funny, should I really eat it?", and "Let's not drink the scummy green water that smells of petroleum byproducts" are all examples of the kinds of risk management thoughts that pass through our brains daily. For the most part, we're fairly good at making decisions.

The question, then, is whether we can get better at making decisions, and how best to go about it without falling into "analysis paralysis" (being unable to make a decision), without making decisions worse (for example, by relying on bad assumptions), and without creating processes that are so slow or unwieldy that they are bypassed or too inefficient to be worthwhile.

Yes, this can be done. No, it need not be excessive or inefficient. It may be as simple as establishing some baseline estimates for business impact in key areas, from which you can then drive short conversations along the lines of: "We know that if this application or service is down for an hour during peak business hours, it will cost us X dollars per hour. Thus, we should look at investing in the resilience of this application, up to X dollars, to ensure that we are reasonably protected against downtime." Notice, again, that at no point do we need to go down the rabbit hole of probabilities. Rather, it's simply a better-informed conversation.
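A back-of-the-envelope version of that conversation, with hypothetical figures, might look like this:

```python
# All figures are hypothetical; note that no probability estimates are
# involved, just impact values the business has already agreed on.
loss_per_peak_hour = 120_000      # dollars lost per hour of peak-time outage
outage_scenario_hours = 4         # a plausible "bad day" outage to discuss
resilience_investment = 300_000   # e.g., cost of a failover environment

scenario_cost = loss_per_peak_hour * outage_scenario_hours
print(f"Cost of a {outage_scenario_hours}-hour peak outage: ${scenario_cost:,}")
print(f"Proposed resilience investment: ${resilience_investment:,}")
# If a single bad outage ($480,000 here) costs more than the investment,
# the spend can be justified in plain business terms.
```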

As we become comfortable with introducing basic quantitative (real number) values into a conversation in order to drive more rational decision-making, then and only then should we look at formalizing processes and discussions, and only then should we start getting more elaborate in our calculations, likely leveraging tools to speed data collection and computation (including various statistical models and methods). Until that point, begin with what you can, where you can. Slowly convert unfounded "belief state" assertions into fact-based ones, and then iterate and evolve from there.


In our upcoming fourth and final post in the series, we will conclude by looking at how to leverage platforms to improve risk management programs. We will take a look at common ad hoc practices (Excel! SharePoint!), evaluate pros and cons of using a platform, and end with a discussion of how leveraging platforms can lead to improved communication and visibility into risk states.
