Reporting on cyber risk is table stakes for information security leaders. In conversations with key stakeholders across organizations, the recurring questions for CISOs and cybersecurity leaders have been:
- What are our top cyber risks?
- Are we effectively managing our cyber risks?
- Are we investing in the right cyber controls?
- How do we evaluate the effectiveness of our information security program?
- Are we spending enough or too much?
Qualitative risk modeling, with its matrices plotting likelihood against impact in loosely defined categories such as “high” or “critical”, suffers from a number of limitations.
First, the thresholds aren’t well defined. Without measurement, the ceiling of a “high” is not easily distinguishable from the floor of a “critical”, so there is no measurable basis for saying whether cyber risks have materially increased or decreased.
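By contrast, a quantified banding scheme makes those thresholds explicit. The sketch below is purely illustrative: the dollar cutoffs and loss figures are assumptions, not recommendations, but they show how the high/critical boundary becomes unambiguous and how period-over-period change becomes measurable.

```python
# Illustrative only: band cutoffs and loss figures are assumed values.

# Explicit dollar floors replace fuzzy ordinal labels.
BANDS = [
    (0,          "low"),
    (100_000,    "medium"),
    (1_000_000,  "high"),
    (10_000_000, "critical"),
]

def band(expected_annual_loss: float) -> str:
    """Return the label of the highest band whose floor the loss meets."""
    label = BANDS[0][1]
    for floor, name in BANDS:
        if expected_annual_loss >= floor:
            label = name
    return label

# Hypothetical quarter-over-quarter readout for one risk scenario.
q1_loss, q2_loss = 2_400_000, 3_100_000
print(band(q1_loss), band(q2_loss))            # high high
print(f"Risk grew by ${q2_loss - q1_loss:,}")  # measurable change, even within a band
```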
Second, risk tolerance rarely appears in the risk matrix readout. Omitting an overlay of risk appetite/tolerance is a big miss: without it, the readout is incomplete and its relevance is unclear. If an organization can sustain a higher level of risk in certain areas, then flagging elevated risk in those areas is informative but unworthy of immediate focus.
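One concrete way to apply that overlay, in the spirit of the loss exceedance curves Hubbard advocates, is to compare the probability of exceeding each loss level against a stated tolerance. The sketch below assumes a set of simulated annual losses and an illustrative tolerance curve; both are placeholders, not real figures.

```python
import random

random.seed(1)

# Placeholder: 10,000 simulated annual-loss outcomes for the portfolio
# (a lognormal spread is an assumed, commonly used shape for loss severity).
losses = [random.lognormvariate(mu=12.5, sigma=1.2) for _ in range(10_000)]

# Assumed risk tolerance: max acceptable chance of exceeding each loss level.
tolerance = {1_000_000: 0.10, 5_000_000: 0.02, 10_000_000: 0.01}

for level, max_prob in tolerance.items():
    p_exceed = sum(loss > level for loss in losses) / len(losses)
    status = "within tolerance" if p_exceed <= max_prob else "EXCEEDS tolerance"
    print(f"P(loss > ${level:,}) = {p_exceed:.1%} vs {max_prob:.0%} -> {status}")
```

A readout like this answers the relevance question directly: a loss level can be “high” in absolute terms yet sit comfortably inside the organization’s stated appetite.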
Third, financial relevance is a cornerstone of informed business decisions in both for-profit and not-for-profit organizations. Without a dollar figure attached to the risk readout, how can an organization know whether its spending is prioritized against the greatest potential loss? Closely related is knowing how much potential financial risk can be mitigated by investing in cybersecurity controls. Qualitative risk reporting leaves both gaps open.
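To make that financial link concrete, a standard quantitative pattern (a generic sketch, not a method from this article) is to compare expected annual loss before and after a control and net out the control’s cost. Every number below is a made-up assumption.

```python
# All figures are illustrative assumptions.
p_event_before = 0.25       # annual probability of the loss event, pre-control
p_event_after  = 0.10       # residual probability after the control
avg_loss       = 4_000_000  # average loss if the event occurs
control_cost   = 250_000    # annual cost of the control

eal_before = p_event_before * avg_loss  # expected annual loss, pre-control
eal_after  = p_event_after * avg_loss   # expected annual loss, post-control
risk_reduced = eal_before - eal_after   # dollars of risk the control buys down

print(f"Expected annual loss: ${eal_before:,.0f} -> ${eal_after:,.0f}")
print(f"Risk reduction: ${risk_reduced:,.0f} for a ${control_cost:,.0f} control")
print(f"Return on control: {(risk_reduced - control_cost) / control_cost:.1f}x")
```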
Migrating to quantitative cyber risk analysis and reporting yields more accurate data and, in turn, more informed decision-making. For many, though, the shift is not an easy one.
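What does that shift look like in practice? A minimal starting point, in the spirit of the simple substitution model Hubbard describes (replace each matrix cell with an event probability and a 90% confidence interval on impact), might look like the sketch below. The scenarios and their parameters are invented for illustration; a simulation like this is also what produces the simulated losses the tolerance overlay above assumed.

```python
import random
from math import log
from statistics import NormalDist

random.seed(7)

# Hypothetical scenarios: (annual event probability, 90% CI on loss in dollars).
scenarios = {
    "ransomware":        (0.15, (500_000, 8_000_000)),
    "cloud misconfig":   (0.30, (50_000, 1_500_000)),
    "insider data leak": (0.05, (200_000, 5_000_000)),
}

Z90 = NormalDist().inv_cdf(0.95)  # ~1.645: half-width of a 90% interval in z units

def simulate_year() -> float:
    """One simulated year: each event either occurs (lognormal loss) or doesn't."""
    total = 0.0
    for prob, (lo, hi) in scenarios.values():
        if random.random() < prob:
            # Fit a lognormal whose 5th/95th percentiles are lo/hi.
            mu = (log(lo) + log(hi)) / 2
            sigma = (log(hi) - log(lo)) / (2 * Z90)
            total += random.lognormvariate(mu, sigma)
    return total

trials = [simulate_year() for _ in range(10_000)]
print(f"Mean simulated annual loss: ${sum(trials) / len(trials):,.0f}")
print(f"P(total loss > $1M): {sum(t > 1_000_000 for t in trials) / len(trials):.1%}")
```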
Interestingly, measuring cyber risk is much like measuring any other risk. Yes, it is a more recent discipline, born of how quickly the technology for storing and transferring data has evolved. But at its core, the elements are quite similar.
There is still a reluctance to measure cyber risk more effectively than the inertia-driven approach of ordinal scales (e.g., scoring a risk at the intersection of a likelihood level and an impact level). Why the allergic reaction to measuring cyber risk quantitatively? When I spoke with Doug Hubbard, author of *How to Measure Anything in Cybersecurity Risk*, he pointed out several reasons.
One of the main reasons people give is that it is simply too complex or too difficult, as if it sat at the same difficulty level as desalinating the ocean. Quantitative analysis is the more rigorous approach, but inherent biases and poor experiences conveying cyber risk measurements have led practitioners to believe the juice is not worth the squeeze. However, according to Hubbard, “Many organizations use these methods right now, even when their backgrounds had nothing to do with quantitative risk analysis.”
Another reason is comfort with the familiar. Not wanting to leave the current methods of qualitative risk analysis, with their fluffy indicators of low, medium, and high, is what leaves people standing in their own way.
In fact, Hubbard points to a model already in play that is not doing the job well enough: intuition. Many risk assertions can be traced to individual predilections or biases, often unconscious and therefore difficult (if not impossible) to extract from the formula. That slants the inputs, which in turn skews the outputs of risk reporting.
To Hubbard’s point, the cybersecurity practitioner’s brain is not vastly different from the mechanical engineer’s or the physician’s: all carry bias and selective recall, to name just two limitations. Yet individuals still rely on their own judgment and experience, however limited, to rate risks. Hubbard illustrates this with an analogy to medicine: physicians base medication recommendations on clinical trials, which draw on far broader samples than any one physician’s own patients. A single practitioner’s sample may simply be too small to reduce uncertainty enough.
For practitioners looking to portray cyber risk more accurately, the benefits of migrating to quantitative risk analysis are pivotal: it reduces uncertainty and produces more business-relevant outputs. Whether by embedding risk tolerance, attaching financial relevance, or retiring the loosely defined labels of high, medium, and low, measuring cyber risk quantitatively is directionally far more correct than the alternatives in use today.