• Categories: Risk
  • Author: Graeme Nahkies
  • Published: Mar 3, 2021

Many boards are little more than passive recipients of a risk register. It is as if the mere appearance of the largely management-created risk register in the board meeting pack gives the board assurance that it can tick the risk management ‘box’.

Even if that is not the case for your board, we encourage you to read and discuss with your board the implications of a recent article by Norman Marks in which he asks ‘What is wrong with a typical risk register?’

In answering this question, Marks states unequivocally that periodically managing a list of risks is insufficient, and he makes the following points about the limitations of a typical risk register.

  • A risk register is static—a list of risks, updated occasionally. Managing a list of what could go wrong is not the same as considering how best to achieve objectives, which requires understanding what might happen as part of every decision and needs more than a periodic discussion.
  • The risk ratings in a risk register (e.g. low, medium, high) are practically meaningless. They do not address how an adverse event would affect the objectives of the organisation, and so they lead “…those who review a risk register to note it with interest…[without knowing]… how important the issues are, especially when compared to other matters needing their time and money.”
  • A risk register leads to managing and mitigating individual risks in silos instead of considering all the things that might happen—the big picture—to determine the best course of action and how much to take of which risks.
  • A list of risks focuses only on what might go wrong, ignoring the possibilities of things going well.
  • Risks (and opportunities) are not single points, but a range of potential effects or consequences. Each point in that range has its own likelihood.
  • A risk register talks about the likelihood of a risk event when it should be talking about the likelihood of the effect of that risk.
  • Different individuals are likely to assess risks according to their own perspective, bias and interest. Attempting to boil these different assessments down to a single ‘value’ for likelihood and impact at an enterprise level is not consistent with effective risk management.

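Marks’ point that a risk is a range of potential effects, each with its own likelihood, can be illustrated with a short Monte Carlo sketch. All probabilities and dollar figures below are hypothetical, chosen only to show why a single ‘high/medium/low’ label hides information a board actually needs.

```python
import random

random.seed(42)

# Hypothetical, illustrative numbers only: a project-delay risk whose
# effect on revenue is a range of outcomes, each with its own likelihood,
# rather than a single point that can be rated 'high' or 'low'.
outcomes = [
    (0.60, 0),          # 60%: no delay, no revenue impact
    (0.25, 200_000),    # 25%: minor delay, ~$200k lost revenue
    (0.10, 1_000_000),  # 10%: major delay, ~$1m lost revenue
    (0.05, 5_000_000),  #  5%: project fails, ~$5m lost revenue
]

def sample_effect() -> int:
    """Draw one possible effect from the outcome distribution."""
    r = random.random()
    cumulative = 0.0
    for prob, loss in outcomes:
        cumulative += prob
        if r < cumulative:
            return loss
    return outcomes[-1][1]

trials = [sample_effect() for _ in range(100_000)]
expected_loss = sum(trials) / len(trials)
p_over_1m = sum(1 for t in trials if t >= 1_000_000) / len(trials)

print(f"Expected loss: ${expected_loss:,.0f}")
print(f"P(loss >= $1m): {p_over_1m:.1%}")
```

A rating of ‘high’ collapses this whole distribution into one word; the expected loss and the probability of exceeding a threshold the board cares about are what connect the risk to objectives.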
Marks argues for evaluating a perceived risk (for example, the inadequate definition of the purpose and need for a proposed project) in a way that enables board members to come to a shared assessment that makes business sense—recognising that a single risk is just one among a number of risks and opportunities that need to be considered.

Central to this evaluation would be a series of questions such as, for example:

  • Why is this project needed? How does it relate to enterprise objectives? Why does it matter and how much does it matter? What is important about it?
  • What would happen if the project is, for example, delayed? What if it does not deliver all the required functionality? How likely is it that everything will be perfect? Could a failure affect other enterprise objectives?
  • How should we measure the consequences [of failure]? Are traffic light ratings (high, medium, low) meaningful? Should a dollar figure be used in, for example, estimating additional costs and revenue losses? Would that help us make the right business decision?
  • What is the worst that could happen—and what is its likelihood?

This kind of inquiry would continue until a range of potential effects (or consequences) and their likelihoods are agreed on.

Marks still sees value in identifying and periodically taking stock of risks and opportunities that are so significant they merit a continuing level of attention, but he emphasises that such a list has to show why these risks and opportunities are important. “Saying [a risk] is ‘high’ means nothing. It is imperative to explain how it relates to the achievement of objectives.”

Marks highlights the importance of actionable information that helps leaders understand whether they are likely to achieve what they have set out to achieve. What makes the most sense is reporting the likelihood of achieving objectives, considering all the things that have happened, are happening, and might happen. They can determine whether that likelihood is acceptable and decide what actions are needed, if any.

For governing boards, the implication of Marks’ analysis is that they should:

  • Question the value and use of a risk register at the board level
  • Ensure that attention to what might happen (both for good and harm) is an integral part of both strategic and tactical decision-making throughout the organisation
  • Monitor on a regular basis the likelihood of fulfilling organisational purpose and achieving desired outcomes, considering what has happened, what is happening, and what might happen
  • Identify and pay close attention to risks and opportunities that might be considered ‘mission critical’, assessed in terms of their potential to affect the business and the achievement of its primary objectives, in both the short and longer term.

There is one further observation we would add. Risk registers do not readily help in assessing the aggregate risk of several different risk factors occurring at the same time. The impact of the COVID-19 pandemic, which probably did not even appear on most risk registers, has compounded other risks in ways that would not have been anticipated. It has shown that many organisations were not as future-proofed and resilient as they thought.
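To see why a register of independently rated risks understates this kind of compounding, consider a hypothetical simulation (all probabilities and dollar figures are invented for illustration) in which a rare external shock sharply raises the likelihood of two otherwise ordinary risks.

```python
import random

random.seed(1)

# Hypothetical figures only. Two ordinary risks plus a rare external
# shock (pandemic-like) that, when it occurs, makes the other risks far
# more likely -- the compounding effect that a register of independently
# rated risks does not capture.
N = 200_000

def simulate(correlated: bool) -> float:
    """Return the mean annual loss over N simulated years."""
    total = 0.0
    for _ in range(N):
        shock = random.random() < 0.02            # 2% chance of external shock
        p_supply, p_demand = 0.05, 0.05           # baseline risk likelihoods
        if correlated and shock:
            p_supply, p_demand = 0.50, 0.60       # shock compounds both risks
        loss = 0.0
        if shock:
            loss += 2_000_000                     # direct cost of the shock
        if random.random() < p_supply:
            loss += 1_000_000                     # supply-chain failure
        if random.random() < p_demand:
            loss += 1_500_000                     # demand collapse
        total += loss
    return total / N

independent = simulate(correlated=False)
compounded = simulate(correlated=True)
print(f"Expected loss, risks rated independently: ${independent:,.0f}")
print(f"Expected loss, with compounding shock:    ${compounded:,.0f}")
```

Rated one by one, each risk looks modest; allowing the shock to drive the others produces a materially larger aggregate exposure, which is the gap the closing observation points to.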