Economic and corporate crises in our recent history have taught us that solid risk management, communicated effectively to the top, works. Yet too often, that insight never reaches the decision makers, writes Jonathan Blackhurst, head of risk management at Capita

Much has already been said across the risk management community about the failure of markets, businesses and economies to remain resilient when uncertainty and crisis strike.

At a very basic level of analysis, when you investigate crises such as the 2007/2008 economic crash, the collapse of Enron/Arthur Andersen, or the product recall scandals of the 2000s, to name but a few, you can see that many stakeholders were simply unaware of major risks, inadequately prepared to deal with them, and failed to consider them in their corporate decisions.

The large-scale media and public scrutiny of the impact of this lack of preparation has historically been most visible among financial institutions – with even the pages of Vanity Fair dominated by news of the financial meltdown and scandal – but post-2008, companies from all sectors have continued to be hit hard by unforeseen events, resulting in detriment to the company and, at worst, collapse.

Some of the shortfall in preparedness, and I stress only some, can be explained by the challenge organisations face in being able to respond to ‘black swan’ events. By their very definition, these types of events must be outliers, beyond the realm of regular expectation, because experience, analysis and existing data cannot point to their occurrence. There is little scope for those wishing to limit the impact of truly black swan events to use any meaningful risk analysis to influence their decision-making upfront.

But if we therefore leave analysis of, and lessons learnt from, such events outside the scope of this discussion, what else can be learnt from previous corporate and economic crises? Can we gain any valuable insight from understanding how risk models failed? Why had the oversight of risk become superficial? What stopped risk management information from being well integrated into a company’s decision-making system? Why did decision makers stop listening?

Not-so-blissful ignorance

Despite the huge losses experienced by organisations at the height of any crisis, the majority still survive. They cancel plans, make rapid and painful trade-offs to ensure immediate stability and, as the tide turns, these same companies resume their long-term strategies even if it takes years. And with the benefit of lessons learnt, these organisations should surely now be better prepared for the next shock, whenever it comes.

But how can this be guaranteed? Have tangible lessons been taken to the heart of the boardroom, or will continued vulnerabilities manifest themselves in the next perfect storm of risk?

One element common to corporate crises is that risk information does not flow freely from the business to the board to provide the insight the decision-making process needs. Research by Airmic points out that, without the right flow of information and without risk information being viewed in the correct decision-making context, things might well be known within an organisation, but not necessarily among the decision-making leaders or their proxies.

Problems will therefore flourish, hidden from leaders’ line of sight, where decision makers have ‘tunnel vision’ or live in a ‘rose-tinted bubble’, unaware of risk factors that would be valuable in influencing decisions. The ‘Roads to Ruin’ research presents some strong examples of how disregarding the decision-making value of clear risk communication can be disastrous:

“In the cases of Independent Insurance, Enron and AIG, there was poor internal communication about problems because of the hectoring and/or bullying behaviour of the leadership. This blocked internal routes to NEDs becoming aware of what was going on.”

“… In the case of the Airbus A380 delays, middle managers kept the problem of non-matching aircraft sections from senior managers for six months. This seems to have resulted, at least in part, from a culture that did not allow the freedom to criticise – essentially a communication problem…”

Being heard through the noise

Why should risk management be important to decision makers anyway? It is generally understood that decision-making (in many settings) is a cognitive process that leads to the selection of a course of action from a set of established alternatives. Published analysis of the business-centric decision-making process reveals a number of stages:

  • Defining the problem
  • Gathering information
  • Identifying and evaluating alternatives
  • Making the actual decision
  • Implementing the decision
  • Following up – considering if it ‘worked’

However, this is rarely an explicit cycle of activities; more often the stages are iterated, ‘hidden’ within other management practices undertaken in a decision-making setting. Priorities for senior management are stretched, with executive decision makers encouraged to embrace innovation and entrepreneurial development. In many cases, year on year, they are required to pursue challenging strategic objectives, and with this expectation comes the implicit obligation to be diligent about the organisation’s decisions and to both source and utilise the right information.

Finding the right information is a constant challenge, made harder when the party typically relied upon to collect, collate and present it is separate from both the decision maker and the business area concerned.

Pivotal to overcoming this challenge is providing the decision maker with alternatives and consequences, supported by corporately agreed preferences, rules and guidance. The problem is that models of what these preferences, rules and guidance should look like are plentiful. Companies don’t have to search far to obtain a lot of advice on how to make good decisions, or guidance on which decision-making disciplines actually make a difference. Then add to that all the other noise that has the potential to strongly influence decision-making judgements.

So how do you distil all this noise – these messages and best practices – into something that is manageable and works for you? And through this distilling process, can we identify if risk is really something that could add value?

The results of a McKinsey survey in 2009 provide some comfort here, categorising all these ‘noisy’ parts of the decision-making information mix into three analytical aims that emphasise the crucial role risk information should play in decision-making analysis:

  • Look ahead: Pay attention to the risks, examined through a detailed model of the decision at hand.
  • Know what you control: Accept that, unlike the external risks that accompany decision making, the analysis, discussion and management of the internal threats lie entirely within the control of the decision maker.

If we overlay the board-oversight requirements of the Companies Act, which emphasise a director’s duty to have regard to the likely consequences of any decision, we complete the circle of why organisations must ensure their risk capabilities deliver quality insight into a decision-making setting.

How did we not see it coming?

Risk management has long been lauded as a tool that provides clear, transparent information for the running of an organisation. It is there in black and white in risk management standards old and new (see the Principles in Clause 3 of ISO 31000:2009 and the principles-focused framework in COSO ERM 2017).

So, if best practice sources agree that risk management is central to corporate success, and if the same sources say risk management is built on the principles of transparency and clarity that are equally vital in decision-making, why is there still this level of variation, debate and uncertainty?

A survey conducted by Harvard Business Review in 2013 points us towards some of the reasons why the statements above still don’t guarantee that risk management is engaged during the decision-making process at the top of an organisation.

Accepting that this data is nearly five years old, it is still worth noting that, at that point in time, 42% of 442 global executives didn’t have confidence in the decisions being made, due both to the lack of access to, and availability of, data to inform decision-making, and to the questionable quality of the information.

What this research challenges businesses to address is the need for up-to-date, honest, accurate and relevant data, available in real time and backed up by clear analysis, at a level that can easily be shared among peers and colleagues so that they can collaborate on decision-making and instil confidence in the decisions being made.

But surely, once again, the outcome of this research is nothing new. Hasn’t this been the claim from risk practitioners for some time?

Another brief look back at the financial crisis of 2008/2009 and economic conditions since shows us that many companies had risk identification processes in place well before the crisis. Often reported as strong and robust frameworks, these have been used across businesses to ensure the risks facing the company are identified, consolidated and prioritised, and to demonstrate transparency and preparedness. Once a year, in their annual reports, organisations list their key risks and look to demonstrate how that analysis was fed through to the top of the organisation from the internal process of qualitative and quantitative assessment of each risk’s probability and impact.
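To make that process concrete, here is a minimal sketch of the classic register-based scoring such frameworks rely on: each risk is rated for probability and impact, the two are multiplied into a single score and the register is ranked. The risk names and 1–5 ratings are hypothetical assumptions for illustration, not figures from any real register.

```python
# A minimal sketch of register-based probability x impact scoring.
# Risk names and 1-5 ratings are hypothetical, for illustration only.

risks = [
    {"name": "Key supplier failure",  "probability": 3, "impact": 4},
    {"name": "Regulatory change",     "probability": 2, "impact": 5},
    {"name": "IT system outage",      "probability": 4, "impact": 3},
    {"name": "Loss of key personnel", "probability": 3, "impact": 2},
]

# Score each risk, then sort so the highest-rated items surface at the top.
for risk in risks:
    risk["rating"] = risk["probability"] * risk["impact"]

for risk in sorted(risks, key=lambda r: r["rating"], reverse=True):
    print(f"{risk['name']:<22} P={risk['probability']} I={risk['impact']} rating={risk['rating']:>2}")
```

Useful as far as it goes, a ranked list like this is exactly what annual reports summarise – and, as argued below, often exactly where the insight stops.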

So why then did these risk processes not raise relevant alarms to management in the lead-up to the financial crisis? Why were these processes not delivering information to support how an organisation understands its business and makes strategic decisions to protect it? I suggest a number of reasons:

  • No big picture: Despite trying to focus on principal risks, the bottom-up nature of assessment misses the company-wide risks, with those reporting not seeing the bigger picture.
  • Wrong focus: These assessments do not look to the business drivers, reasons and strategies that have led to these risks.
  • Risks affect each other: These assessments fail to consider how multiple risks co-exist and multiply (illustrated in the sketch below).
  • Risk registers don’t do enough: These assessments are too focused on risk registers.

As a result, these processes failed to generate the insight the decision makers could act upon.
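The ‘risks affect each other’ point is the easiest to illustrate numerically. The rough sketch below uses entirely assumed probabilities and loss values for two hypothetical risks: assessed in isolation each looks like a modest 10% exposure, but once an assumed common driver is modelled, the chance of both striking in the same year, and the expected annual loss, rise noticeably.

```python
# A rough numerical sketch of how separately assessed risks understate joint
# exposure. All probabilities and loss values are assumptions for illustration.
import random

random.seed(1)

P_A, LOSS_A = 0.10, 10.0   # hypothetical risk A, e.g. a key supplier failing
P_B, LOSS_B = 0.10, 10.0   # hypothetical risk B, e.g. a major production delay
P_B_GIVEN_A = 0.60         # assumed: B becomes far more likely once A has occurred
TRIALS = 100_000

def simulate(correlated):
    """Return (average annual loss, share of years in which both risks hit)."""
    total_loss, double_hits = 0.0, 0
    for _ in range(TRIALS):
        a = random.random() < P_A
        p_b = P_B_GIVEN_A if (correlated and a) else P_B
        b = random.random() < p_b
        total_loss += (LOSS_A if a else 0.0) + (LOSS_B if b else 0.0)
        double_hits += a and b
    return total_loss / TRIALS, double_hits / TRIALS

for label, correlated in (("independent", False), ("common driver", True)):
    loss, both = simulate(correlated)
    print(f"{label:>14}: expected annual loss {loss:.2f}, both risks hit in {both:.1%} of years")
```

Under these assumptions the chance of a double hit rises roughly six-fold, which is precisely the kind of compounding that a risk-by-risk register never surfaces.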