There is a feeling among risk practitioners that theoretical risk management has strayed from our intuition of the world in which we manage risk daily.


Risk management and measuring problems

Only part of the story

Chaos and risk


Risk management

Risk leadership

The Butterfly Effect

Historically, risk management has developed from the numerical disciplines, dominated by a preoccupation with statistics (insurance, accountancy, engineering, and so on). This has led to a bias towards the numerical in the world of risk management.

It comes as no surprise if we look at the historical roots of this newly emerging discipline. Risk management as a science really took off in the 20th century. It still tended to be dominated, however, by the worlds of mathematics and engineering.

In 1921, Frank Knight, in Risk, Uncertainty and Profit, distinguished between three types of probability, which he termed “a priori probability”, “statistical probability” and “estimates”. The standard example of the first type is the odds of rolling any number on a die. The probability of occurrence is known specifically: if there are n mutually exclusive and exhaustive events and they are equally likely, the probability of a given event occurring is 1/n; for a six-sided die, n=6, and the probability of throwing any single number becomes 1/6.

Statistical probability identifies probability with relative frequency over a long series of events or the proportion of an event in a large population. In this case, risk practitioners need to have observed enough relevant data to make forward predictions. When there is no valid basis for classifying instances, however, only estimates can be made. In this final case, the use of statistical analysis would be meaningless.
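Knight's first two categories can be illustrated in a few lines of Python (a sketch for illustration; the seed and roll count are arbitrary choices, not from the article):

```python
import random

# A priori probability: with n mutually exclusive, exhaustive and
# equally likely outcomes, P(any single outcome) = 1/n.
n = 6
a_priori = 1 / n

# Statistical probability: the relative frequency over a long series
# of events should converge towards the a priori value.
random.seed(42)
rolls = 100_000
sixes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 6)
relative_frequency = sixes / rolls

print(f"a priori:           {a_priori:.4f}")
print(f"relative frequency: {relative_frequency:.4f}")
```

For Knight's third category, "estimates", no such convergence is available: there is no population of comparable past events to count.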

Most risk management practised today focuses predominantly on the first two types of probability, namely either that the outcomes are known definitively, or that there is an underlying number or ‘truth’ that can be found merely by further data analysis and interpolation.

This type of uncertainty is termed epistemic. It is due to a lack of knowledge about the behaviour of the system. The epistemic uncertainty can, in principle, be eliminated with sufficient study and, thus, expert judgments may be useful in its reduction.

Alongside the mathematical development in the 1950s, a new type of scientific management was emerging: project management. This consisted of the development of formal tools and techniques to help manage large complex projects that were considered uncertain or risky. It was dominated by the construction and engineering industries, with companies such as Du Pont developing critical path analysis and RAND Corp developing programme evaluation and review technique techniques.
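Critical path analysis, at its core, is a longest-path calculation over a task network. A minimal sketch (the task network and durations below are invented for illustration, not drawn from the article):

```python
# Each task maps to (duration, list of predecessor tasks).
tasks = {
    "design":  (5, []),
    "procure": (10, ["design"]),
    "build":   (8, ["design"]),
    "install": (4, ["procure", "build"]),
    "test":    (3, ["install"]),
}

finish = {}  # memoised earliest finish times

def earliest_finish(name: str) -> int:
    # Earliest finish = own duration + latest predecessor finish.
    if name not in finish:
        duration, preds = tasks[name]
        finish[name] = duration + max(
            (earliest_finish(p) for p in preds), default=0
        )
    return finish[name]

# The critical path length is the largest earliest-finish time.
project_duration = max(earliest_finish(t) for t in tasks)
print(f"minimum project duration: {project_duration}")  # 22
```

Here "procure" (finishing at 15) rather than "build" (finishing at 13) drives the install date, so design, procure, install and test form the critical path.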

Following on the heels of these early project management techniques, institutions began to be formed in the 1970s as repositories for these developing methodologies.

In 1969, the American Project Management Institute (PMI) was founded. By 2009, the organisation had more than 420,000 members, with 250 chapters in more than 171 countries. It was followed in 1975 by the UK Association of Project Managers (renamed the Association for Project Management in 1999) with its own set of methodologies.

To explicitly capture and codify the processes by which they believed projects should be managed, they developed qualifications and guidelines to support them. But while the worlds of physics, mathematics, economics and science have moved beyond Newtonian methods to a more behavioural understanding (the so-called new sciences, led by eminent scholars such as Albert Einstein, Edward Lorenz and Richard Feynman), project and risk management appears largely to have remained stuck in the principles of the 1950s.

Risk management and measuring problems

The general perception among most project and risk managers that the future can somehow be controlled is one of the most ill-conceived notions in risk management. At least two advances have been made in the right direction, however. First, we now have a better understanding of the likelihood of unpleasant surprises; second, and more importantly, we are learning how to recognise their occurrence early on and, subsequently, to manage the consequences when they do occur.

The biggest problem facing us is how to measure all these risks in terms of their potential likelihood, their possible consequences, their correlation and the public’s perception of them. Most organisations measure different risks using different tools.

They use engineering estimates for property exposures, leading to maximum foreseeable loss and probable maximum loss. Actuarial projections are employed for expected loss levels where sufficient loss data is available. Scenario analyses and Monte Carlo simulations are used when data is thin, especially to answer ‘how much should I apply?’ questions.
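A Monte Carlo approach of the kind mentioned above can be sketched in a few lines: simulate many plausible loss years from assumed frequency and severity distributions, then read the answer off the resulting distribution. All parameters below (site count, event probability, lognormal severity) are illustrative assumptions, not real data:

```python
import random

random.seed(1)

def simulate_annual_loss() -> float:
    # Frequency: assume each of 10 insured sites has a 5% chance of a
    # loss event in the year (hypothetical parameters).
    events = sum(1 for _ in range(10) if random.random() < 0.05)
    # Severity: assume each event's cost is lognormally distributed.
    return sum(random.lognormvariate(12.0, 1.0) for _ in range(events))

trials = 20_000
losses = sorted(simulate_annual_loss() for _ in range(trials))

expected_loss = sum(losses) / trials
pml_95 = losses[int(0.95 * trials)]  # 95th-percentile annual loss

print(f"expected annual loss: {expected_loss:,.0f}")
print(f"95th percentile:      {pml_95:,.0f}")
```

The 95th-percentile figure, not the average, is what answers a ‘how much cover should I buy?’ question; the gap between the two is a direct measure of how skewed the assumed loss distribution is.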

Probabilistic and quantitative risk assessments are used for toxicity estimates for drugs and chemicals, and to support public policy decisions.

For political risks, managers rely on qualitative analyses by experts. When it comes to financial risks (credit, currency, interest rate and market), we are inundated with Greek letters (betas, thetas, and so on), and complex econometric models that are comprehensible only to the trained and initiated. The quantitative tools are often too abstract for laymen, whereas the qualitative tools lack mathematical rigour.

Organisations need a combination of both types of tool, so they can deliver sensible and practical assessments of their risks to their stakeholders. Finally, it is important to remember that the results of quantitative risk assessment should be checked continuously against one’s own intuition about what constitutes reasonable qualitative behaviour.

When such a check reveals disagreement, the following possibilities must be considered:

• A mistake has been made in the formal mathematical development

• The starting assumptions are incorrect and/or constitute too drastic oversimplification

• One’s own intuition about the field is inadequately developed

• A penetrating new principle has been discovered.

Only part of the story

One of the first areas to be investigated is whether the current single classification of projects is a correct assumption. The general view at present appears to treat them as linear, deterministic predictable systems, where a complex system or problem can be reduced into simple forms for the purpose of analysis. It is then believed that the analysis of those individual parts will give an accurate insight into the working of the entire system.

The strongly held feeling is that science will explain everything. The use of Gantt charts, with their critical paths, and quantitative risk models, with their corresponding risk correlations, would support this view. This type of problem, which can be termed ‘tame’, appears to be only part of the story when it comes to defining our projects, however.

Tame problems are those that have straightforward, simple linear causal relationships and can be solved by analytical methods, sometimes called the ‘cascade’ or ‘waterfall’ method. Here, lessons can be learnt from past events and behaviours and applied to future problems, so that best practices and procedures can be identified.

In contrast, ‘messes’ have high levels of system complexity, and are clusters of interrelated or interdependent problems. The elements of the system are normally simple; the complexity lies in the nature of the interaction of its elements. Their principal characteristic is that they cannot be solved in isolation, but need to be considered holistically. The solutions lie in the realm of systems thinking.

Project management has introduced the concepts of programme and portfolio management to attempt to deal with this type of complexity and address the issues of interdependencies. Using strategies for dealing with messes is fine, as long as most of us share an overriding social theory or social ethic; if we don’t, we face ‘wickedness’.

Wicked problems are ‘divergent’, as opposed to ‘convergent’ problems. Wicked problems are characterised by high levels of behavioural complexity. What confuses real decision-making is that behavioural and dynamic complexities co-exist and interact in what we call wicked messes.

Dynamic complexity requires high-level conceptual and systems thinking skills; behavioural complexity requires high levels of relationship and facilitative skills. The fact that problems cannot be solved in isolation from one another makes it even more difficult to deal with people’s differing assumptions and values.

People who think differently must learn about and create a common reality, one that none of them initially understands adequately. The main thrust to the resolution of these types of problems is stakeholder participation and ‘satisficing’.

Many risk planning and forecasting exercises are still undertaken as if they were tame problems, assuming that the variables on which they are based are few, fully understood and controllable. But uncertainties in the economy, politics and society have become so great as to render this kind of risk management, which many projects and organisations still practise, counter-productive, if not futile.

Chaos and risk

At best, projects should be considered as deterministic chaotic systems rather than tame problems. ‘Chaos’ here is not the everyday English sense of absolute randomness and anarchy (the Oxford English Dictionary describes chaos as “complete disorder and confusion”), but chaos as defined in Chaos Theory, developed in the 1960s.

This theory showed that systems incorporating a degree of feedback can, from tiny differences in input, produce overwhelming differences in output (see box below).

Here, chaos is defined as aperiodic (never repeating) banded dynamics (a finite range) of a deterministic system (definite rules) that is sensitive to initial conditions. This appears to describe projects much better than the linear, deterministic and predictable view: both randomness and order can exist simultaneously within such systems.

The characteristics of these types of problems are that they are not in equilibrium, either among their parts or with their environment; the system operates ‘at the edge of chaos’, where small changes in input can cause the project either to settle into a pattern or just as easily to veer into total discord.

For those who are sceptical, consider the failing project that receives new leadership: it can just as easily move into abject failure as settle into successful delivery, and at the outset, we cannot predict with any certainty which one will prevail. At worst, they are wicked messes.


How should the risk professional exist in this world of future uncertainty? Not by returning to a reliance on quantitative assessments, statistics and determinism where none exists.

We need to embrace its complexities and understand the type of problem we face before deploying our armoury of tools and techniques to uncover a solution, be they the application of quantitative data or qualitative estimates.

To address risk in the future tense, we need to develop the concept of ‘risk leadership’, which consists of:

• Guiding, rather than prescribing
• Adapting, rather than formalising
• Learning to live with complexity, rather than simplifying
• Inclusion, rather than exclusion, and
• Leading, rather than managing.

The implications of the new concept of risk leadership are described below.

What does this all mean? At the least, it means we must apply a new approach for risk management for problems that are not tame. We should look to enhance our understanding of the behavioural aspects of the profession and move away from a blind application of process and generic standards towards an informed implementation of guidance.

What we need to develop are great risk leaders who realise that understanding risk is more of an art than a science, that this truly is the best time to be alive and working in risk, and that perhaps almost everything we thought we knew may turn out to be wrong.

Risk management

  • Works to a defined scope, budget, quality and programme
  • Uses the instrumental lifecycle image of risk management as a linear sequence of tasks to be performed on an objective entity using knowledge and procedures
  • Manages process to ensure complicated projects of people and technology run smoothly
  • Establishes detailed steps, processes and timetables
  • Applies concepts and methodologies that focus on risk management for creation or improvement of a product, system or facility, and so on, monitored and controlled against specification (quality), cost and time
  • Attempts to control risk by monitoring results, identifying deviations from the plan and developing mitigation actions to return to plan
  • Works on the assumption that the risk model is the actual ‘terrain’ (that is, the actual reality ‘out there’ in the world)
  • Implementer of the risk process. Training and development produces practitioners who can follow detailed procedures and techniques
  • Seeks predictability and order.

Risk leadership

  • Recognises the possibility of different outcomes and tries to ensure risk activities focus on making an acceptable outcome more likely
  • Uses concepts and images that focus on social interaction among people, understanding the flux of events and human interaction, and the framing of projects within an array of social agendas, practices, stakeholder relations, politics and power
  • Develops team behaviours and confidence through scenario planning and team building to identify and respond to risks and opportunities
  • Understands the ‘many acceptable futures’ proposition and manages risk to produce the changes needed to achieve an acceptable result
  • Applies concepts and frameworks that focus on risk management as value creation, while aware that ‘value’ and ‘benefit’ will have multiple meanings linked to different purposes
  • Adapts the risk process to overcome political, bureaucratic and resource barriers to developing change in behaviours through trust and managing expectations
  • Is based on the development of new risk models and theories that recognise the complexity of risk and its management and that the model is one part of a complex ‘terrain’
  • Is a reflective listener: learning and development facilitates the development of reflective practitioners who can learn, operate and adapt effectively in complex environments
  • Has learnt to live with chaos, complexity and uncertainty and leads by example to a successful result.


The Butterfly Effect

In 1961, while working on long-range weather prediction, Edward Lorenz made a startling discovery. While working on a particular weather run, rather than starting the second run from the beginning, he started it part-way through using the figures from the first run.

This should have produced an identical result, but he found that it started to diverge rapidly until after a few months it bore no resemblance to the first run.

At first he thought he had entered the numbers incorrectly. But this turned out to be far from the case: what he had actually done was round the figures, and instead of using the output of six decimal places he had used only three (.506 instead of .506127).


He had considered the difference of one part in 1,000 inconsequential, especially as a weather satellite able to read to this level of accuracy was considered unusual. But this slight difference had caused a massive variation in the result.

This gave rise to the idea that a butterfly could produce small, undetectable changes in pressure, too small for any model to capture, and that this difference could alter the path of, delay or stop a tornado.
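Lorenz's discovery can be reproduced in miniature. The sketch below uses the logistic map, a standard one-line chaotic system, as a stand-in for his weather model (the map and step count are illustrative choices; only the two starting values, .506127 and .506, come from the anecdote):

```python
# Sensitivity to initial conditions: the logistic map x -> r*x*(1-x)
# is deterministic, bounded, and chaotic at r = 4.
def trajectory(x0: float, steps: int, r: float = 4.0) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

full = trajectory(0.506127, 40)   # full-precision start
rounded = trajectory(0.506, 40)   # rounded to three decimal places

# A difference of about one part in 1,000 at the start...
print(f"step  0 difference: {abs(full[0] - rounded[0]):.6f}")
# ...grows until the two runs bear no resemblance to each other.
print(f"step 40 difference: {abs(full[40] - rounded[40]):.6f}")
```

By around step 40 the two runs are effectively unrelated, just as Lorenz's second weather run diverged from the first after a few simulated months.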

Dr David Hancock MBA is head of risk, benefits and planning at Transport for London. His book, Messy and Wicked Risk Leadership, is published by Gower