For years UK listed companies have been holding risk workshops and updating risk registers to comply with what is now the Turnbull guidance on internal control. But are these activities effective? asks Matthew Leitch

A glance through most risk registers will reveal technical flaws ranging from trivial weaknesses to fundamental mistakes. Despite a general acceptance of commonly found techniques, in most cases their effectiveness has never been tested. Regulations have simply required companies to describe their procedures in their annual reports, and external auditors merely to say whether the description is accurate. Effectiveness has not been commented on.

However, in future, companies that are SEC registrants are likely to find their auditors taking a very different line. Section 404 of the Sarbanes-Oxley Act 2002 has led to new external audit requirements that will come into force for US companies for year ends on or after 15 June 2004 and for UK companies on or after 15 April 2005. Companies will be required to make a public statement about the effectiveness of their controls for financial reporting, and external auditors will be required to attest to that statement.

The evaluation of effectiveness is likely to include the top-level risk and control assessments used for Turnbull compliance.

Some common flaws
The most common flaw is fundamental to the risk ratings in most risk registers. It results in impact ratings that are illogical and, therefore, bogus. Most risk registers are tables, and each row of the table shows a risk. Each risk is rated, and the most common technique is to make independent ratings of likelihood and of impact.

Unfortunately, most of the risks on risk registers are actually collections of risks, with variable impacts. Although it is logical to consider the likelihood of one or more of those risks arising, it is not logical to talk about the impact. For example, what is the impact of 'risk of loss of market share'? Clearly it depends on how much market share is lost. The risk is in fact an infinite set of risks, each representing a different degree of market share loss.

The same error applies to risk register items that are events that might happen more than once during the period under consideration, or lists of risks, or general headings such as 'health and safety'.
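The problem can be made concrete with a small sketch. Here 'loss of market share' is broken into severity bands, each with its own probability and financial impact (all figures are invented for illustration): the likelihood of some loss is well defined, but there is no single 'impact' to rate, only a range, or at best a probability-weighted expectation.

```python
# Illustrative sketch (hypothetical figures): why a single "impact" rating
# is meaningless for a risk that is really a family of outcomes.

# "Loss of market share" broken into severity bands, each with its own
# assumed probability over the period and financial impact in GBP millions.
severity_bands = [
    {"share_lost": "1%",  "probability": 0.20, "impact_gbp_m": 2},
    {"share_lost": "5%",  "probability": 0.05, "impact_gbp_m": 12},
    {"share_lost": "10%", "probability": 0.01, "impact_gbp_m": 30},
]

# The likelihood of *some* market-share loss is well defined...
likelihood_any = sum(b["probability"] for b in severity_bands)

# ...but "the impact" is not a single number: it runs from 2 to 30
# depending on severity. A probability-weighted expected impact is at
# least coherent, though it still hides the spread of outcomes.
expected_impact = sum(b["probability"] * b["impact_gbp_m"]
                      for b in severity_bands)

print(f"Likelihood of any loss: {likelihood_any:.2f}")
print(f"Expected impact: GBP {expected_impact:.2f}m")
```

A single row rated 'likelihood: medium, impact: high' throws this structure away, which is why the independent ratings are illogical.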

Most risk ratings are subjective, which is not itself a flaw if the only practical alternative is to have no ratings at all. However, although research on subjective probability assessment procedures emphasises the importance of clearly defining what is to be estimated, many companies know nothing about this research. It is rarely clear what is included in the item, for what time horizon, or assuming what level of risk mitigation.

Most people know that rating risks as high, medium, or low is controversial, because there is no link to an absolute measure. One consequence of such ratings is that it is hard to compare risks between business units in a group. Each unit will rate its risks relative to one another, but when they are pooled with risks from other units it is hard to make comparisons. Some units may have submitted as 'high' risks that are trivial at group level.

Another consequence is that personalities can have a dramatic effect on results. Some people tend to give extreme answers, while others prefer to say 'medium'. If the results from more than one unit are pooled and ranked, ratings by extremists dominate the list and get most attention. This can be adjusted by statistically normalising the distribution of answers, but it is not ideal.
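One simple form of that normalisation is to convert each unit's ratings to z-scores (mean zero, standard deviation one) before pooling, so that an assessor who uses the extremes does not dominate one who clusters around the middle. The sketch below uses invented scores from two hypothetical units:

```python
# Illustrative sketch (invented 1-5 scores): normalising each unit's
# ratings to z-scores before pooling, so one unit's extreme rating style
# does not dominate the combined, ranked list.
from statistics import mean, stdev

# Unit A's assessor uses the extremes; unit B's clusters near the middle.
ratings = {
    "unit_a": {"supply failure": 5, "IT outage": 5, "fraud": 1},
    "unit_b": {"supply failure": 3, "IT outage": 4, "fraud": 3},
}

def normalise(unit_ratings):
    """Convert one unit's ratings to z-scores (mean 0, std dev 1)."""
    values = list(unit_ratings.values())
    mu, sigma = mean(values), stdev(values)
    return {risk: (score - mu) / sigma
            for risk, score in unit_ratings.items()}

pooled = []
for unit, unit_ratings in ratings.items():
    for risk, z in normalise(unit_ratings).items():
        pooled.append((unit, risk, z))

# Rank by normalised score instead of raw score.
pooled.sort(key=lambda item: item[2], reverse=True)
for unit, risk, z in pooled:
    print(f"{unit:7s} {risk:15s} {z:+.2f}")
```

On raw scores, unit A's two 5s would top the group list; after normalisation, unit B's IT outage ranks first. The adjustment helps, but as the article notes it is not ideal: it assumes each unit's true risk profile is comparable, which may not hold.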

Other flaws tend to lead to incomplete risk registers. There are some risks that people do not like to volunteer, and procedures that rely on people to volunteer risks will tend to miss those that would be critical of senior management (or other feared groups), or that might mean a manager is made the owner of a risk that is extremely difficult to deal with.

Missing the point
In the Turnbull guidance, risk-control mapping is seen as a way to evaluate the effectiveness of internal controls, and as a part of the internal control system itself that should operate to design or adjust the control system to meet new risks and other requirements.

The overall question is whether a company's risk procedures effectively do what is required under the Turnbull guidance. The answer will be negative unless the mapping gets down to a level detailed enough for action to be taken.

Finding and fixing flaws
The first step is to look at risk registers and related procedures and see what flaws are evident. However, just because a flaw is not evident in the documentation does not mean it has been avoided. To get more information, talk to people who have contributed to the risk register to find out what left them feeling uncomfortable or confused. Weaknesses need to be corrected, and it may also be worthwhile discussing the improvements with external auditors.

Matthew Leitch is an independent consultant specialising in internal control systems, Tel: 01372 805 267, E-mail: matthew@internalcontrolsdesign.co.uk