Easy to produce, hard to stop. Risk managers must monitor and be prepared to act quickly when fake news claims target their business

In November last year, a nine-word tweet from a random account on X, formerly Twitter, wiped more than $15 billion off the market value of pharmaceutical giant Eli Lilly and Co.

“We are excited to announce insulin is free now,” read the tweet, posted from an account complete with the firm’s logo and a “verified” check mark.


The user, who had no affiliation with the company, had paid $8 for the verification during the early days of X’s new blue tick system, which has since been tightened to avoid such incidents.

Fast food chains are frequent targets of claims that spread rapidly online, such as reports of rodents or insects in their food, many of which are later exposed as untrue.

Even when untrue, such food safety scares impact businesses and share prices. Just last month it was claimed that McDonald’s French fries are made from genetically modified potatoes grown on a farm owned by Bill Gates (a claim since debunked).

Fake news — false or misleading information presented as news — can be cheaply and easily created, spreads rapidly and can quickly impact the profitability and value of organisations.

Risk managers therefore need to know how to respond to such events, manage reputations and get the business back on track.

The changing face of fake news

Over five years ago, Francois de Hennin, a risk consultant and former global chief risk officer of GroupM, wrote a prophetic piece for StrategicRISK, titled ‘How can I manage fake news risk?’, on the scale of the threat.

After all that time, what’s changed?

“The speed at which information, including or maybe particularly fake news, spreads has increased dramatically in the last five years,” said de Hennin.

“There is more fake news, published more often, and the general public, although informed about its existence and prevalence, does not in the majority question news which agrees with their own views or preconceptions.”

He added that there seems to be a widespread view that opinions and ideas are as valid as facts, which contributes to fake news generation.

“Laws and rules have been put in place in many countries, and the most important social platforms claim to want to address fake news, but none of this appears to have had a significant impact,” said de Hennin.

Monitoring fake news

“Monitoring social media is a must,” said de Hennin. “It should not be limited to spotting fake news affecting or involving the organisation or the industry or sector it belongs to, but it has to include detailed analysis of internet chatter, its themes, its origin, and so on.

“This might allow organisations to anticipate where the next fake news could come from and what it could be about. This in turn allows organisations to initiate preventive measures and to prepare counter measures which can be actioned immediately when necessary.”
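To make this concrete, a monitoring baseline can be as simple as counting hourly brand mentions and flagging statistical spikes for human review. The Python sketch below illustrates the idea; it assumes mention counts have already been collected from whatever listening tool or data feed the organisation uses, and all figures shown are invented for the example.

```python
from statistics import mean, stdev

def spike_alert(hourly_counts, window=24, threshold=3.0):
    """Flag when the latest hour's mentions sit far above the rolling baseline."""
    if len(hourly_counts) <= window:
        return False  # not enough history to establish a baseline
    baseline = hourly_counts[-window - 1:-1]  # the `window` hours before the latest
    mu, sigma = mean(baseline), stdev(baseline)
    # A z-score above `threshold` suggests chatter worth escalating to a human
    return sigma > 0 and (hourly_counts[-1] - mu) / sigma > threshold

# Invented example: steady chatter of roughly 10 mentions per hour, then a burst of 60
history = [10, 12, 9, 11, 10, 8, 13, 11, 10, 9, 12, 10,
           11, 9, 10, 12, 11, 10, 9, 13, 10, 11, 12, 10, 60]
if spike_alert(history):
    print("Mention spike detected: escalate to the crisis-response team")
```

Commercial social listening platforms apply far more sophisticated models, but even a crude threshold like this turns passive monitoring into an early-warning trigger for the countermeasures de Hennin describes.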

Analysis of these trends should not be left to junior staff but managed at the most senior level of the organisation.

“Thinking about and preparing countermeasures offers the added benefit of training the organisation and preparing it for reacting quickly in times of crisis,” said de Hennin.

Ross Tapsell, a researcher at the Australian National University’s College of Asia and the Pacific, specialising in Southeast Asian media, said that social media data can be collected and used for monitoring purposes.

“Newer platforms like TikTok make this challenging given video content is growing, but monitoring involves understanding how various algorithms within the platforms work.”
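As a toy illustration of the theme-and-origin analysis both interviewees describe, the Python sketch below sorts collected posts into keyword buckets. The themes, keywords and posts are all invented for the example; a production system would use trained classifiers and platform-specific collection pipelines.

```python
from collections import Counter

# Hypothetical keyword buckets; real deployments would use trained classifiers
THEMES = {
    "food_safety": ("rodent", "insect", "contaminated"),
    "pricing": ("free", "price", "overcharge"),
    "executives": ("ceo", "resign", "fraud"),
}

def classify(post):
    """Assign a post to the first theme whose keywords it mentions."""
    text = post.lower()
    for theme, keywords in THEMES.items():
        if any(word in text for word in keywords):
            return theme
    return "other"

posts = [
    "Heard their fries are contaminated, avoid!",
    "We are excited to announce insulin is free now",
    "Great quarterly results today",
]
print(Counter(classify(p) for p in posts))
# Counter({'food_safety': 1, 'pricing': 1, 'other': 1})
```

Tracking how these counts shift over time is one simple way to spot where the next fake news narrative might be forming.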

Managing fake news risk

When it comes to the practical management of fake news risk, Tapsell said it can be broadly approached by supporting the sustainability of professional journalism and helping build a healthier information ecology worldwide.

“Further, by pressuring tech platforms to show greater responsibility in hiring moderators, especially in languages other than English. Specifically, by ensuring they have timely responses to any fake news being distributed about them. In other words, hiring digital labour to counter the growing digital labour of fake news,” said Tapsell.

De Hennin said that the solutions he previously set out for StrategicRISK remain valid, but there are three further elements to consider: training, testing and auditing.

“On training, nowadays it should include training on how to use ChatGPT and similar systems, what they are useful for and where they can, and do, go wrong.”

“Reminding staff in particular that if something is incorrect or wrongly presented in any communication, it is the person releasing the information who is responsible, not the system which produced it, and that blaming that system will not transfer the responsibility and the liability to it.”

On testing, de Hennin said it is not sufficient to have processes in place; they must be tested regularly, by both internal and external resources.

“To overcome the potential lack of thorough review of the output of systems like ChatGPT, external editors might be an affordable and effective solution. An editor’s job is to revise and edit documents, checking consistency of tone and wording and aiming to improve the readability of texts.

“As the first independent readers of the system’s output they would offer the reassurance of a human control for communication destined to humans,” said de Hennin.

Finally, whether or not organisations ask their external auditors to review the measures they have put in place to pre-empt or deal with fake news, the auditors could, if necessary, help design an internal audit programme that covers the issue.