ChatGPT can’t compete with trained risk management chatbots but it can easily outperform many risk consultants, says Alex Sidorenko, chief risk officer and founder of RISK-ACADEMY

Today I wanted to talk about a hot topic in the risk management industry: AI chatbots.

Let’s look at ChatGPT, probably the most well-known AI model at the moment, and see why it shouldn’t be used in risk management any time soon.


But more importantly, let’s look at alternatives that significantly outperform ChatGPT.

ChatGPT, like many other AI models, is trained on a vast database filled with an eclectic mix of data. It’s like a giant library containing every book you could imagine.

But here’s the problem: not all those books are worth reading, especially when it comes to risk management.

In the field of risk management, where making informed decisions about the future is vital, we can’t afford to lean on a tool that treats all sources of information equally.

Unfortunately, many resources labelled as ‘risk management’ are based on flawed RM1 concepts, while truly insightful and valuable risk management books often don’t even have ‘risk management’ in their title.

“We can’t afford to lean on a tool that treats all sources of information equally.”

This is a big issue for most specialised topics, where common wisdom is often wrong and at odds with niche expertise. That mix leads a non-specialised AI like ChatGPT to unintentionally propagate ineffective practices, muddying the waters of good risk management advice.

In fact, given the sheer volume of RM1 information out there, the scales are tipped against good risk management.

Here’s a perspective from my own experience: I’ve extensively tested ChatGPT, and the advice it consistently delivers is reminiscent of the guidance a junior Big 4 risk consultant might offer.

Though the language and jargon might seem on point, the substance is largely misguided and often outright wrong, anchored in outdated RM1 methodologies.

This isn’t a case of subtle nuance or missing depth - it’s a glaring issue of fundamentally incorrect advice that, if followed, can lead to damaging consequences for your organisation.

“I firmly believe in AI’s potential to revolutionise the field of risk management”

Now, don’t get me wrong. I’m not against ChatGPT. In fact, I firmly believe in AI’s potential to revolutionise the field of risk management. The key, however, lies in the training.

A standalone chatbot fed on a diet of best risk management practices, decision science, probability theory and neuroscience is, in my opinion, an invaluable tool.

This brings me to my next point. At RISK-ACADEMY, we have taken this approach, creating not one, but three free-to-use and specialised AI chatbots. We call them the RAW Trio – RAW@blog, RAW@YT, and RAW@PD.

Each of these chatbots is trained on a specific type of content from RISK-ACADEMY.

RAW@blog has digested our articles, books and guidelines; RAW@YT has watched and learned from our videos, including past RAW workshops; and RAW@PD has been schooled in probability distributions from the Vose library.
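
To make the contrast with a generic chatbot concrete, here is a minimal sketch of one common way to build this kind of domain-restricted assistant: retrieve passages from a curated corpus and instruct the model to answer only from them. The corpus snippets, function names (retrieve, build_prompt) and prompt wording below are illustrative assumptions for the sketch, not the actual RAW Trio implementation.

# Minimal sketch of a retrieval-first assistant restricted to a curated corpus.
# The idea: instead of letting a general model answer from "every book in the
# library", only show it passages drawn from sources you trust.
# The corpus contents and prompt wording are illustrative assumptions, not the
# RAW Trio implementation.

import math
from collections import Counter

# A curated corpus - in practice, chunks of vetted articles, transcripts, guidelines.
CURATED_CORPUS = [
    "Risk analysis should inform specific business decisions, not produce standalone risk registers.",
    "Quantitative techniques such as Monte Carlo simulation express uncertainty as probability distributions.",
    "Heat maps and qualitative scoring often hide more information than they reveal.",
]

def _vector(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k passages from the curated corpus most similar to the question."""
    q_vec = _vector(question)
    ranked = sorted(CURATED_CORPUS, key=lambda p: _cosine(q_vec, _vector(p)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that tells the model to answer only from the retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Should I use a heat map to rank my risks?"))

The design point is the separation of roles: the general-purpose model supplies the language, but the curated corpus supplies the substance - which mirrors the argument of this article about controlling what a risk management chatbot learns from.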

With their specialised training, they provide sound, reliable advice, unlike generic chatbots trained on a mix of the good, the bad and the ugly. Most of the time, at least.

Let’s ensure we’re placing our future in capable hands, or in this case, capable algorithms.
