As artificial intelligence transforms the cyber threat landscape, risk professionals must learn to translate technical risks into boardroom strategy and ensure their organisations are prepared for what’s next. Experts from government, underwriting and cyber security explain how.
Artificial intelligence is advancing rapidly – and so are the threats. From deepfakes to self-learning malware and autonomous attacks, AI is enabling cybercriminals to exploit vulnerabilities faster and with greater precision than ever before.
Speaking at the Airmic conference, Kirsty Kelly, global CISO at CFC, warned that the pace of change is creating new risks that many security teams are still scrambling to understand. “These are AI-generated cyber attacks that are happening now. AI is going to be able to do this in the future at a speed and scale we can’t comprehend yet,” she said.
She described a recent phishing attack in which she was the only person targeted in her company, highlighting how AI can fuel highly personalised, persistent attempts at fraud and manipulation. These attacks are not only more convincing but also more difficult to detect using traditional controls.
In addition to AI-powered phishing and malware, Kelly warned of other growing threats. “Deepfakes are one that really concerns me… Deepfake technology gets better by the day… [and] that’s quite a terrifying prospect because it can and will be weaponised,” she said. “The future of cyber warfare is AI against AI.”
From defensive to predictive
To stay ahead, Kelly is collaborating with vendors to implement AI-powered defences capable of anticipating and countering threats.
Automated incident response is one area where AI can deliver significant benefits, allowing security teams to isolate affected systems, begin forensics and respond in real time, even outside working hours. But trust and control are critical.
“I certainly don’t want AI to go off and make a bunch of decisions that I wouldn’t make,” Kelly said. “And I think that’s really important… How is AI behaving, and how can we trust it?”
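The sketch below illustrates the shape of such a guarded playbook: automated containment for high-severity alerts, with an explicit human-in-the-loop gate for decisions the automation has not been trusted to make on its own. It is a minimal illustration only; `EdrClient`, `Alert` and the severity scheme are invented placeholders, not any real vendor's API or the approach CFC uses.

```python
# Minimal sketch of an automated containment playbook (illustrative only).
# EdrClient, Alert and handle_alert are hypothetical placeholders, not a
# real EDR/SOAR integration.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Alert:
    host_id: str
    severity: str         # e.g. "low" | "medium" | "high" | "critical"
    technique: str        # e.g. a MITRE ATT&CK technique ID
    requires_human: bool = False


class EdrClient:
    """Stand-in for an endpoint detection and response integration."""

    def isolate_host(self, host_id: str) -> None:
        print(f"[{datetime.now(timezone.utc).isoformat()}] isolating {host_id}")

    def snapshot_for_forensics(self, host_id: str) -> None:
        print(f"capturing memory and disk snapshot of {host_id}")

    def notify_on_call(self, message: str) -> None:
        print(f"paging on-call analyst: {message}")


def handle_alert(edr: EdrClient, alert: Alert) -> None:
    """Contain first, investigate second - and escalate to a human for
    anything the playbook was not explicitly trusted to decide alone."""
    if alert.severity in {"high", "critical"}:
        edr.isolate_host(alert.host_id)            # stop lateral movement
        edr.snapshot_for_forensics(alert.host_id)  # preserve evidence
    if alert.requires_human or alert.severity == "critical":
        edr.notify_on_call(f"{alert.technique} on {alert.host_id}")


if __name__ == "__main__":
    handle_alert(EdrClient(), Alert("host-042", "critical", "T1059"))
```

The design point mirrors Kelly's caveat: the automation acts immediately on containment, which is time-critical and low-regret, but routes consequential decisions to an analyst rather than letting the AI "make a bunch of decisions" unsupervised.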
From a policy perspective, John Maguire, head of cyber resilience and incentives at the Department for Science, Innovation and Technology (DSIT), said the UK government is working to ensure AI adoption is both innovative and secure. “This government is really prioritising growth, but we really recognise that growth and innovation needs to be safe and secure. It needs to be protected,” he explained.
He highlighted the publication of the UK’s AI cyber security code of practice, which offers guidance to both developers and users. The code includes measures such as cybersecurity training, risk assessment procedures and recovery planning tailored to AI systems. The UK has submitted the code to ETSI as a basis for international standards.
Maguire also emphasised the need for board-level engagement on digital risk. “Digital risk should now be seen as any other sort of core business risk, like you would manage your financial risk or legal risk or anything like that,” he said. He pointed to recent incidents, including the cyberattack on Marks & Spencer, as evidence that these risks carry significant business impacts.
Insurers push for visibility and speed
Tom Draper, UK managing director at Coalition, argued that insurers can play a key role in closing the knowledge gap. “Urgency when it comes to innovation is your ability to learn and move,” he said. “You can’t have innovation without speed.”
Coalition has adopted an ‘active insurance’ model, sending real-time alerts to clients about potential vulnerabilities. “Last year we sent out nearly 47,000 alerts to our customers saying, look, here are the things that we are seeing being hit. And by the way, you have this problem too. Can we help you do something about it?”
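At its core, this kind of alerting is a matching problem: cross-referencing vulnerabilities observed being exploited in the wild against each client's known asset inventory. The sketch below shows that idea in miniature; the `Asset` structure, the `ACTIVELY_EXPLOITED` feed and the version strings are invented for illustration (though the CVE identifiers are real), and this is not a description of Coalition's actual pipeline.

```python
# Illustrative sketch of the matching behind proactive vulnerability alerts:
# cross-referencing exploited vulnerabilities against client assets.
# Asset, ACTIVELY_EXPLOITED and alerts_for are hypothetical constructs.
from dataclasses import dataclass


@dataclass
class Asset:
    client_id: str
    hostname: str
    software: str    # e.g. "exchange-server"
    version: str


# Vulnerabilities currently exploited in the wild (hypothetical feed;
# the CVE IDs are real, the version mapping is illustrative).
ACTIVELY_EXPLOITED = {
    ("exchange-server", "15.2.986"): "CVE-2021-34473",
    ("moveit-transfer", "2023.0.1"): "CVE-2023-34362",
}


def alerts_for(assets: list[Asset]) -> list[str]:
    """Return one alert per client asset matching an exploited vulnerability."""
    alerts = []
    for asset in assets:
        cve = ACTIVELY_EXPLOITED.get((asset.software, asset.version))
        if cve:
            alerts.append(
                f"{asset.client_id}: {asset.hostname} runs {asset.software} "
                f"{asset.version}, exposed to actively exploited {cve}"
            )
    return alerts


if __name__ == "__main__":
    inventory = [Asset("client-001", "mail01", "exchange-server", "15.2.986")]
    for line in alerts_for(inventory):
        print(line)
```

The hard part in practice is not the lookup but the visibility it depends on: keeping an accurate, current picture of what each client actually runs.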
For Draper, visibility is essential, both across client systems and in the wider ecosystem. Insurers need to push insights not only to risk and IT teams but also to business decision-makers who may be unaware of how AI changes the risk profile of their operations.
Looking ahead, he believes the insurance sector must do more to support risk prevention. “We need to be pushing further down in terms of helping those teams – and not just the risk, legal and security teams, but the business operations teams as well.”
Meanwhile, with AI evolving faster than most companies can keep up with, risk managers must act as translators – helping boards understand technical threats in business terms and guiding responsible adoption.
“Translation is key,” Draper said. “These are conversations every firm is having internally. It’s just being pushed by different teams. It’s being pushed by marketing, it’s being pushed by customer service, it’s being pushed by operations. Helping that internal translation, I think, is where risk leaders can really add value.”
Boards must also be incentivised to pay attention. As Maguire put it: “Quite often there’s a lack of senior ownership of the risk. Actually, industry has said: we need to understand what good looks like.”
By shaping standards, improving communication and driving AI awareness into decision-making, risk managers can keep their organisations on the front foot.