As deepfake attacks surge, businesses must urgently act to protect themselves. Here’s what you need to know

Businesses must shore up their defences following a significant spike in deepfake criminal attacks in the first quarter of the year.

Analysis from Surfshark found there were 179 deepfake incidents reported in the first quarter of 2025 alone – 19% more than the number of attacks reported throughout the entirety of 2024.

Deepfake technology

From 2017 to 2022, only 22 incidents were recorded. In 2023, this number nearly doubled to 42 incidents. By 2024, incidents increased by 257% to 150.

Since 2017, the most common format for deepfake incidents has been video, with 260 reported cases, followed by image, with 132 incidents, and audio, with 117.

While celebrities and politicians are often the subjects of deepfakes, the technology is increasingly used to defraud businesses and individuals: attackers impersonate a member of senior management to ask staff to transfer funds or take actions that could leave the business vulnerable.

Thomas Stamulis, chief security officer at Surfshark, told StrategicRISK: “Deepfakes pose a growing threat to businesses. With the ability to realistically mimic voices and faces, attackers can impersonate executives to authorise fake transactions or issue false instructions – especially dangerous in remote work settings.

“Deepfake videos can also be used to falsely depict a company engaging in harmful behaviour, damaging reputation or even influencing stock prices. In some cases, fake public announcements featuring a CEO’s face and voice can be used to spread disinformation, causing panic or confusion among stakeholders.”

He continued: “Deepfake technology is advancing at an alarming rate, and with it, the capacity for misinformation and malicious intent grows. The potential for harm ranges from tarnished personal reputation to threatened national security. People have to be cautious, as losing trust in the information we hear and see can significantly impact personal privacy, institutions, and even democracy.”

KPMG has issued guidance to clients over the threat posed by deepfakes.

It said criminals or other malicious actors can use deepfakes in a number of ways that are potentially damaging, amplifying the costs of fraud, regulatory fines, and data breaches, and eroding trust in brand integrity:

  1. Fraudulent financial transactions

Cybercriminals could use deepfakes to impersonate senior executives during phone calls or video conferences (sometimes called “vishing”), convincing others that they carry authority. They could then acquire confidential information or even persuade individuals to transfer significant funds.

Insurance companies could be targeted with deepfake-generated images submitted with claims. As more companies move toward automated claims processes, removing the human claims adjuster, such images may not come under as much scrutiny.

And customers can be targeted by deepfakes posing as official company representatives, tricking them into surrendering personal financial details or even making payments to criminals.

  2. Disinformation

Deepfake videos or audio recordings can spread false or defamatory information about individuals and organisations, which could damage stakeholder, customer, and wider public trust.

In a social media age, such content can go viral in seconds. For instance, by circulating deepfakes of executives announcing a company’s financial status, upcoming mergers, or product launches and marketing materials – or making derogatory remarks, or inaccurate political statements – criminals could profit from subsequent fluctuations in share prices, as well as harming a company’s reputation.

Such tactics could also be used by competitors to cause stock price volatility and deter investors – as well as by nation states to undermine the economy. Similarly, malicious actors may try to damage companies’ reputations by spreading deepfakes about environmental harm, poor labour practices, faulty or dangerous products, or inappropriate behaviour from executives.

  3. Enhanced social engineering attacks

By using deepfakes, bad actors can penetrate organisations by, for example, impersonating a Chief Technology Officer (CTO) to persuade staff to grant access to a core technology system – to steal confidential information or plant malware. This might be achieved through targeted “spear phishing” emails with a deepfake video attached.

  4. Other deepfake risks

Many companies are also vulnerable to extortion using AI-fabricated incriminating content, and to brand misuse, potentially leading to legal liabilities, fines, and loss of trust and business. Remote hiring practices could open the door for either criminals or under-qualified candidates to use deepfakes to give synthetic identities a convincing face and voice – even going so far as to conduct interviews.

How to manage the threats

The rise of deepfakes exploits our natural tendency to trust visual and auditory content, posing significant risks. It’s therefore critical for organisations to enforce robust programmes to help ensure safe and ethical AI use and secure deployment.

Bryan McGowan, global trusted AI lead at KPMG International, explained: “Implementing a zero-trust architecture is equally essential, embedding verification across all operations with stringent controls and cutting-edge technology. This strategic approach is vital for maintaining integrity and trust in the digital age.”

KPMG added: “Deepfakes may be growing in sophistication and appear to be a daunting threat. However, by integrating deepfakes into the company’s cybersecurity and risk management, CISOs – with the assistance of General Counsel, the CEO, and Chief Risk Officers (CRO) – can help their companies stay one step ahead of malicious actors.

“This calls for a broad understanding across the organisation of the risks of deepfakes, and the need for an appropriate budget to combat this threat. A combination of detection technology and processes, a cross-functional approach (involving the CISO’s team, Legal, PR and other functions), and well-informed employees, should enable cybersecurity professionals to spot potential and actual attacks, and act fast to limit the damage.

“Remember, the same technology that is being used to infiltrate an organisation can also protect it. Collaborating with deepfake cybersecurity specialists helps spread knowledge and continually test and improve controls and defences, to avoid fraud, data loss and reputational damage.”

When it comes to identifying a deepfake, Surfshark said that, due to their widespread distribution and enhanced realism, detecting deepfakes is becoming progressively more difficult. Technology that generates deepfakes often outpaces the capabilities of detection tools. The significant amount of this type of content online also complicates distinguishing between what is genuine and what is not. However, there are some things you can look out for to detect a deepfake, including:

  • Unnatural movements;
  • Colour differences;
  • Inconsistent lighting, as well as mismatched reflections in each eye and unnatural corneas;
  • Poor lip-sync (audio doesn’t match lip movements);
  • Blurry or distorted backgrounds;
  • Distribution channels (the content may be shared by bots).
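The checklist above can be turned into a simple triage aid for reviewers. The sketch below is purely illustrative – the flag names and escalation threshold are assumptions, not part of Surfshark’s guidance – and it scores how many warning signs a human reviewer has flagged before recommending escalation:

```python
# Hypothetical triage helper: counts manually flagged deepfake warning
# signs (from the checklist above) and suggests a next step.
WARNING_SIGNS = {
    "unnatural_movements",
    "colour_differences",
    "inconsistent_lighting",
    "poor_lip_sync",
    "blurry_background",
    "bot_distribution",
}

def triage(flags: set) -> dict:
    """Count flagged warning signs and recommend an action."""
    unknown = flags - WARNING_SIGNS
    if unknown:
        raise ValueError(f"Unknown flags: {unknown}")
    score = len(flags)
    # Assumed threshold: two or more signs warrant escalation.
    action = "escalate to security team" if score >= 2 else "monitor"
    return {"score": score, "action": action}
```

A single anomaly (say, a blurry background) may be innocent compression artefacting, which is why this sketch only escalates when signs accumulate.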

Stamulis concluded: “To protect themselves, companies should train employees to recognise social engineering tactics, use only official communication channels, remain alert for signs of manipulated media, and implement clear protocols for verifying sensitive requests.”
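The “clear protocols for verifying sensitive requests” Stamulis mentions often take the form of channel allow-lists plus an out-of-band call-back for high-value actions. A minimal sketch of such a protocol, with illustrative channel names and an assumed monetary threshold (none of these specifics come from the article):

```python
# Minimal sketch of a call-back verification protocol for sensitive
# requests (e.g. fund transfers). Channel names and the threshold are
# assumptions for illustration, not an official procedure.
from dataclasses import dataclass

@dataclass
class Request:
    requester: str   # claimed identity, e.g. "CFO"
    channel: str     # channel the request arrived on
    amount: float    # value of the requested transfer

APPROVED_CHANNELS = {"corporate_email", "internal_portal"}
CALLBACK_THRESHOLD = 10_000.0  # assumed limit requiring a voice call-back

def verify(request: Request, callback_confirmed: bool) -> bool:
    """Approve only requests from official channels, requiring an
    out-of-band call-back confirmation for high-value transfers."""
    if request.channel not in APPROVED_CHANNELS:
        return False  # e.g. a video call or messaging app is rejected
    if request.amount >= CALLBACK_THRESHOLD and not callback_confirmed:
        return False  # high value: must be confirmed on a known number
    return True
```

The key design point is that the call-back happens on a separate, pre-registered channel, so a deepfaked voice or video on the original channel cannot satisfy both checks.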