In 2019, an employee at the UK subsidiary of a German energy company received a phone call from someone who sounded exactly like his CEO. The voice had the right accent, the right cadence, the right tone of authority. It directed him to transfer €220,000 to a supplier account in Hungary as a matter of urgency. He did. The CEO had never made the call. The voice was generated by AI.
That case, one of the first documented deepfake fraud attacks against a UK business, took place when the technology was still relatively crude. In 2026, the tools are free, widely available, and capable of cloning a convincing voice from as little as three seconds of audio. Video deepfakes that place a realistic-looking executive on a live call are no longer the preserve of well-funded criminal organisations. They are accessible to anyone willing to spend a few hours learning how to use them.
This guide explains what deepfake attacks actually are, how they work in practice, what the real risk is for a UK SME, and the specific steps your business can take to reduce its exposure without a large technology investment.
The scale of the problem in the UK
Deepfake fraud is no longer a distant threat or an enterprise-only concern. The numbers for the UK are specific and they are significant.
The National Crime Agency's 2025 fraud assessment confirmed that organised crime groups are using deepfake video and voice cloning to carry out payment diversion fraud against UK businesses, and that adoption of these techniques is increasing. The NCA notes that criminals are using AI to enhance existing fraud methods rather than create new ones, which means the attacks look familiar on the surface while being significantly harder to detect.
Deepfake fraud attempts against UK businesses rose 94 per cent in 2025, yet only 13 per cent of companies have any anti-deepfake protocols in place. The gap between the scale of the threat and the preparedness of most businesses is wide.
What a deepfake actually is
A deepfake is a piece of audio, video or image content that has been manipulated using AI to make it appear that a real person said or did something they did not. The word combines deep learning, the AI technique used to generate the content, with fake.
For the purposes of business fraud, there are three types that matter:
Voice cloning
The most accessible and most commonly used form. AI tools can clone a convincing version of someone's voice using just three to five seconds of source audio. That audio can come from a YouTube video, a podcast appearance, a company webinar, a LinkedIn post with audio, or a voicemail. The clone can then be used in real-time phone calls or pre-recorded messages. Voice quality has improved to the point where, in a McAfee study, 70 per cent of people said they could not reliably distinguish a cloned voice from the real person.
Video deepfakes
More technically demanding but increasingly accessible. The AI requires more source footage of the target, typically from public videos, to learn facial movements, expressions and lighting. Once trained, it can generate a realistic-looking video of the person saying or doing things they never did, or place a fake face over a real person in a live video call. The Arup case in 2024 involved a fake CFO and multiple fake colleagues appearing simultaneously on what the employee believed was a routine video call. Every other participant on the call was AI-generated.
Document and image deepfakes
AI-generated fake invoices, identity documents, contracts and payment instructions. These are used in fraud attempts that do not require real-time impersonation: a fake invoice from a supplier your business recognises, with altered bank details, or a fake identity document submitted during an onboarding process.
How deepfake attacks target UK businesses
The three attack patterns that account for the majority of business-targeted deepfake fraud are worth understanding in detail, because each requires a different protective response.
CEO fraud and payment diversion
This is the most financially damaging pattern. An attacker clones the voice or video of a senior executive, typically the CEO or CFO, and contacts someone in the finance team with an urgent request to transfer funds. The request is usually framed around a confidential acquisition, a supplier payment that needs to be made immediately, or a regulatory obligation. The urgency and the apparent authority of the caller are designed to bypass normal verification processes.
The sophistication of these attacks has increased significantly. Early versions relied on audio only and were sometimes detectable through call quality issues. Current attacks use video calls with AI-generated faces, involve multiple fake participants to add credibility, and sometimes include follow-up emails that appear to come from the executive's genuine address. In one case in Singapore in March 2025, attackers went so far as to suggest a video call themselves, knowing that the finance director might request one to verify an unusual transfer request. The video call provided false reassurance rather than genuine verification.
Supplier impersonation and invoice fraud
Attackers use voice cloning or AI-generated communications to impersonate a genuine supplier, informing the accounts payable team that bank details have changed and requesting that future payments be directed to a new account. This attack is particularly effective because it mirrors a legitimate business process, arrives via a channel the recipient trusts, and does not require the same level of technical sophistication as a live video call.
Internal impersonation and credential theft
Deepfakes are also used to impersonate IT staff, HR managers or senior leadership in internal communications, requesting password resets, access credentials, or changes to security settings. A fake voicemail or video message from someone who appears to be the IT director asking a member of staff to reset their multi-factor authentication is difficult to verify without a specific process for doing so.
Why SMEs are not immune
A common assumption is that deepfake attacks target large organisations because the potential fraud amounts are higher. This is partly true: the largest documented losses involve major corporations. But SMEs are targeted for different reasons that make them, in some ways, more vulnerable.
Smaller businesses typically have fewer people with authority over financial transactions, which means a single successful impersonation can unlock access to significant funds without needing to compromise multiple individuals. Decision-making is often less formalised, which means staff are more likely to act on a direct request from a senior person without going through a documented approval process. And the absence of dedicated security functions means there is no one whose job it is to assess and respond to these threats.
The tools used to create deepfakes are now free or very cheap, which removes the financial barrier for attackers targeting businesses of any size. A business with a £2 million annual turnover and a finance manager who has heard the MD's voice on a company video is as viable a target as a large enterprise, if the right process controls are not in place.
How to protect your business: the practical steps
The good news is that the most effective defences against deepfake attacks are procedural rather than technological. They do not require expensive software or specialist security teams. They require clear rules, communicated to the right people, and consistently followed.
Establish verbal code words for high-value requests
Agree a pre-shared code word or phrase with your finance team and any other staff who can authorise payments or change account details. This word should be known only internally, never published or referenced publicly, and required before any payment instruction received by phone or video call is acted upon. If a caller cannot provide the code word, the request is refused, regardless of how convincing the voice or face appears.
This is the single most effective procedural control against voice and video deepfake attacks. It costs nothing to implement and it works because deepfakes, however convincing they look and sound, cannot know a secret that has never been made public.
Introduce a call-back rule for payment instructions
Any instruction to transfer funds, change bank details or authorise an unusual payment should trigger a call-back to the person making the request, using a phone number already on record in your systems rather than any number provided during the original communication. This breaks the attack chain: even a convincing deepfake call cannot intercept a call-back made to a number your business already holds.
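For businesses that log payment instructions in a finance or ticketing system, the rule can also be enforced in software. The sketch below is illustrative only: the contact store, identifiers and function names are hypothetical, not any real system's API. The point is simply that the verification number comes from your own records, never from the message itself.

```python
# Illustrative sketch of the call-back rule. All identifiers and numbers here
# are hypothetical; verification always uses contact details already on
# record, never details supplied in the instruction itself.

KNOWN_CONTACTS = {
    # Captured at onboarding and only ever updated after independent checks.
    "acme-supplies": "+44 20 7946 0000",
    "cfo-jane-smith": "+44 20 7946 0001",
}

def call_back_number(requester_id: str, number_in_message: str | None) -> str:
    """Return the number to verify against, taken only from internal records."""
    on_record = KNOWN_CONTACTS.get(requester_id)
    if on_record is None:
        raise ValueError(f"No number on record for {requester_id}; escalate manually")
    if number_in_message and number_in_message != on_record:
        # A 'new' number supplied in the request itself is a classic diversion sign.
        print(f"Warning: request supplied {number_in_message}; using the record instead")
    return on_record
```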
Require dual authorisation for significant transfers
No single person should have unilateral authority to authorise a significant payment based on a verbal or video instruction alone. A second authorisation from a different person, through a different channel, is the procedural backstop that catches attacks that get past the initial contact point. The threshold for what counts as significant should be set at a level appropriate to your business, but it should exist.
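If your payment workflow runs through software, the same rule can be made mechanical. This is a minimal sketch under assumed names and a placeholder threshold, not a real payment system's implementation:

```python
from dataclasses import dataclass

# Placeholder threshold; set 'significant' at a level appropriate to your business.
DUAL_AUTH_THRESHOLD_GBP = 10_000

@dataclass
class Approval:
    approver: str   # who signed off
    channel: str    # how: e.g. "in_person", "call_back", "finance_system"

def payment_authorised(amount_gbp: float, approvals: list[Approval]) -> bool:
    """Below the threshold one approval suffices; above it, the gate demands
    two distinct approvers reached through two distinct channels."""
    if amount_gbp < DUAL_AUTH_THRESHOLD_GBP:
        return len(approvals) >= 1
    approvers = {a.approver for a in approvals}
    channels = {a.channel for a in approvals}
    # A single convincing deepfake call can never satisfy both conditions alone.
    return len(approvers) >= 2 and len(channels) >= 2
```

Under this sketch, `payment_authorised(25_000, [Approval("alice", "call_back")])` returns False: one person on one channel, however senior they sound, is not enough.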
Treat urgency as a warning sign
Deepfake attacks, like all social engineering fraud, rely on urgency to prevent the target from following their normal verification process. A request that needs to be acted on immediately, before end of day, before anyone else finds out, or before a regulatory deadline, is a request that should trigger more caution rather than less. Train your finance team and anyone who handles payments to treat urgency as a reason to slow down rather than speed up.
Audit your executives' public digital footprint
The source material for voice and video deepfakes comes from publicly available content: company videos, podcast appearances, webinars, social media clips, recorded conference presentations. This does not mean executives should avoid all public activity, but it is worth knowing what exists and weighing the potential for misuse before publishing any new video or audio externally. Some organisations choose to watermark or register public video content as an additional measure.
Apply multi-factor authentication across financial systems
MFA does not stop a deepfake from deceiving a person, but it does stop an attacker who harvests credentials through deepfake social engineering from using them to access financial systems independently. For any system that can initiate payments, change bank details or access sensitive financial data, MFA should be mandatory and consistently enforced across all users.
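By way of illustration, this is the shape of the second factor in code, using the open-source pyotp library for time-based one-time passwords. In practice you would rely on the MFA built into your banking or finance platform rather than anything hand-rolled; this sketch only shows the check a credential thief must still pass.

```python
import pyotp  # pip install pyotp

# Minimal TOTP illustration. Real systems use their platform's built-in MFA;
# this only shows the check that stolen credentials alone cannot satisfy.

secret = pyotp.random_base32()   # enrolled once per user, stored server-side
totp = pyotp.TOTP(secret)

def second_factor_passes(submitted_code: str) -> bool:
    """The payment session proceeds only if the one-time code verifies."""
    return totp.verify(submitted_code)

print(second_factor_passes(totp.now()))  # True: code from the user's device
print(second_factor_passes("000000"))    # almost certainly False
```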
Train your team, regularly and specifically
General cyber security awareness training helps, but it is not enough on its own. Your finance team and anyone with authority over payments needs specific training on deepfake fraud: what it looks and sounds like, what the common attack patterns are, and exactly what to do if they receive a suspicious request. This training should be updated regularly, because the tactics evolve quickly, and it should include examples that reflect the current state of the technology rather than descriptions that are already out of date.
Review your cyber insurance cover
Many UK SMEs carry cyber insurance that includes social engineering and fraud coverage. It is worth reviewing your policy to understand whether deepfake-enabled payment diversion is covered, what documentation is required to make a claim, and whether your current procedural controls satisfy the policy conditions. Some insurers are beginning to require specific controls, including dual authorisation for payments and documented verification procedures, as conditions of coverage. If your policy does not yet address deepfake fraud specifically, it is worth raising with your broker at renewal.
Technology tools: useful but not sufficient
There is a growing market for deepfake detection software: tools that analyse audio or video for the artefacts of AI generation. Some of these are genuinely useful as one layer of a broader defence. But they are not reliable enough to be your primary protection, for two reasons.
First, detection accuracy degrades against the most sophisticated modern deepfakes. Tools that claim high accuracy in controlled conditions often perform less well against adversarial attacks that are specifically designed to evade detection. Second, detection tools require someone to decide to run a check, which assumes the target is already suspicious. The most effective deepfake attacks are those where the target never thinks to question what they are seeing or hearing.
The most reliable protection is procedural: controls that kick in automatically regardless of whether anyone suspects an attack. Detection tools are a useful supplement to those controls, not a replacement for them.
What Advantage recommends for UK SMEs
Advantage's cyber security advisory work with UK SMEs consistently identifies the same gap: businesses that have some cyber security measures in place but have not yet reviewed their internal processes for payment authorisation and identity verification in the context of AI-enabled fraud.
The immediate priority for any UK SME is not to buy new software. It is to:
- Define and communicate a code word system for high-value payment authorisation
- Document a call-back procedure for any payment instruction received outside normal channels
- Set a dual authorisation threshold for significant transfers
- Run a specific briefing with your finance team on deepfake fraud, covering what it is and what to do
- Review MFA coverage across all financial systems and fix any gaps
These five steps can be completed without specialist technology and without significant cost. They address the most common attack vectors and they work because they remove the human single point of failure that deepfake attacks exploit.
For businesses that want to go further, Advantage can assess your current cyber security posture, identify specific vulnerabilities in your payment and identity verification processes, and design a programme of improvements that fits your business size and risk profile. This work sits within our broader cyber security and managed IT services, and it starts with a conversation rather than a commitment.
Concerned about your business's exposure to deepfake fraud?
Advantage Business Systems provides cyber security advisory and managed IT services for UK SMEs, including assessment of social engineering risks, payment authorisation process design, and staff awareness training. We offer a free initial conversation to understand your current position and identify the most pressing gaps.
Get in touch at advantage.co.uk or call us on 020 3004 4600.
Frequently asked questions
Common questions about deepfake attacks, how they work, and what UK businesses can do to protect themselves.
What is a deepfake attack?
A deepfake attack uses AI-generated audio, video or images to impersonate a real person, typically a senior executive or trusted contact, in order to deceive employees into transferring money, sharing credentials or changing account details. The most common form targeting UK businesses is voice cloning, where AI replicates an executive's voice from a few seconds of publicly available audio and uses it in a fraudulent phone call. Video deepfakes, where a realistic-looking face is generated or overlaid on a video call, are increasingly used in higher-value attacks. Deepfake fraud attempts against UK businesses rose 94 per cent in 2025.
How do criminals create a deepfake of someone's voice?
Modern AI voice-cloning tools can produce a convincing replica of a person's voice from as little as three to five seconds of clear source audio. That audio can come from any publicly available recording: a company video, a podcast, a webinar recording, a LinkedIn video post, or a recorded voicemail. The AI learns the speaker's pitch, tone, accent and cadence and can then generate new speech in their voice. The tools required are free or very low cost and widely available, which means voice cloning is no longer limited to well-resourced attackers. It is accessible to anyone with malicious intent and a few hours to learn the process.
Are UK SMEs really at risk from deepfake fraud?
Yes. SMEs are targeted for specific reasons that can make them more vulnerable than larger organisations. Fewer people have authority over financial transactions, decision-making processes are less formalised, and there is typically no dedicated security function to assess and respond to threats. The cost of creating a deepfake attack is now negligible, which removes the financial incentive to target only large businesses. Any business where a single person can authorise a significant payment based on a verbal or video instruction, without a documented verification process, is exposed to this type of fraud.
What is the single most effective protection against deepfake fraud?
A pre-shared verbal code word, agreed internally and required before any payment instruction received by phone or video call is acted upon. This is a simple, free, procedural control that defeats even a technically convincing deepfake, because the AI cannot know a secret that has never been made public. Paired with a call-back rule, where any payment instruction triggers a call to a number already on record rather than any number provided during the original contact, it removes the human single point of failure that deepfake attacks are designed to exploit. These procedural controls are more reliable than detection software, which does not perform consistently against the most sophisticated modern fakes.
Does cyber insurance cover deepfake fraud?
It depends on your policy. Many UK cyber insurance policies include social engineering and fraud coverage that can extend to deepfake-enabled payment diversion, but the specific terms vary and some policies exclude certain types of instruction-based fraud. Some insurers are beginning to require specific process controls, including dual authorisation for payments and documented verification procedures, as conditions of coverage. It is worth reviewing your policy wording and raising the question of deepfake fraud coverage with your broker at renewal, particularly if your business handles significant payment volumes.
Can Advantage help us assess and improve our protection against deepfake attacks?
Yes. Advantage provides cyber security advisory and managed IT services for UK SMEs, including review of payment authorisation processes, social engineering risk assessment, MFA coverage audits and staff awareness training. We can assess where your business is currently exposed and design a practical improvement programme that fits your size and risk profile. The starting point is a free conversation with no obligation. Book at advantage.co.uk or call 020 3004 4600.