AI deepfake scams have become increasingly prevalent in recent years, driven by the growing realism of generative artificial intelligence and machine learning. From imitating a colleague's voice on the phone to extract sensitive information, to weaponising the likeness of public figures to mislead victims, they are among the most sophisticated modern scams used by cybercriminals.
Here's a full breakdown of deepfake scams, and how best to protect your organisation from this prolific attack method.
What are deepfake scams?
Deepfake scams weaponise generative AI to produce synthetic audio or video that impersonates internal team members, clients, or business partners. The technology is now sophisticated enough to accurately reproduce facial expressions and voice patterns, needing only audio samples or video footage of the person being imitated.
The goal of this generated content is to trick victims into believing the fraudulent communication is legitimate, so that employees of the targeted organisation hand over sensitive company data.
How are deepfake scams used?
Whether it's a few-second clip of voice cloning technology or an entire fabricated video conference, deepfake scams are a broad and difficult category of digital fraud to defend against. Bad actors intentionally vary their use of deepfake technology to minimise the efficacy of detection methods.
Here are some of the key uses of deepfake technology:
- To manipulate public perception of business leaders through inauthentic videos, coercing cooperation with the threat of releasing the deepfake and causing reputational damage.
- To exploit employees and customers through fake video or audio of a trusted senior figure within the organisation, manipulating them into sharing private information, granting access to internal systems or data, or transferring money.
- To commit financial crime by tricking unwitting individuals into completing financial transactions, providing financial information, sending wire transfers, or opening fraudulent accounts. Finance professionals should therefore take particular care when managing accounts.
Examples of deepfake attacks
Here are two major cases that highlight how criminals are using this technology to commit financial fraud:
The €220,000 voice clone scam - UK, 2019
Malicious actors used AI to clone the voice of a German energy executive and called the UK CEO of a subsidiary, instructing him to transfer €220,000 for a fake acquisition. Believing the request was legitimate, the CEO complied.
The funds were quickly rerouted through Hungary and Mexico. The case later emerged via insurer Euler Hermes, which kept the companies anonymous.
$35 million bank fraud - UAE, 2020
A UAE bank manager received a call from someone impersonating a senior director, requesting a $35m transfer for an acquisition. The voice sounded authentic, and the request was supported by convincing fake emails and documents imitating senior executives.
Investigators later confirmed AI voice cloning had been used to carry out the scam.
What weaknesses do deepfake scams exploit?
Deepfake scams exploit several key weaknesses in a business's security, primarily by using manipulated content to target human trust and operational gaps.
One major vulnerability is employees’ default trust in communications from executives or colleagues, making them less likely to question requests for sensitive data or financial transfers. Many hesitate to challenge their boss, and attackers exploit this trust, especially in enterprises with complex hierarchies.
Deepfakes convincingly mimic voices and appearances, allowing scammers to impersonate senior leaders and manipulate employees.
Businesses without strict verification processes are at greater risk, as unusual or urgent requests may go unconfirmed. Many employees lack training to recognise deepfake signs such as subtle audio glitches or unnatural facial movements, and don’t know how to respond.
Without awareness and protocols, businesses become easy targets, often unaware of the threat until it’s too late.
Five ways to protect your business from deepfake scams
While deepfake scams are becoming increasingly sophisticated as generative AI accelerates, there are still reliable ways to distinguish authentic content from inauthentic content.
Here are our top recommendations for protecting your organisation from being victimised by fake content:
1. Limit authentication requests
Limiting authentication requests reduces the chances of attackers exploiting internal weaknesses, particularly MFA fatigue.
Fewer prompts mean fewer opportunities for manipulation, helping teams stay alert to suspicious activity. When authentication becomes unusual or excessive, it acts as a red flag, making it easier for users to detect anomalies, including potential deepfake scam attempts.
Fewer, smarter authentication steps can significantly strengthen your organisation’s security posture.
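As a rough illustration of "fewer, smarter" prompts, the idea can be sketched as a per-user rate limiter on MFA push notifications. This is a minimal sketch with hypothetical names and thresholds; real identity providers expose this behaviour as policy configuration rather than code you would write yourself.

```python
import time
from collections import defaultdict, deque


class PromptThrottle:
    """Cap MFA push prompts per user in a sliding window to blunt MFA-fatigue attacks."""

    def __init__(self, max_prompts: int = 3, window_seconds: int = 300):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self._history = defaultdict(deque)  # user -> timestamps of recent prompts

    def allow_prompt(self, user: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        recent = self._history[user]
        # Discard prompts that have aged out of the sliding window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_prompts:
            # Excess prompts are a red flag: deny and surface for review
            # rather than letting the user approve out of fatigue.
            return False
        recent.append(now)
        return True
```

A burst of denied prompts is exactly the anomaly worth alerting on, since repeated unexpected pushes often indicate an attacker trying to wear a user down.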
2. Strong verification processes
Strong verification processes are the backbone of a business's defence against deepfake attacks. Multiple layers of identity confirmation make it harder for deepfake scammers to bypass security, so using multiple channels or methods to validate requests helps reduce the risk of acting on fraudulent communications.
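One way to picture multi-channel validation: a high-value request is only actionable once it has been confirmed on at least one channel independent of the one it arrived on. The sketch below is illustrative only; the threshold and channel names are assumptions, not a real workflow API.

```python
from dataclasses import dataclass, field

# Hypothetical policy: requests above this amount need out-of-band confirmation.
APPROVAL_THRESHOLD = 10_000


@dataclass
class PaymentRequest:
    requester: str
    amount: float
    origin_channel: str               # e.g. "email" or "video_call"
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation received on the given channel."""
        self.confirmations.add(channel)

    def is_verified(self) -> bool:
        if self.amount < APPROVAL_THRESHOLD:
            return True
        # A deepfaked call or spoofed email controls only one channel,
        # so require confirmation on a different, pre-agreed channel
        # (e.g. a callback to a number already on file).
        independent = self.confirmations - {self.origin_channel}
        return len(independent) >= 1
```

The design point is that the confirming channel must be chosen by the verifier from records they already hold, never from contact details supplied in the suspicious request itself.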
3. AI detection
Improving internal understanding and awareness of AI-generated deepfakes also helps protect businesses. Trained staff can spot unnatural speech patterns, mismatched audio, or unnatural facial movements, all common tells in deepfake audio and video. Using systems to analyse communications enables early detection of AI manipulation. This reduces the risk of fraud, protects sensitive information, and maintains trust.
Empowering employees with training to recognise deepfakes strengthens cybersecurity and supports a more resilient organisation.
4. Employee education
Employee education is crucial in defending against deepfake scams. Training gives internal teams the tools to spot deepfakes and other unusual requests. It promotes a culture of caution and verification across the business, encouraging staff at all levels to follow strict confirmation protocols, not just in internal communication systems but across social media too.
5. Testing
Testing is crucial in preventing deepfake scams. Simulated attacks train employees to recognise and respond to threats, and reveal how well teams react under pressure. Regular social engineering testing also highlights weaknesses in security and verification processes, providing valuable insights into your business's defences and keeping protocols strong and effective.
Avoid deepfake scams with penetration testing from OnSecurity
OnSecurity's expert penetration testing services blend simulated attacks with professional assessment to provide your organisation with actionable insights and effective recommendations.
Our CREST-accredited team of pentesters specialises in identifying phishing techniques and deepfake scams. Combined with continuous vulnerability scanning and advanced threat intelligence, we monitor your environment to detect risks early, helping you stay one step ahead of scams and other emerging threats.
Got a query? Contact us today for a quick and comprehensive response.