Deepfake attacks could cost you more than money
In this Help Net Security interview, Camellia Chan, CEO at X-PHY, discusses the dangers of deepfakes in real-world incidents, including their use in financial fraud and political disinformation. She explains AI-driven defense strategies and recommends updating incident response plans and internal policies, integrating detection tools, and ensuring compliance with regulations like the EU’s DORA to mitigate liability.
How have attackers used deepfakes in real-world incidents, even if hypothetically, and how plausible are those tactics becoming?
We’ve already seen deepfakes used in everything from financial fraud to political disinformation. One of the more alarming trends is impersonation scams, where attackers use synthetic audio or video to pose as CEOs or politicians.
A notable example occurred in the United Arab Emirates in early 2020, when a bank manager was tricked into transferring $35 million after receiving a phone call from someone he believed to be a company director. The fraudster used AI-based voice cloning to mimic the executive’s voice convincingly and backed up the request with emails and documentation that appeared legitimate. This case was one of the earliest and most high-profile examples of deepfake voice fraud in the financial sector.
This is just one example, but recently I’ve seen an increasing number of reports where companies were tricked into transferring large sums of money based on deepfaked video calls – some of our partners, customers, and even my internal staff have highlighted this as a concern. So clearly, these are no longer hypotheticals – they’re happening now, and the tools to create them are increasingly accessible.
The tactics are highly plausible because they exploit our trust in visual and auditory information. Remember the saying, seeing is believing? We can’t even say that anymore. As long as people rely on what they see and hear as evidence, these attacks will be both effective and difficult to detect without the right tools.
What role does AI play in defending against deepfakes? Are there promising models or architectures specifically designed for this?
AI is both the problem and the solution when it comes to deepfakes. On one hand, it powers the creation of synthetic media. On the other hand, it’s our best line of defense. Advanced machine learning models, especially multi-modal AI, are becoming increasingly effective at spotting subtle, sophisticated signs of manipulation – from unnatural blinking and facial inconsistencies to mismatched audio-visual cues. The value of AI lies in its ability to provide real-time protection, with better privacy and faster response times – crucial as threats become more targeted and dynamic.
Some promising AI models include Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and Gated Recurrent Units (GRUs). CNNs analyze minute details in visual data, while LSTMs and GRUs are memory-based models that track consistency over time, such as audio-visual syncing.
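To make that more concrete, here is a minimal sketch of how such a hybrid detector might be wired together: a CNN extracts per-frame features and a GRU scores their consistency across time. It assumes PyTorch, and the layer sizes, frame count, and input resolution are illustrative choices, not any particular product’s architecture.

```python
# Illustrative sketch (not a specific vendor's implementation): a per-frame CNN
# feature extractor feeding a GRU that scores temporal consistency of a clip.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Toy CNN + GRU detector: per-frame features, then temporal consistency."""
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # CNN branch: picks up per-frame artifacts (blending seams, texture noise).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # GRU branch: tracks how features evolve across frames (blinking, lip sync).
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, clip):
        # clip: (batch, frames, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.gru(feats)
        return torch.sigmoid(self.head(last_hidden[-1]))  # estimated P(fake)

# Illustrative usage: score a random 16-frame, 112x112 "clip".
model = DeepfakeDetector()
clip = torch.rand(1, 16, 3, 112, 112)
print("P(fake) =", model(clip).item())
```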
Deepfake detection is also increasingly being integrated into broader security ecosystems, where every layer – from hardware to data to content – acts as a checkpoint for authenticity, adding a vital layer of trust. By combining deepfake detection with robust endpoint security, organizations can ensure that every device is equipped to verify the integrity of digital communications quickly, privately, and without the need to transmit sensitive content to the cloud.
How should organizations update their incident response plans to include deepfake scenarios?
Treat deepfakes like any other cyber threat and apply a zero-trust mindset. That means don’t assume anything is real just because it looks or sounds convincing.
Update your response plan to include steps for verifying video or audio content, especially if it’s being used to request sensitive actions. Build a risk model that considers how deepfakes could be used to target critical business processes, such as executive communications, financial approvals, or customer interactions. Make sure your team knows how to spot red flags, who to alert, and how to document the incident.
Use detection tools that can scan media in real time and save flagged content for review. The faster you can identify and act, the more damage you can prevent. In today’s environment, it’s safer to question first and trust only after you verify.
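As a rough illustration of that scan-flag-preserve step, the sketch below wraps a placeholder scoring function with logic that quarantines flagged files and writes a small record for reviewers. The score_media function, directory names, and threshold are assumptions standing in for whatever detection tooling an organization actually uses.

```python
# Minimal sketch of "scan, flag, preserve" for suspect media.
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

REVIEW_DIR = Path("flagged_media")   # quarantine area preserved for responders
FLAG_THRESHOLD = 0.8                 # illustrative; tune to false-positive tolerance

def score_media(path: Path) -> float:
    """Placeholder: return the probability that the file is synthetic."""
    return 0.0  # swap in a real detection model or vendor API here

def scan_and_preserve(path: Path) -> bool:
    """Score a media file; if flagged, copy it aside and log an incident record."""
    score = score_media(path)
    if score < FLAG_THRESHOLD:
        return False
    REVIEW_DIR.mkdir(exist_ok=True)
    shutil.copy2(path, REVIEW_DIR / path.name)          # preserve the evidence
    record = {
        "file": path.name,
        "score": score,
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "next_step": "verify via an out-of-band channel before acting",
    }
    (REVIEW_DIR / f"{path.stem}.json").write_text(json.dumps(record, indent=2))
    return True
```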
What internal policies should organizations put in place to mitigate the risk of deepfake attacks?
Organizations should put clear policies in place around verification, detection, and escalation. Any sensitive request – involving money, credentials, or confidential data – should require extra verification, like a call-back or secondary approval.
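One way to make that rule explicit is sketched below. This is a hedged illustration, not any specific policy engine: the categories, dollar threshold, and out-of-band verification step are assumptions meant only to show how a “verify before acting” check can be written down.

```python
# Hedged sketch of a "call-back / secondary approval" rule for sensitive requests.
from dataclasses import dataclass

CALLBACK_THRESHOLD_USD = 10_000                     # illustrative cut-off
SENSITIVE_CATEGORIES = {"payment", "credentials", "confidential_data"}

@dataclass
class Request:
    category: str      # e.g. "payment", "credentials"
    amount_usd: float
    requester: str
    channel: str       # e.g. "video_call", "email", "phone"

def requires_out_of_band_verification(req: Request) -> bool:
    """Sensitive categories or large payments trigger a call-back to a number
    from the internal directory plus a secondary approver."""
    return (req.category in SENSITIVE_CATEGORIES
            or req.amount_usd >= CALLBACK_THRESHOLD_USD)

# A request that appears to come from an executive on a video call still gets
# paused for verification before any transfer is made.
req = Request("payment", 35_000_000, "company director", "video_call")
print(requires_out_of_band_verification(req))  # True
```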
Deepfake awareness should be built into regular training so employees can spot warning signs early. Detection tools should support teams by scanning and flagging suspicious media in real time, helping them make faster, safer decisions.
Incident response plans must also cover how to escalate, preserve evidence, and communicate if a deepfake is suspected.
At the end of the day, questioning unusual communications must become the norm, not the exception.
Is there a risk of liability or compliance exposure if a company falls victim to a deepfake? How should that be factored into planning?
Yes, absolutely – especially if data is leaked or money is lost. Regulators expect companies to take reasonable steps to prevent this kind of fraud. Under the EU’s Digital Operational Resilience Act (DORA), financial organizations must ensure operational resilience against cyber threats, and data protection laws such as the GDPR require them to safeguard personal data. A failure to anticipate or guard against deepfake-driven attacks could increase the risk of liability, fines, and reputational damage.
That’s why it’s important to include deepfakes in your cybersecurity and risk planning. Work with your legal team, update your processes, and make sure your systems and staff are ready. If something does happen, you want to be able to show you took it seriously and were prepared.