2020: A year of deepfakes and deep deception

Over the past year, deepfakes, realistic yet fake or manipulated audio and video created with machine learning models, started making headlines as a major emerging cyber threat. The first examples of deepfakes seen by the general public were mainly amateur videos created using free deepfake tools, typically of celebrities’ faces superimposed onto pornographic videos.


Even though these videos were of fairly low quality and could reasonably be distinguished as fake, people understood the potential impact this new technology could have on our ability to separate fact from fiction. This is of particular concern in the world of politics, where deepfake technology can be weaponized against political figures or parties to manipulate public perception and sway elections, or even the stock market.

A few years ago, deepfake technology would have been limited to nation-states with the resources and advanced technology needed to develop such a powerful tool. Now, because deepfake toolkits are freely available and easy to learn, anyone with internet access, time, and a motive can churn out deepfake videos in real time and flood social media channels with fake content.

Also, as the toolkits become smarter, they require less source material to generate fake content. The earlier generation of tools required hours of video and audio – large data sets – for the machine to analyze and then manipulate. This meant people in the spotlight, such as politicians, celebrities, high-profile CEOs, or anyone with a large web presence, had a higher chance of being spoofed. Now a video can be fabricated from a single photo. In a future where all it takes is your Facebook profile image and an audio soundbite from one of your Instagram stories, everybody becomes a target.

The reality of non-reality

Deepfakes are so powerful because they subvert a basic human understanding of reality: if you see it and hear it, it must be real. Deepfakes untether truth from reality. They also elicit an emotional response: if you see something upsetting and later find out it was fake, you have still had a negative emotional reaction and formed subconscious associations between what you saw and how you feel.

This October, Governor Gavin Newsom signed California’s AB 730, known as the “Anti-Deepfake Bill,” into law with the intent to quell the spread of malicious deepfakes before the 2020 election. While a laudable effort, the law itself falls flat. It imposes an artificial timeline, applying only to deepfake content distributed with “actual malice” within 60 days of an election. It exempts distribution platforms from the responsibility to monitor and remove deepfake content, instead relying on the producers of the videos to self-identify and claim ownership, and proving “actual malice” will not be a clear-cut process.

This law was likely not designed primarily to be enforced, but rather to serve as a first step by lawmakers to show they understand that deepfakes are a serious threat to democracy and that this battle is just beginning. Ideally, it will influence and inform other state and federal efforts and serve as a starting template for more effective and enforceable legislation.

Deepfake technology as a business threat in 2020

To date, most of the discussion around deepfake technology has centered on its potential for misinformation campaigns and mass manipulation fueled through social media, especially in the realm of politics. 2020 will be the year we start to see deepfakes become a real threat to the enterprise, one that cyber defense teams are not yet equipped to handle.

Spearphishing targets high-level employees, typically to trick them into completing a manual task such as paying a fake invoice, sending physical documents, or manually resetting a user’s credentials for the cybercriminal. These attacks tend to be more difficult to detect from a technology perspective, as the email doesn’t contain any suspicious links or attachments, and they are commonly used in conjunction with business email compromise (BEC) attacks, in which hackers gain control of an employee’s email account and can send emails from a legitimate address. According to the FBI, BEC attacks have cost organizations worldwide more than $26 billion over the past three years.

Deepfakes have the ability to supercharge these attacks. Imagine receiving an email from your company’s CEO asking you to take some financial action, then receiving a follow-up text message from the CEO’s mobile number, and finally a voicemail in the CEO’s voice, addressing you by name and referencing previous conversations you’ve had with them.

There comes a point where the attack breaks the truth barrier: it makes more sense to accept the request as real and authentic than to consider the possibility that it’s fake. And as deepfake technology advances even further, it’s easy to imagine a scenario where you are on a video call with what you think is your CEO but is actually a deepfake being generated in real time. Earlier this year, a CEO was deceived by an AI-generated voice into transferring $243,000 to a bank account he believed belonged to a company supplier.

Currently, the security industry has no appliances, email filters, or other technology to defend against deepfakes. There is progress being made, however. For example, Facebook, Microsoft, and university researchers launched the Deepfake Detection Challenge as a rallying cry to jumpstart the development of open source deepfake detection tools, and the Defense Advanced Research Projects Agency (DARPA) announced the Semantic Forensics (SemaFor) program, which aims to develop “semantic forensics” as a complement to the statistical detection techniques used in the past.
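
To make the “statistical detection” idea concrete, here is a minimal sketch of the frame-level approach most current detectors build on: sample frames from a suspect video and score each with a binary real-vs-fake image classifier, averaging the results. Everything specific here is an assumption for illustration – the weights file deepfake_detector.pt, the filename suspect_clip.mp4, and the choice of a fine-tuned ResNet-18 are all hypothetical placeholders, and production detectors add face cropping, temporal models, and ensembling on top of this.

```python
# Illustrative sketch only: a per-frame real-vs-fake scorer.
# Assumes PyTorch, torchvision, and OpenCV are installed, and that a
# binary classifier has already been trained (training is out of scope).
import cv2
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 with a single-logit head: sigmoid(logit) = P(frame is fake).
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical weights
model.eval()

def fake_probability(video_path: str, every_n: int = 30) -> float:
    """Average per-frame fake probability over every Nth frame."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(f"Estimated fake probability: {fake_probability('suspect_clip.mp4'):.2f}")
```

A detector like this keys on low-level pixel statistics, which is exactly why DARPA is pursuing semantic forensics as a complement: as generators improve, those statistical artifacts shrink, while semantic inconsistencies (mismatched earrings, impossible shadows) remain harder to fake.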

The only remedy that currently exists is to educate users about these new types of attacks and to stay alert for any behavior from a requester that seems out of the ordinary, no matter how small. Trust is no longer a luxury we can afford.
