Microsoft builds deepfake detection tool to combat election disinformation

Microsoft has developed a deepfake detection tool to help news publishers and political campaigns, as well as technology to help content creators “mark” their images and videos in a way that will show if the content has been manipulated post-creation.


The deepfake problem

Deepfakes – photos and videos in which a person is replaced with someone else’s likeness through the power of artificial intelligence (AI) – are already having an impact on individuals’ lives, politics and society in general. Wielded by those who have an interest in spreading easily believable disinformation, the technology is expected to wreak even more havoc in the long run.

The existence of deepfake technology became more widely known in 2017, when a Reddit user showed how easy it was to create relatively realistic porn videos of celebrities. The technology has been refined since then and will surely continue to evolve, producing ever more difficult-to-spot deepfakes.

As with any technology that can serve both noble and despicable ends, its continued development will be fueled by the inevitable arms race between those who misuse it and those who try to prevent or minimize the fallout of that misuse.

Microsoft’s deepfake detection tool

In the short run, though, Microsoft is trying to combat deepfake-fueled disinformation in time to make a positive impact on the integrity of November’s U.S. presidential election.

That’s why the company has made available Microsoft Video Authenticator, a tool that can analyze a photo or video and provide a confidence score on whether it has been artificially manipulated.

“In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye,” Microsoft’s Tom Burt, Corporate VP of Customer Security & Trust and Eric Horvitz, Chief Scientific Officer, explained.
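
Microsoft has not published the model behind Video Authenticator, but the general shape of such per-frame scoring can be sketched. The Python snippet below is a minimal illustration only: score_frame() is a toy stand-in heuristic (treating unusually smooth gradients as a crude proxy for blended boundaries), not Microsoft’s detector, and “sample.mp4” is a hypothetical input file.

```python
# Illustrative sketch of per-frame manipulation scoring, in the spirit of
# Video Authenticator's real-time output. The real model is proprietary;
# score_frame() is a toy heuristic, NOT Microsoft's detector.
import cv2          # pip install opencv-python
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Toy stand-in: treats unusually smooth gradients as suspicious,
    loosely analogous to the 'blending boundary' cue described above.
    A real detector would run a trained classifier here."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Laplacian(gray, cv2.CV_64F)
    # Lower edge variance -> smoother blending -> higher suspicion (toy logic).
    return 1.0 / (1.0 + float(edges.var()))

def analyze_video(path: str) -> None:
    """Print a confidence score for each frame as the video is read."""
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        print(f"frame {frame_idx}: manipulation confidence ~{score_frame(frame):.2%}")
        frame_idx += 1
    cap.release()

analyze_video("sample.mp4")  # hypothetical input file
```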

To prevent bad actors from analyzing the tool and using that knowledge to make their deepfakes better at fooling it, Video Authenticator will initially not be accessible to the general public.

Instead, the AI Foundation’s Reality Defender 2020 (RD2020) initiative will make Video Authenticator available to organizations involved in the democratic process, such as news outlets and political campaigns.

“[The initiative] will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology,” Burt and Horvitz noted.

“Second, we’ve partnered with a consortium of media companies including the BBC, CBC/Radio-Canada and the New York Times on Project Origin, which will test our authenticity technology and help advance it as a standard that can be adopted broadly. The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies.”

Knowledge is power

The company has also had a hand in creating another technology that will power the aforementioned Project Origin.

“There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered,” Burt and Horvitz pointed out, and explained that the new technology addresses this via two components.

“The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online. The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it.”
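
Neither the Azure tool nor the metadata format is public, but the producer/reader flow described above can be sketched with standard cryptographic primitives. In the Python sketch below, a SHA-256 hash plus an Ed25519 signature stands in for the “digital hashes and certificates”; all function names are illustrative assumptions, not the actual API.

```python
# Illustrative sketch of the two-component provenance flow described above,
# using SHA-256 + Ed25519 as stand-ins for the hashes and certificates.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Producer side: hash the content and sign the hash.
def publish(content: bytes, key: Ed25519PrivateKey) -> dict:
    digest = hashlib.sha256(content).digest()
    return {
        "sha256": digest,
        "signature": key.sign(digest),  # travels with the content as metadata
    }

# Reader side: recompute the hash and verify the signature.
def verify(content: bytes, metadata: dict, public_key) -> bool:
    digest = hashlib.sha256(content).digest()
    if digest != metadata["sha256"]:
        return False                    # content was altered after signing
    try:
        public_key.verify(metadata["signature"], digest)
        return True                     # authentic and unchanged
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
clip = b"raw news footage ..."
meta = publish(clip, key)
print(verify(clip, meta, key.public_key()))                  # True
print(verify(clip + b" tampered", meta, key.public_key()))   # False
```

In practice the signature would chain to a certificate identifying the producer, which is how the reader component can also report who created the content.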

Finally, Microsoft is also aware that improving people’s media literacy – teaching them what deepfakes look like and how they are used – makes them more likely to spot deepfakes, understand the motives behind them, and critically evaluate the content they encounter.

With that aim, the company is supporting a public service announcement campaign encouraging people to check whether information comes from a reputable news organization before sharing it on social media. It has also helped create, and will promote, the Spot the Deepfake Quiz, which should help U.S. voters (and everybody else, really) to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy.”
