YouTube draws a line on deepfakes involving politicians and journalists

With deepfakes becoming more common, YouTube has expanded access to its AI-driven likeness detection system to a pilot group of government officials, journalists and political candidates. The step follows an earlier rollout of the tool to creators in the company’s Partner Program.

AI video tools are easy to access, and the content they produce keeps getting more realistic, flooding social media platforms, including YouTube. Problems arise when that content moves beyond entertainment into fabricated material and misinformation. Help Net Security has previously reported on the issue, including its role in geopolitical conflicts.

“YouTube is where the world comes to understand the events shaping their lives, from breaking news to the debates that drive civic discourse. As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities,” Rene Ritchie, Head of Editorial & Creator Liaison at YouTube, said.

The tool works like Content ID, YouTube’s automated system that helps copyright owners find and manage their material on the platform, but it focuses on a person’s likeness, scanning AI-generated videos for impersonation.

If a match is found, individuals can review the content and request its removal when it violates privacy rules. Detection alone does not guarantee a takedown.

“YouTube has a long history of protecting free expression and content in the public interest, including preserving content such as parody and satire, even when used to critique world leaders or influential figures,” the company said in a blog post.

To prevent abuse and ensure the tool is used only by those it is designed to protect, participants must verify their identity before enrolling in likeness detection. The information submitted during setup is used solely for identity verification and to operate the safety feature, and is not used to train Google’s generative AI models, the company said.

YouTube also noted it will continue to advocate for stronger legal protections, backing legislation such as the NO FAKES Act to safeguard people’s likenesses and set standards for responsible AI use.

YouTube has not disclosed which politicians or officials are part of the first group of testers.
