YouTube announced Tuesday that it will expand its similarity detection technology that identifies AI-generated deepfakes to a pilot group of government officials, political candidates, and journalists. Members of the pilot group will have access to tools that will allow them to detect abusive AI-generated content and request removal if they believe it violates YouTube’s policies.
The technology itself began rolling out to the roughly 4 million YouTube creators in the YouTube Partner Program last year after previous testing.
Similar to YouTube’s existing Content ID system, which detects copyrighted material in videos uploaded by users, the similarity detection feature looks for simulated faces created with AI tools. These tools can be used to spread misinformation or manipulate people’s perceptions of reality by leveraging deepfaked personas of politicians, government officials, and other prominent figures to say or do things in AI videos that they did not do in real life.
With the new pilot program, YouTube aims to balance users’ free expression with the risks posed by AI technology that can generate convincing likenesses of celebrities and other public figures.
“This expansion is really about the integrity of public conversation,” Leslie Miller, YouTube’s vice president of government affairs and public policy, said at a press conference ahead of Tuesday’s launch. “We know that the risk of AI impersonation is particularly high for people in public spaces. But while we provide this new shield, we are also careful in how it is used,” she said.

Miller explained that not every match flagged by the system will be removed on request. Instead, YouTube evaluates each request against its existing privacy guidelines to determine whether the content is parody, political criticism, or another protected form of expression.
The company said it supports the NO FAKES Act in Washington, D.C., which would regulate the use of AI to reproduce an individual’s voice or visual likeness without authorization, and is advocating for these protections at the federal level as well.
To use the new tool, eligible pilot testers must first verify their identity by uploading a selfie and a government ID. They can then create a profile, review any matches that appear, and request removal if necessary. YouTube says it will eventually be able to block uploads of violating content before they are published, and potentially allow monetization of those videos, similar to how its Content ID system works.
The company did not say which politicians or officials would be among the first testers, but said the goal is to make the technology widely available over time.

These AI videos are labeled as such, but the placement of these labels is inconsistent. For some videos, labels appear in the video description, while for videos that focus on more “sensitive topics,” labels are applied at the beginning of the video. This is the same approach YouTube takes for all its AI-generated content.
“There’s a lot of AI-generated content where the distinction doesn’t really matter to the content itself,” said Amjad Hanif, YouTube’s vice president of creator products, explaining the labels’ placement. “It could be an AI-generated cartoon. So there’s a judgment call as to whether that’s a category that warrants a very visible disclaimer,” he said.
YouTube is not currently disclosing how many AI deepfakes have been taken down through creators’ use of the detection technology, but said the amount of content removed to date is “very small.”
“I think a lot of [creators] are just aware of what’s being created, but most of it turns out to be fairly benign or even additive to their business overall, so the actual volume of takedown requests is really, really low,” Hanif said.
This may not apply to deepfakes of government officials, politicians, and journalists.
Over time, YouTube plans to bring its deepfake detection technology to more areas, including other intellectual property such as recognizable speaking voices and popular characters.
