Fyself News
YouTube extends AI deepfake detection to politicians, government officials, and journalists

By user · March 10, 2026

YouTube announced Tuesday that it will expand its similarity detection technology that identifies AI-generated deepfakes to a pilot group of government officials, political candidates, and journalists. Members of the pilot group will have access to tools that will allow them to detect abusive AI-generated content and request removal if they believe it violates YouTube’s policies.

The technology itself began rolling out to the roughly 4 million YouTube creators in the YouTube Partner Program last year after previous testing.

Similar to YouTube’s existing Content ID system, which detects copyrighted material in videos uploaded by users, the similarity detection feature looks for simulated faces created with AI tools. These tools can be used to spread misinformation or manipulate people’s perceptions of reality by leveraging deepfaked personas of politicians, government officials, and other prominent figures to say or do things in AI videos that they did not do in real life.

With the new pilot program, YouTube aims to balance users’ free expression against the risks posed by AI technology that can generate convincing likenesses of public figures.

“This expansion is really about the integrity of public conversation,” Leslie Miller, YouTube’s vice president of government affairs and public policy, said at a press conference ahead of Tuesday’s launch. “We know that the risk of AI impersonation is particularly high for people in public spaces. But while we provide this new shield, we are also careful in how it is used,” she said.

Image credit: YouTube

Miller explained that not all matches found will be removed if requested. Instead, YouTube evaluates each request based on existing privacy policy guidelines to determine whether the content is parody, a protected form of free expression, or political criticism.

The company said it supports the NO FAKES Act in Washington, D.C., which would regulate the use of AI to reproduce a person’s voice or visual likeness without authorization, and that it is advocating for these protections at the federal level as well.

To use the new tool, eligible pilot testers must first verify their identity by uploading a selfie and a government ID. They can then create a profile, review the matches that surface, and request removal if necessary. YouTube says the tool will eventually be able to block violating uploads before they are published, and potentially let testers monetize those videos, similar to how its Content ID system works.

The company did not say which politicians or officials would be among the first testers, but said the goal is to make the technology widely available over time.

Image credit: YouTube

These AI videos are labeled as such, but the placement of these labels is inconsistent. For some videos, labels appear in the video description, while for videos that focus on more “sensitive topics,” labels are applied at the beginning of the video. This is the same approach YouTube takes for all its AI-generated content.

“There’s a lot of AI-generated content, but the differences don’t really matter to the content itself,” said Amjad Hanif, YouTube’s vice president of creator products, explaining the label’s positioning. “It could be an AI-generated cartoon. So I think there’s a judgment as to whether that’s a category worthy of a very visible disclaimer,” he said.

YouTube is not currently disclosing how many of these AI deepfakes have been removed by creators using the deepfake detection technology, but said the amount of content removed to date is “very small.”

“I think a lot of [creators] are just aware of what’s being created, but most of it turns out to be fairly benign or additive to the business as a whole, so the actual volume of takedown requests is really, really low,” Hanif said.

This may not apply to deepfakes of government officials, politicians, and journalists.

Over time, YouTube plans to bring its deepfake detection technology to more areas, including other intellectual property such as recognizable speaking voices and popular characters.

