Startups

US senators demand answers from X, Meta, Alphabet and more on sexual deepfakes

January 15, 2026

The tech industry’s non-consensual sexual deepfakes problem is now bigger than just X.

In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit and TikTok, several US senators are asking the companies to provide evidence that they have “robust protections and policies” in place and explain how they intend to curb the rise of sexual deepfakes on their platforms.

The senators also asked the companies to preserve all documentation and information related to the creation, detection, management, and monetization of AI-generated sexual images, as well as related policies.

The letter comes hours after X announced updates to Grok that prohibit editing images of real people into revealing clothing and restrict image creation and editing through Grok to paid subscribers. (X and xAI are part of the same company.)

The senators pointed to media reports about how easily and frequently Grok generated sexual and nude images of women and children, and noted that the platform’s guardrails against posting non-consensual sexual images may not be enough.

“We recognize that many companies maintain policies against non-consensual intimate images and sexual exploitation, and that many AI systems claim to block explicit pornography. However, in reality, as seen in the examples above, users find ways to circumvent these guardrails. Or they fail,” the letter reads.

Grok, and by extension X, has been the most heavily criticized for enabling this trend, but other platforms are not immune.


Deepfakes first gained notoriety when a Reddit page hosting synthetic porn videos of celebrities went viral; the platform took the page down in 2018. Sexual deepfakes targeting celebrities and politicians continue to proliferate on TikTok and YouTube, though they typically originate elsewhere.

Meta’s Oversight Board last year took up two cases involving explicit AI images of female public figures, and the company removed ads for “nudify” apps from its services and later sued one such company, CrushAI. There have been reports of children spreading deepfakes of their peers on Snapchat. Telegram, which is not included in the senators’ letter, is also notorious for hosting bots that “undress” photos of women.

In response to the letter, X pointed to announcements regarding updates to Grok.

“We do not and will not allow any non-consensual intimate media (NCIM) on Reddit, we do not provide the tools to create it, and we take aggressive steps to find and remove it,” a Reddit spokesperson said in an emailed statement. “Reddit strictly prohibits NCIM, which includes fabricated or AI-generated depictions. We also prohibit soliciting this content from others, sharing links to ‘naked’ apps, or discussing how to create this content on other platforms,” the spokesperson added.

Alphabet, Snap, TikTok and Meta did not respond to requests for comment.

The letter asks companies to provide the following information:

  • Policy definitions for “deepfake” content, “non-consensual intimate images,” or similar terms.
  • An explanation of each company’s policy and enforcement approach to non-consensual AI deepfakes involving people’s bodies, non-nude photos, altered clothing, and “virtual undressing.”
  • A description of its current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
  • How current policies govern AI tools and image generators with respect to suggestive and intimate content.
  • What filters, guardrails, or countermeasures are in place to prevent the generation and distribution of deepfakes.
  • What mechanisms companies use to identify deepfake content and prevent it from being re-uploaded.
  • How companies prevent users from profiting from such content.
  • How platforms prevent non-consensual monetization of AI-generated content.
  • How a company’s terms of service allow it to ban or suspend users who post deepfakes.
  • What companies are doing to notify victims of non-consensual sexual deepfakes.

The letter is signed by Sen. Lisa Blunt Rochester (D-Del.), Sen. Tammy Baldwin (D-Wis.), Sen. Richard Blumenthal (D-Conn.), Sen. Kirsten Gillibrand (D-N.Y.), Sen. Mark Kelly (D-Ariz.), Sen. Ben Ray Luján (D-N.M.), Sen. Brian Schatz (D-Hawaii), and Sen. Adam Schiff (D-Calif.).

The move comes just one day after xAI owner Elon Musk said he was “not aware of any images of naked minors produced by Grok.” Amid mounting pressure from governments around the world, outraged that Grok’s lack of guardrails allowed this to happen, California’s attorney general launched an investigation into xAI’s chatbot late Wednesday.

xAI claimed it was “taking steps to remove illegal content [CSAM] on X.” However, neither the company nor Musk addressed the fact that Grok was able to produce such content in the first place.

The issue is not limited to non-consensual sexually manipulated images. Even AI-based image generation and editing services that do not let users “undress” a person can easily produce deepfakes. To name a few examples: OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children, Google’s Nano Banana appears to have generated an image of Charlie Kirk being shot, and racist videos created with Google’s AI video model have garnered millions of views on social media.

The problem becomes even more complex when Chinese image and video generation tools are involved. Many Chinese tech companies and apps, particularly those associated with ByteDance, offer easy ways to edit faces, audio, and video, and their outputs are spreading across Western social platforms. China has national synthetic-content labeling requirements that the United States lacks; instead, the American public relies on piecemeal and inconsistently enforced policies from the platforms themselves.

U.S. lawmakers have already passed legislation seeking to regulate deepfake pornography, but its impact has been limited. The Take It Down Act, signed into federal law in May, criminalizes the creation and distribution of non-consensual sexual images. However, the law’s provisions make it difficult to hold image-generation platforms accountable, as its enforcement focuses largely on individual users.

Meanwhile, many states are taking matters into their own hands to protect consumers and elections. New York Gov. Kathy Hochul this week proposed legislation that would require AI-generated content to be labeled as such and would ban non-consensual deepfake depictions of political candidates during specified periods leading up to elections.
