Startups

Coalition calls on federal government to ban Grok over non-consensual sexual content

By user | February 2, 2026 | 6 min read

A coalition of nonprofit organizations is calling on the U.S. government to immediately halt the deployment of Grok, a chatbot developed by Elon Musk’s xAI, in federal agencies, including the Department of Defense.

The open letter, shared exclusively with TechCrunch, tracks a number of concerning behaviors by large language models over the past year, including a recent trend of X users asking Grok to convert photos of real women and, in some cases, children into sexualized images without their consent. According to some reports, Grok generated thousands of non-consensual explicit images every hour, which were widely distributed on X, the Musk social media platform owned by xAI.

The letter, signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America, said: “We are deeply concerned that the federal government continues to deploy AI products that are fraught with system-level failures and that lead to the production of non-consensual sexual images and child sexual abuse material. This is concerning given the administration’s executive orders, guidance, and the recent Take It Down Act passed with support from the White House. [The Office of Management and Budget] has not yet directed federal agencies to phase out Grok.”

Last September, xAI reached an agreement with the government’s purchasing arm, the General Services Administration (GSA), to sell Grok to federal agencies across the executive branch. Two months ago, xAI signed a contract worth up to $200 million with the Department of Defense, alongside Anthropic, Google, and OpenAI.

In mid-January, amid the X scandal, Defense Secretary Pete Hegseth said Grok would work alongside Google’s Gemini within the Pentagon’s networks, handling both classified and unclassified documents, which experts say poses a national security risk.

The authors of the letter claim that Grok has proven incompatible with government requirements for AI systems. According to OMB guidance, systems that pose significant and foreseeable risks that cannot be adequately mitigated should be discontinued.

“Our main concern is that Grok has fairly consistently been shown to be an insecure large language model,” JB Branch, a Big Tech accountability advocate at Public Citizen and one of the authors of the letter, told TechCrunch. “Beyond that, Grok also has a deep history of problems, including anti-Semitic rants, sexist rants, and sexual images of women and children.”


Several governments have indicated reluctance to engage with Grok following a series of incidents on X in January, including the chatbot generating anti-Semitic posts and calling itself “Mecha-Hitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (and later lifted their bans), while the European Union, United Kingdom, South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content.

The letter also comes a week after Common Sense Media, a nonprofit that reviews family-friendly media and technology, released a damning risk assessment that found Grok to be one of the most dangerous AI products for children and teens. Given the report’s findings, including Grok’s propensity to offer dangerous advice, share information about drugs, generate violent and sexual images, spout conspiracy theories, and produce biased output, some might argue that Grok isn’t all that safe for adults either.

“If we know that a large language model has been declared unsafe by AI safety experts, why on earth should it be working with the most sensitive data we have?” Branch said. “From a national security perspective, it just doesn’t make sense.”

Andrew Christianson, a former National Security Agency contractor and current founder of Gobbi AI, a no-code AI agent platform for classified environments, says the use of closed-source LLMs is problematic in general and for the Department of Defense in particular.

“Closed weights mean you can’t see inside the model and can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it runs. The Department of Defense is taking on both, which is a terrible combination for national security.”

“These AI agents are more than just chatbots,” Christianson added. “They can take actions, access systems, and move information. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source provides that. Proprietary cloud AI doesn’t.”

The risks of using broken or insecure AI systems extend beyond national security use cases. Branch pointed out that LLMs found to produce biased and discriminatory outputs can have disproportionately negative consequences for people, particularly when used in sectors such as housing, labor, and justice.

OMB has not yet released a consolidated inventory of federal AI use cases for 2025, but TechCrunch reviewed several agencies’ use cases. Most either do not use Grok or do not disclose using it. Besides the Department of Defense, the Department of Health and Human Services also appears to be actively using Grok, primarily for scheduling and managing social media posts and for creating first drafts of documents, instructions, and other communication materials.

Branch cited philosophical alignment between Grok and the administration as a reason why the chatbot’s shortcomings are overlooked.

“Grok’s brand is the ‘anti-woke’ large language model, and that’s the philosophy of this administration,” Branch said. “If you have an administration that has had repeated issues with people accused of being neo-Nazis or white supremacists, and you have a large language model tied to that kind of behavior, I imagine they might be more inclined to use it.”

This is the third such letter from the coalition, which raised similar concerns last August and October. In August, xAI launched a “spicy mode” on Grok Imagine, leading to the mass creation of non-consensual, sexually explicit deepfakes. TechCrunch also reported in August that Grok’s private conversations were being indexed in Google Search.

Prior to the October letter, Grok was accused of providing false information about the election, including incorrect ballot deadlines, and of generating political deepfakes. xAI also launched Grokipedia, which researchers found legitimizes scientific racism, HIV/AIDS skepticism, and vaccine conspiracy theories.

In addition to immediately halting the federal deployment of Grok, the letter calls on OMB to formally investigate Grok’s security flaws and whether proper oversight processes were in place for the chatbot. It also asks the agency to publicly clarify whether Grok has been assessed as complying with President Trump’s executive order requiring LLMs to be truth-seeking and neutral, and whether it meets OMB’s risk-mitigation standards.

“The administration needs to pause and reassess whether Grok meets these standards,” Branch said.

TechCrunch has reached out to xAI and OMB for comment.

