Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

Resolve AI, a startup led by former Splunk executives, reaches $1 billion Series A valuation

Establish a venture-backable company in a highly regulated field

Cursor continues acquisition spree with deal with Graphite

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » State attorneys general warns Microsoft, OpenAI, Google and other AI giants to correct ‘delusional’ output
Startups

State attorneys general warns Microsoft, OpenAI, Google and other AI giants to correct ‘delusional’ output

userBy userDecember 11, 2025No Comments3 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

Following a series of disturbing mental health incidents related to AI chatbots, a group of state attorneys general has written to top companies in the AI ​​industry, warning them that they risk violating state law if they don’t fix their “delusional output.”

The letter, signed by dozens of auditors from U.S. states and territories with the National Association of Attorneys General, calls on companies, including Microsoft, OpenAI, Google and 10 other large AI companies, to implement a variety of new internal safeguards to protect users. The letter also included Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.

The letter comes as a battle erupts between state and federal governments over AI regulation.

These safeguards include transparent third-party audits of extensive language models that look for signs of delusions or flattering thoughts, and new incident reporting procedures designed to notify users when chatbots produce psychologically harmful output. These third parties, including academic and civil society groups, should be allowed to “evaluate the system before release without retaliation and publish their findings without prior approval from the company,” the letter says.

“GenAI has the potential to change the way the world works in positive ways. But it also causes, and can cause, serious harm, especially to vulnerable populations,” the letter said, pointing to a number of well-known incidents in the past year where violence has been linked to excessive AI use, including suicides and murders. “In many of these incidents, the GenAI products produced sycophantic or delusional output that encouraged the user’s delusions or assured the user that they were not delusional.”

The AG also suggests that companies treat mental health incidents in the same way that technology companies treat cybersecurity incidents, with clear and transparent incident reporting policies and procedures.

Companies should develop and publish “a timeline for detecting and responding to sycophantic or delusional output,” the letter said. Similar to how they currently respond to data breaches, companies should “promptly, clearly, and directly notify users if they have been exposed to potentially harmful flattery or delusional output,” the letter says.

tech crunch event

san francisco
|
October 13-15, 2026

Another question is that companies develop “reasonable and appropriate safety tests” for GenAI models to “ensure that the models do not produce potentially harmful flattery or delusional output.” These tests should be conducted before the model is made available to the public, it added.

TechCrunch was unable to reach Google, Microsoft, and OpenAI for comment before publication. This article will be updated if we receive a response from the company.

Tech companies developing AI are far more well received at the federal level.

The Trump administration has been unashamedly pro-AI, with multiple attempts over the past year to pass a nationwide moratorium on state-level AI regulations. So far, these efforts have failed, in part because of pressure from state authorities.

Undeterred, President Trump announced on Monday that he plans to pass an executive order next week that would limit states’ ability to regulate AI. In a post on Truth Social, the president said he wants the CEO to stop AI from being “destroyed in its infancy.”


Source link

#Aceleradoras #CapitalRiesgo #EcosistemaStartup #Emprendimiento #InnovaciónEmpresarial #Startups
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleCastilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future
Next Article Google’s answer to the AI ​​arms race — Promote the man behind data center technology
user
  • Website

Related Posts

Resolve AI, a startup led by former Splunk executives, reaches $1 billion Series A valuation

December 20, 2025

Establish a venture-backable company in a highly regulated field

December 19, 2025

Cursor continues acquisition spree with deal with Graphite

December 19, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Resolve AI, a startup led by former Splunk executives, reaches $1 billion Series A valuation

Establish a venture-backable company in a highly regulated field

Cursor continues acquisition spree with deal with Graphite

Elon Musk’s $56 billion Tesla pay package reinstated by Delaware Supreme Court

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.