According to OpenAI, more than 1 million people consult ChatGPT every week about suicide.

By user | October 27, 2025 | 4 min read

OpenAI on Monday released new data showing how many of ChatGPT’s users are struggling with mental health issues and consulting the AI chatbot about them. The company says that in any given week, 0.15% of ChatGPT’s active users engage in “conversations that include clear signs of potential suicidal plans or intentions.” Considering ChatGPT has over 800 million weekly active users, this equates to over 1 million users per week.
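As a quick back-of-envelope check, the two figures OpenAI cites do line up: a minimal sketch, assuming only the approximate numbers reported above (0.15% of roughly 800 million weekly active users), puts the count at about 1.2 million people per week.

```python
# Back-of-envelope check of the figures reported above.
# Both inputs are OpenAI's own approximate estimates, not exact counts.
weekly_active_users = 800_000_000   # "over 800 million weekly active users"
flagged_share = 0.0015              # 0.15% of active users in a given week

flagged_per_week = weekly_active_users * flagged_share
print(f"~{flagged_per_week:,.0f} users per week")  # ~1,200,000
```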

The company says a similar proportion of users exhibit “increased levels of emotional attachment to ChatGPT,” and hundreds of thousands show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says these kinds of conversations are “extremely rare” on ChatGPT and therefore difficult to measure, but the company estimates they affect hundreds of thousands of people each week.

OpenAI shared this information as part of a broader announcement about recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT includes consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds better and more consistently than previous versions.”

In recent months, a number of reports have detailed how AI chatbots can harm users struggling with mental health issues. Researchers have previously found that AI chatbots can lead some users down paranoid rabbit holes, primarily by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health issues in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who expressed suicidal thoughts on ChatGPT in the weeks leading up to his suicide. California and Delaware attorneys general have also warned OpenAI that it needs to protect young people who use its products, which could thwart the company’s reorganization plans.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company was able to “mitigate serious mental health issues” in ChatGPT, without providing details. The data shared Monday appears to be evidence of that claim, but raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI will ease some restrictions and also allow adult users to initiate sexual conversations with AI chatbots.

In Monday’s announcement, OpenAI claimed that the recently updated version of GPT-5 produces “desirable responses” to mental health issues roughly 65% more often than the previous version. In an evaluation of responses to conversations about suicidal thoughts, OpenAI said the new GPT-5 model was 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.

The company also says the latest version of GPT-5 better maintains OpenAI’s safeguards in long conversations; OpenAI has previously acknowledged that these safeguards become less effective as conversations grow longer.

In addition to these efforts, OpenAI says it is adding new assessments to measure some of the most serious mental health issues facing ChatGPT users. The company said baseline safety testing of the AI model will include benchmarks for emotional dependence and non-suicidal mental health emergencies.

OpenAI recently rolled out more controls for parents of children who use ChatGPT. The company says it is also building an age-prediction system to automatically detect children using ChatGPT and impose stricter protective measures.

Still, it’s unclear how long the mental health challenges surrounding ChatGPT will last. Although GPT-5 appears to be an improvement over previous AI models in terms of safety, some ChatGPT responses still appear to fall into what OpenAI deems “undesirable.” OpenAI also continues to make older, less safe AI models, including GPT-4o, available to millions of paying subscribers.

If you or someone you know needs help, call or text the 988 Suicide and Crisis Lifeline at 988 (formerly the National Suicide Prevention Lifeline, 1-800-273-8255), or text HOME to 741-741 to reach the Crisis Text Line for 24-hour support. If you are outside the United States, visit the International Association for Suicide Prevention for a database of resources.

