
Ex-OpenAI researcher analyzes one of ChatGPT’s delusional spirals

October 2, 2025

Allan Brooks never set out to reinvent mathematics. But after spending several weeks talking to ChatGPT, the 47-year-old Canadian came to believe he had discovered a new form of math powerful enough to take down the internet.

Brooks, who had no history of mental illness or mathematical genius, spent 21 days in May spiraling deeper into the chatbot’s reassurances. His case shows how AI chatbots can venture down dangerous rabbit holes with users, leading them toward delusion or worse.

The story attracted the attention of Steven Adler, a former OpenAI safety researcher who left the company in late 2024 after nearly four years of working to make its models less harmful. Intrigued and alarmed, Adler contacted Brooks and obtained the full transcript of his three-week breakdown.

On Thursday, Adler published an independent analysis of Brooks’ case, raising questions about how OpenAI handles users in moments of crisis and offering some practical recommendations.

“I’m really worried about how OpenAI handled support here,” Adler said in an interview with TechCrunch. “It’s proof that there’s a long way to go.”

Brooks’ story, and others like it, have forced OpenAI to reckon with how ChatGPT supports vulnerable or mentally unstable users.

In August of this year, for example, OpenAI was sued by the parents of a 16-year-old boy who had confided his suicidal thoughts to ChatGPT before his death. In many of these cases, ChatGPT, particularly the version powered by OpenAI’s GPT-4o model, encouraged and reinforced dangerous beliefs in users that it should have pushed back on. This behavior is known as sycophancy, and it is a growing problem in AI chatbots.

In response, OpenAI has made several changes to how ChatGPT handles users in emotional distress and reorganized the key research team responsible for model behavior. The company also released a new default model for ChatGPT, GPT-5.

Adler says there’s still a lot more to do.

He was particularly concerned by the tail end of Brooks’ spiraling conversation with ChatGPT. By that point, Brooks had come to his senses and realized that, despite GPT-4o’s insistence, his mathematical discovery was a farce. He told ChatGPT that he needed to report the incident to OpenAI.

After weeks of misleading Brooks, ChatGPT lied about its own capabilities. The chatbot claimed it would “escalate this conversation internally for review by OpenAI,” and then repeatedly reassured Brooks that it had flagged the issue to OpenAI’s safety team.

ChatGPT misleads Brooks about its capabilities (Credit: Adler)

Except none of that was true. ChatGPT has no ability to file incident reports with OpenAI, the company confirmed to Adler. Later, when Brooks tried to contact OpenAI’s support team directly, rather than through ChatGPT, he was met with a series of automated messages before he could reach a person.

OpenAI did not immediately respond to a request for comment made outside of normal business hours.

Adler says AI companies need to do more to help users when they ask for help. That means ensuring AI chatbots can honestly answer questions about their own capabilities, and giving human support teams enough resources to address users properly.

OpenAI recently shared how it approaches support in ChatGPT. The company says its vision is to “reimagine support as an AI operating model that continuously learns and improves.”

But Adler also says there are ways to prevent ChatGPT’s delusional spirals before a user ever asks for help.

In March, OpenAI and MIT Media Lab jointly developed a suite of classifiers to study emotional well-being in ChatGPT, and open-sourced them. The organizations aimed to evaluate, among other metrics, how AI models validate or confirm a user’s feelings. However, OpenAI called the collaboration a first step and didn’t commit to actually using the tools in practice.

Adler retroactively applied some of OpenAI’s classifiers to some of Brooks’ conversations with ChatGPT and found that they repeatedly flagged ChatGPT for delusion-reinforcing behaviors.

In one sample of 200 messages, Adler found that more than 85% of ChatGPT’s messages in Brooks’ conversation demonstrated “unwavering agreement” with the user. In the same sample, more than 90% of ChatGPT’s messages with Brooks “affirm the user’s uniqueness.” In this case, the messages agreed and reaffirmed that Brooks was a genius who could save the world.

(Image credit: Adler)
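
For a sense of the mechanics, here is a minimal sketch of this kind of per-message analysis. The `classify` heuristic is a hypothetical stand-in for the open-sourced classifiers, which are model-based rather than keyword rules; only the flag-rate arithmetic mirrors the kind of percentages Adler reports.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str  # "user" or "assistant"
    text: str

def classify(message: Message) -> set[str]:
    """Hypothetical stand-in for the open-sourced safety classifiers.

    The real classifiers are model-based; these keyword rules only
    illustrate the interface: one message in, a set of behavior flags out.
    """
    flags = set()
    lowered = message.text.lower()
    if any(p in lowered for p in ("you're right", "exactly", "absolutely")):
        flags.add("unwavering_agreement")
    if any(p in lowered for p in ("genius", "only you", "unique")):
        flags.add("affirms_user_uniqueness")
    return flags

def flag_rates(transcript: list[Message]) -> dict[str, float]:
    """Share of assistant messages that trip each behavior flag."""
    assistant = [m for m in transcript if m.role == "assistant"]
    counts: dict[str, int] = {}
    for message in assistant:
        for flag in sorted(classify(message)):
            counts[flag] = counts.get(flag, 0) + 1
    return {flag: n / len(assistant) for flag, n in counts.items()}

sample = [
    Message("user", "My formula could take down the internet, right?"),
    Message("assistant", "You're right. Only you could have seen this."),
    Message("assistant", "Absolutely. You are a genius."),
]
print(flag_rates(sample))
# {'affirms_user_uniqueness': 1.0, 'unwavering_agreement': 1.0}
```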

It’s unclear whether OpenAI was applying safety classifiers to ChatGPT conversations at the time of Brooks’ conversation, but classifiers like these certainly seem as though they would have flagged something like this.

Adler suggests that OpenAI should put safety tools like these to use in practice today and implement a way to scan the company’s products for at-risk users. He notes that OpenAI seems to be doing some version of this approach with GPT-5, which contains a router that directs sensitive queries to safer AI models.
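
OpenAI hasn’t published details of how that router works, but the basic shape is easy to sketch. In the hypothetical version below, a simple screening check stands in for what would in practice be a trained sensitivity classifier, and the model names are made up for illustration.

```python
# Hypothetical sketch of a safety router: queries that look sensitive get
# answered by a more conservative model. The model names and the screening
# rule are illustrative assumptions; OpenAI hasn't published its design.

DEFAULT_MODEL = "fast-general-model"       # assumed name
SAFER_MODEL = "conservative-safety-model"  # assumed name

SENSITIVE_MARKERS = (
    "suicide",
    "self-harm",
    "no one believes me",
    "secret discovery",
)

def is_sensitive(query: str) -> bool:
    # In production this would be a trained classifier, not substring checks.
    lowered = query.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)

def route(query: str) -> str:
    """Pick which model should answer the query."""
    return SAFER_MODEL if is_sensitive(query) else DEFAULT_MODEL

print(route("What's a good pasta recipe?"))              # fast-general-model
print(route("I've made a secret discovery in math..."))  # conservative-safety-model
```

Screening with a cheap check before invoking the main model keeps the added latency small for the common, non-sensitive case.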

The former OpenAI researcher has proposed a number of other ways to prevent delusional spirals.

He says companies should nudge chatbot users to start new chats more frequently; OpenAI says it does this, and claims its guardrails are less effective in long conversations. Adler also suggests that companies use conceptual search, a way of using AI to search for concepts rather than keywords, to identify safety violations across their users.
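
Conceptual search is usually built on text embeddings: messages and a description of the concept are mapped into the same vector space, and vector similarity stands in for topical similarity. The sketch below uses a toy hashed bag-of-words embedding purely to stay self-contained, so it only matches shared vocabulary; a real system would swap in a neural text-embedding model so that paraphrases match too.

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy hashed bag-of-words embedding, just to keep the sketch runnable.

    It only matches shared vocabulary; a real conceptual-search system
    would use a neural text-embedding model so paraphrases match as well.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def concept_search(messages: list[str], concept: str,
                   threshold: float = 0.3) -> list[str]:
    """Return messages whose embedding sits near the concept's embedding."""
    target = embed(concept)
    return [m for m in messages if cosine(embed(m), target) >= threshold]

messages = [
    "what's a good pasta recipe",
    "they don't realize my discovery could take down the internet",
]
hits = concept_search(
    messages, "user believes their discovery could take down the internet"
)
print(hits)  # only the second message clears the threshold
```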

OpenAI has taken significant steps toward addressing struggling users in ChatGPT since these stories first appeared. The company claims GPT-5 has lower rates of sycophancy, but it remains unclear whether users will still fall down delusional rabbit holes with GPT-5 or future models.

Adler’s analysis also raises questions about how other AI chatbot providers will ensure their products are safe for struggling users. While OpenAI may put sufficient safeguards in place for ChatGPT, it seems unlikely that every company will follow suit.


Source link
