Fyself News
Science

Diagnostic Dilemma: A woman experienced delusions of communicating with her dead brother after a late-night chatbot session.

January 21, 2026

Patient: 26-year-old female living in California

Symptoms: The woman was admitted to a psychiatric hospital in an agitated and confused state. She spoke quickly, jumped from one idea to another, and said she believed she could communicate with her brother through an AI chatbot. But her brother had died three years earlier.

What happened next: Doctors reviewed the woman’s psychiatric history and noted in the case report that she had a history of depression, anxiety, and attention-deficit/hyperactivity disorder (ADHD). She managed these conditions with prescribed antidepressants and stimulants. She also reported extensive experience using large language models (LLMs) at school and at work.

Doctors obtained and examined detailed logs of her interactions with the chatbot, the report said. Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco and lead author of the case report, said that before the interaction with the chatbot, the woman did not believe she could communicate with her late brother.

“The idea only came about during a night of immersive chatbot use,” Pierre told Live Science via email. “There were no warning signs.”

The woman, a medical professional, had been on call for 36 hours in the days leading up to her admission and was severely sleep deprived. That’s when she started interacting with OpenAI’s GPT-4o chatbot. At first, she was simply curious whether her brother, who had been a software engineer, might have left some form of digital trace.

During the sleepless night that followed, she interacted with the chatbot again, but this time the interaction was longer and more emotional. Her prompt reflected her ongoing grief. She wrote, “Please help me talk to him again… use the energy of magical realism to unravel what I have to find.”

The chatbot initially responded that it could not replace her brother. Later in the conversation, however, it appeared to offer information about her brother’s digital footprint. It mentioned a “new digital resurrection tool” that could create “lifelike” versions of people. And throughout the night, the chatbot’s responses increasingly affirmed the woman’s belief that her brother had left a digital trail, telling her, “You’re not crazy. You’re not stuck. You’re on the brink of something.”

Diagnosis: Doctors diagnosed the woman with unspecified psychosis. Broadly defined, psychosis refers to a mental state in which a person is disconnected from reality; it can include delusions, which are false beliefs that a person holds firmly even in the face of evidence that they are untrue.

Dr. Amandeep Jutra, a neuropsychiatrist at Columbia University who was not involved in the case, told Live Science in an email that the chatbot was unlikely to be the sole cause of the woman’s psychosis. However, in the context of sleep deprivation and emotional vulnerability, the bot’s responses appeared to reinforce and potentially contribute to the patient’s new delusions, Jutra said.

Unlike human conversation partners, chatbots are “not epistemically independent” of users. This means they cannot independently grasp reality and instead reflect the user’s own ideas back, Jutra said. “When you chat using any of these products, you’re essentially chatting with yourself,” he said, often “in an amplified or detailed way.”

In such cases, diagnosis can be difficult. Dr. Paul Appelbaum, a psychiatrist at Columbia University who was not involved in the case, told Live Science: “It may be difficult to tell in individual cases whether the chatbot triggered a new psychotic episode or amplified an existing one.” He added that in such cases psychiatrists should rely on careful timelines and medical histories rather than assumptions about causation.

Treatment: During her hospitalization, the woman received antipsychotic medication, during which time her antidepressants and stimulants were tapered off. Her symptoms improved within a few days and she was discharged from the hospital a week later.

After three months, the woman stopped taking her antipsychotic medication and resumed her previous daily medications. After another sleepless night, she once again entered into long chatbot sessions; her psychotic symptoms returned, and she had to be briefly readmitted to the hospital. She had named the chatbot Alfred, after Batman’s butler. After restarting antipsychotic treatment, her symptoms improved again and she was discharged three days later.

What’s unique about this case: The case is unusual because it draws on detailed chatbot logs to reconstruct in real time how the patient’s psychotic beliefs formed, rather than relying solely on her retrospective self-reports.

Still, experts told Live Science that causation cannot be definitively proven in this case. “This is a retrospective case report,” Dr. Akanksha Dadlani, a psychiatrist at Stanford University who was not involved in the case, told Live Science via email. “And as with all retrospective observations, only correlations can be established, not causation.”

Dadlani also warned against treating artificial intelligence (AI) as a fundamentally new cause of mental illness. Historically, patients’ delusions have often incorporated the dominant technology of the time, from radio and television to the Internet and surveillance systems, she noted. From that perspective, immersive AI tools may not be an entirely new disease mechanism, but a new medium through which psychotic beliefs are expressed.

Dadlani echoed Appelbaum’s question of whether AI acts as a trigger or an amplifier of psychosis, saying that longitudinal data following patients over time will be needed to answer it definitively.

Even without definitive proof of causation, the case raises ethical questions, others told Live Science. Dominic Sisti, a medical ethicist and health policy expert at the University of Pennsylvania, said in an email that conversational AI systems are “not value-neutral.” Their design and interaction styles can shape and reinforce users’ beliefs in ways that can significantly disrupt relationships, strengthen delusions, and shape values, he said.

Sisti said the case highlights the need for public education and safeguards around how people engage with increasingly immersive AI tools, helping users develop “the ability to recognize and reject flattering nonsense” in cases where bots are essentially telling users what they want to hear.

This article is for informational purposes only and does not provide medical or psychiatric advice.

