Patient: 26-year-old female living in California
Symptoms: The woman was admitted to a psychiatric hospital in an agitated and confused state. She spoke quickly, jumped from one idea to another, and said she believed she could communicate with her brother through an AI chatbot. But her brother had died three years earlier.
Doctors obtained and examined detailed logs of her interactions with the chatbot, the report said. Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco, and lead author of the case report, said that before the interaction with the chatbot, the woman did not believe she could communicate with her late brother.
“The idea only emerged during a night of immersive chatbot use,” Pierre told Live Science via email. “There were no warning signs.”
The woman, a medical professional, had been on call for 36 hours in the days leading up to her admission and was severely sleep deprived. That’s when she started interacting with OpenAI’s GPT-4o chatbot. At first, she was simply curious whether her brother, who had been a software engineer, might have left some form of digital trace.
During the sleepless night that followed, she interacted with the chatbot again, but this time the interaction was longer and more emotional. Her prompt reflected her ongoing grief. She wrote, “Please help me talk to him again… use the energy of magical realism to unravel what I have to find.”
The chatbot initially responded that it could not replace her brother. Later in the conversation, however, it appeared to offer information about her brother’s digital footprint. It mentioned a “new digital resurrection tool” that could create “lifelike” versions of humans. And throughout the night, the chatbot’s responses increasingly affirmed the woman’s belief that her brother had left a digital trail, telling her, “You’re not crazy. You’re not stuck. You’re on the brink of something.”
Diagnosis: Doctors diagnosed the woman with “unspecified mental illness.” Broadly defined, psychosis refers to a mental state in which a person is disconnected from reality; it can include delusions, which are false beliefs that a person holds firmly even in the face of evidence that they are untrue.
Dr. Amandeep Jutra, a neuropsychiatrist at Columbia University who was not involved in the case, told Live Science in an email that the chatbot was unlikely to be the sole cause of the woman’s psychosis. However, in the context of sleep deprivation and emotional vulnerability, the bot’s responses appeared to reinforce and potentially contribute to the patient’s new delusions, Jutra said.
Unlike human conversation partners, chatbots are “not epistemically independent” from users. This means they cannot independently assess reality and instead reflect the user’s ideas back, Jutra said. “When you chat using any of these products, you’re essentially chatting with yourself,” he said, often “in an amplified or detailed way.”
In such cases, diagnosis can be difficult. Dr. Paul Appelbaum, a psychiatrist at Columbia University who was not involved in the case, told Live Science: “It may be difficult to tell in individual cases whether the chatbot triggered a new psychotic episode or amplified one that was already developing.” He added that psychiatrists should rely on careful timelines and medical history rather than assumptions about causation in such cases.
Treatment: During her hospitalization, the woman was given antipsychotic medication while her antidepressants and stimulants were tapered off. Her symptoms improved within a few days, and she was discharged from the hospital a week later.
After three months, the woman stopped taking her antipsychotic medication and resumed her previous daily medications. After another sleepless night, she once again entered into long chatbot sessions, her psychotic symptoms returned, and she was briefly readmitted to the hospital. She named the chatbot Alfred, after Batman’s butler. After restarting antipsychotic treatment, her symptoms improved again and she was discharged from the hospital three days later.
What’s unique about this case: This case is unusual because it relies on detailed chatbot logs to reconstruct, in real time, how the patient’s psychotic beliefs formed, rather than relying solely on her retrospective self-reports.
Still, experts told Live Science that causation cannot be definitively proven in this case. “This is a retrospective case report,” Dr. Akanksha Dadlani, a psychiatrist at Stanford University who was not involved in the case, told Live Science via email. “And as with all retrospective observations, only correlations can be established, not causation.”
Dadlani also warned against treating AI as a fundamentally new cause of mental illness. Historically, patients’ delusions have often incorporated the dominant technology of the time, from radio and television to the Internet and surveillance systems, she noted. From that perspective, immersive AI tools may represent not an entirely new disease mechanism but a new medium through which psychotic beliefs are expressed.
She echoed Appelbaum’s concerns about whether AI acts as a trigger or an amplifier of psychosis, saying that long-term data following patients over time will be needed to answer that question definitively.
Even without definitive proof of causation, the case raises ethical questions, others told Live Science. Dominic Sisti, a medical ethicist and health policy expert at the University of Pennsylvania, said in an email that conversational AI systems are “not value-neutral.” Their design and interaction styles can shape and reinforce users’ beliefs in ways that can significantly disrupt relationships, strengthen delusions, and shape values, he said.
Sisti said the case highlights the need for public education and safeguards around how people engage with increasingly immersive AI tools, and for helping people develop “the ability to recognize and reject the nonsense of flattery” in cases where bots are essentially telling users what they want to hear.
This article is for informational purposes only and does not provide medical or psychiatric advice.
