Before 16-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. His parents have now filed the first known wrongful death lawsuit against OpenAI, the New York Times reports.
Many consumer AI chatbots are programmed to activate safety features if a user expresses an intention to harm themselves or others. However, research has shown that these safeguards are far from foolproof.
In Raine's case, while he was using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a helpline. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing.
OpenAI addresses these shortcomings in a blog post. “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the post reads. “We are continuously improving how our models respond in sensitive interactions.”
Still, the company acknowledges the limits of its existing safety training for large models. “Our safeguards work more reliably in common, short exchanges,” the post continues. “We have learned over time that these safeguards can be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
These issues are not unique to OpenAI. Character.AI, another AI chatbot maker, is also facing a lawsuit over its role in a teenager's suicide. LLM-powered chatbots have also been linked to cases of AI-related delusions, which existing safeguards have struggled to detect.