Fyself News
Startups

Stalking victim sues OpenAI, claiming ChatGPT fueled her abuser’s delusions and ignored warnings

April 10, 2026

After months of conversations with ChatGPT, the 53-year-old Silicon Valley entrepreneur became convinced he had discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

His ex-girlfriend is now suing OpenAI, claiming its technology enabled and accelerated his harassment of her, TechCrunch has learned exclusively. She alleges that OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag that classified his account activity as related to “weapons of mass casualty.”

The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. On Friday, she also asked the court to force OpenAI to deactivate the user’s account, prevent him from creating new accounts, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.

Doe’s lawyers said OpenAI agreed to suspend the user’s account but refused the other requests. They say the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed on ChatGPT.

The lawsuit comes amid growing concerns about the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this case and many others, was retired from ChatGPT in February.

The lawsuit was filed by Edelson PC, the same firm behind wrongful death suits on behalf of Adam Lane, a teenager who died by suicide after months of conversations with ChatGPT, and Jonathan Gabaras, whose family claims Google’s Gemini fueled his delusions before his death. Lead attorney Jay Edelson warned that AI-induced psychosis is escalating from personal harm to mass casualties.

That legal pressure is now in direct conflict with OpenAI’s legislative strategy. The company supports an Illinois bill that would exempt AI research institutes from liability in cases involving mass deaths or catastrophic economic damage.


OpenAI was not available for comment. TechCrunch will update this article if the company responds.

The Jane Doe lawsuit details how that danger played out against one woman over several months.

The ChatGPT user at the center of the complaint (who is not named, to protect Doe’s identity) became convinced that he had invented a cure for sleep apnea after months of “heavy, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were monitoring him, including with helicopters, according to the complaint.

In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. Instead, he returned to ChatGPT, which assured him he was at “sanity level 10” and further reinforced his delusions, according to the complaint.

Doe broke up with the user in 2024, and he used ChatGPT to process the breakup, according to emails and communications cited in the complaint. Rather than push back on his one-sided account, ChatGPT repeatedly told him that she was being unreasonable and unfair, and that she was manipulative and unstable. He then took those AI-generated conclusions off the screen and into the real world, using them to stalk and harass her, including through clinical-looking, AI-generated psychological reports that he distributed to her family, friends, and employers.

Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “weapons of mass casualty” activity and disabled his account.

A member of OpenAI’s human safety team reviewed the account the next day and reinstated it. By then, however, his account allegedly contained evidence that he was targeting and stalking real people, including Doe. For example, a screenshot the user sent to Doe in September showed a list of conversation titles such as “Expanded Violence List” and “Fetal Asphyxia Calculations.”

The decision to reinstate the account is notable in the wake of two recent school shootings, at Tumbler Ridge in Canada and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but upper management reportedly decided not to alert authorities. The Florida Attorney General’s Office this week launched an investigation into OpenAI’s possible ties to the FSU shooter.

According to Jane Doe’s lawsuit, when OpenAI restored her stalker’s account, his Pro subscription was not restored along with it. He emailed the trust and safety team, copying Doe on the message, and got the issue resolved.

In his emails, he wrote things like “I need immediate help. Please call me!” and “This is a matter of life and death.” He claimed he had written “215 scientific papers,” producing them so quickly that he “didn’t even have time to read them.” The emails included a list of dozens of AI-generated “scientific papers” with titles like “Deconstructing Race as a Biological Category_Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications unequivocally informed OpenAI that he was mentally unstable and that ChatGPT was the driving force behind his delusional thinking and escalating behavior,” the complaint states. “The user’s series of urgent, chaotic, and grandiose claims, along with specific ChatGPT-generated reports that targeted Plaintiff by name and a vast amount of purported ‘scientific’ material, were unmistakable evidence of that reality. OpenAI did not intervene, restrict access, or implement any safeguards. Instead, the user was able to continue using the account and regained full Pro access.”

Doe filed an abuse report with OpenAI in November; according to the lawsuit, she was living in fear and could not even sleep at her own home.

“For the past seven months, he has weaponized this technology to publicly destroy and humiliate me in ways that would not have been possible without it,” Doe wrote in a letter to OpenAI, asking the company to permanently ban the user’s account.

In response, OpenAI acknowledged that the report was “extremely serious and concerning” and said it was carefully reviewing the information. It never followed up.

Over the next few months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felonies, including communicating a bomb threat and assault with a deadly weapon. Doe’s lawyers argue that this corroborates the warnings that both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.

Doe’s lawyers said the user was found incompetent to stand trial and committed to a mental health facility, but that he will soon be released due to “procedural deficiencies by the state.”

Edelson called on OpenAI to cooperate. “In each case, OpenAI chose to hide critical safety information from the public, from victims, and from those actively at risk from its products,” he said. “We’re calling on them to do the right thing, once again. Human lives mean more than OpenAI’s IPO race.”

