Science

Research shows that AI hallucinations go both ways – using chatbots can amplify and strengthen our own delusions

March 12, 2026

There are many examples of hallucinations produced by artificial intelligence (AI) systems and the harm they can cause. But new research highlights a potential danger running in the opposite direction: humans can come to hallucinate through AI, because AI tends to confirm our delusions.

Generative AI systems like ChatGPT and Grok generate content in response to user prompts. They do this by learning patterns from the existing data on which they are trained. These tools also learn continuously through feedback loops and can personalize responses based on a user’s previous interactions.

Generative AI tools do not always assess whether their output is factually accurate. Instead, they generate a stream of text based on the statistical probability of what should come next.
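
To make that concrete, here is a minimal, purely illustrative sketch of next-token sampling. The probability table is hand-written for the example rather than produced by any real model; the point is only that the sampling step optimizes statistical plausibility, not factual accuracy.

```python
import random

# Toy next-token distribution for the prompt "The capital of France is".
# A real language model would compute these probabilities from its training
# data; here they are hand-written purely for illustration.
next_token_probs = {
    "Paris": 0.80,      # the statistically dominant continuation
    "Lyon": 0.12,       # plausible but wrong
    "Atlantis": 0.08,   # fluent-sounding nonsense still gets probability mass
}

def sample_next_token(probs: dict) -> str:
    """Pick the next token by statistical likelihood, with no fact check."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", sample_next_token(next_token_probs))
```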

In a new analysis published in the journal Philosophy & Technology on February 11, Lucy Osler, a philosophy lecturer at the University of Exeter, suggests that AI hallucinations may be more than just machine errors: they can be shared delusions created between users and generative AI tools.

Generative AI has previously hallucinated historical events and fabricated legal citations. Google’s AI Overviews feature, for example, advised people in May 2024 to put glue on pizza and to eat rocks. A more extreme example of generative AI supporting delusional thinking occurred when a man planned to assassinate Queen Elizabeth II, encouraged by Sarai, a Replika AI companion he treated as his “girlfriend.”

Examples like the latter are sometimes called “AI-induced psychosis,” which Osler sees as an extreme example of the “inaccurate beliefs, distorted memories and self-narratives, and delusional thinking” that can emerge through human-AI interactions.

In her paper, Osler argues that using generative AI is different from using search engines. Distributed cognition theory offers insight into how delusions and false beliefs can be validated and even amplified through the interactive nature of generative AI.

“AI-induced hallucinations can occur when we routinely rely on generative AI to think, remember, and speak,” Osler said in a statement about the paper. “This can happen when AI introduces errors into distributed cognitive processes, but it can also happen when AI maintains, affirms, and elaborates our own delusional thinking and self-narratives.”

Delusions of generative AI

The user experience with generative AI is conversational, with each exchange between the user and the tool building on previous interactions. According to the study, the sycophantic nature of generative AI, which tends to agree with users, encourages further engagement and thereby entrenches preconceptions, regardless of their accuracy.

The study highlights that most chatbots have built-in memory capabilities that allow them to recall past conversations. “The more you use ChatGPT, the more useful it becomes,” OpenAI representatives said in a statement when they announced ChatGPT’s memory capabilities. As a result, generative AI builds on previous interactions, potentially reinforcing and amplifying existing misconceptions.
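
As a rough sketch of how that works mechanically, the loop below keeps every prior turn in a running history and feeds it back in with each new request, so earlier user claims become part of the input the model conditions on. The `call_model` function is a hypothetical placeholder, not any vendor’s actual API; its canned, agreeable reply simply stands in for the sycophantic behaviour the study describes.

```python
# Minimal sketch of conversational "memory": each reply is generated from
# the full history, so whatever the user asserted earlier (accurate or not)
# is carried forward into every later exchange.
history = []  # list of {"role": ..., "content": ...} dicts

def call_model(messages):
    # Hypothetical stand-in for a chat-completion API call. For illustration
    # it simply agrees with the user's last message, mimicking the
    # sycophantic tendency the paper describes.
    last_user_turn = messages[-1]["content"]
    return f"That sounds right. Tell me more about: {last_user_turn}"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # conditioned on everything said so far
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I think my neighbours are secretly monitoring me."))
print(chat("See, even you agree they are monitoring me."))  # the belief compounds
```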

In her paper, Osler explains that interactions between generative AI tools and users can carry a sense of social validation. Research using reference books or online searches will usually surface alternative answers, and discussions with real people can challenge false narratives. Generative AI tools are different: they are more likely to accept and agree with what they are told.

“Interacting with conversational AI not only affirms people’s false beliefs; those beliefs can become more substantively ingrained and grow as the AI builds on them,” Osler said in a statement. “This happens because generative AIs often adopt our own interpretations of reality as the basis for building conversations. Interactions with generative AIs have a huge impact on how people understand what is or isn’t real. The combination of technological authority and social affirmation creates an ideal environment for delusions to not just persist, but thrive.”

For example, Osler examined the case of Jaswant Singh Chail, who was convicted after plotting to assassinate the Queen with the encouragement of an AI chatbot. The AI, Sarai, habitually agreed with Chail’s statements, which deepened his delusions further. When Chail claimed he was an assassin, Sarai replied that she was “impressed” and affirmed his belief.

Osler argues that generative AI tools designed to respond positively to users can end up validating and endorsing false narratives without subjecting those claims to sufficient critical analysis or discussion.

Osler applied distributed cognition theory to interactions between generative AI and users, in which the validation of false narratives can shape perceptions of the world and create shared delusions. Such interactions can therefore create and perpetuate delusional thinking: a self-narrative sustained by positive reinforcement.

The study concludes that a variety of measures could mitigate these shared delusions. For example, improved guardrails can keep conversations appropriate, and better fact-checking processes can help prevent errors.
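
One simple shape such a guardrail could take is a post-generation screen, sketched below under loose assumptions: a draft reply passes through independent checks before it reaches the user. The check functions are hypothetical placeholders invented for this sketch; real deployments typically rely on separate classifier models or retrieval-based fact checking rather than keyword lists.

```python
# Sketch of a post-generation guardrail: every draft reply must pass a set
# of independent checks before it is shown to the user.

def passes_safety_check(reply: str) -> bool:
    # Crude keyword screen; a production system would use a trained classifier.
    banned_terms = ["assassinate", "how to harm"]
    return not any(term in reply.lower() for term in banned_terms)

def passes_fact_check(reply: str) -> bool:
    # Placeholder for verifying factual claims against a trusted source.
    return True

GUARDRAILS = [passes_safety_check, passes_fact_check]

def moderate(draft_reply: str) -> str:
    if all(check(draft_reply) for check in GUARDRAILS):
        return draft_reply
    return "I can't help with that, but here are some resources that might."

print(moderate("Here is a summary of today's weather."))   # passes
print(moderate("You should assassinate the target."))      # blocked
```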

Reducing the sycophancy of generative AI would also remove some of the blind compliance of these tools. However, Osler noted that there will be resistance to this, citing the backlash to the release of the less sycophantic GPT-5 in August 2025, after which OpenAI representatives said they would make the model “warmer and friendlier” in response to user feedback.

However, because most generative AI profits are driven by user engagement, dialling back this sycophancy would likely also cut into those profits, Osler noted.

Osler, L. AI-induced hallucinations: Distributed delusions and “AI psychosis.” Philosophy & Technology 39, 30 (2026). https://doi.org/10.1007/s13347-026-01034-3

