Fyself News
Science

People are more likely to buy something after reading an AI review summary, despite a staggering 60% hallucination rate

March 14, 2026

Most Americans say they don’t trust artificial intelligence (AI), but researchers have found behavioral evidence that suggests otherwise: people were more likely to want to buy a product after reading an AI-generated summary of its online reviews than after reading the human-written reviews themselves, even though the AI hallucinated 60% of the time when asked about the product.

The team at the University of California, San Diego (UCSD) says this is the first study to show that the cognitive biases introduced by large language models (LLMs) have a real impact on user behavior, and the first project to quantitatively measure that impact on people.

The research was presented in December 2025 at the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (IJCNLP-AACL 2025), and proceeded in several stages.


First, the scientists asked the AI to summarize product reviews and media interviews, then asked it to fact-check new statements to see whether they were true. In a second task, the AI was shown both the description of a news event and a doctored version of the same description, and asked to fact-check each.

“The consistently lower accuracy in distinguishing real from fabricated news highlights a significant limitation: the models still cannot reliably tell fact from fabrication,” the scientists wrote in the study.

The most shocking finding concerned online product reviews. Participants were much more likely to be interested in purchasing a product after reading an AI-generated product summary than after reading a product summary written by a human reviewer.

Distorted consumer judgment

The researchers proposed two reasons why people are more likely to make purchases based on AI summaries. First, LLMs tend to over-weight information at the beginning of the input text while overlooking what comes later, a phenomenon known as being “lost in the middle.” Lead author Abeer Alessa, a research assistant and lecturer in machine learning and human-computer interaction, has noted this effect in previous research.


Second, LLMs are less reliable when processing information that was not included in their training data.

“Models tend to be wrong about whether a described news event happened or not,” Alessa told Live Science in an interview. “If an event occurred after the model finished training, the model may incorrectly state that the event did not occur.”

During testing, the team found that the chatbot changed the sentiment of real users’ reviews in 26.5% of cases and hallucinated 60% of the time when users asked questions about the reviews.


In this experiment, 70 participants were assigned to read either original reviews of common consumer products or chatbot-generated summaries of those reviews; the selected reviews had either strongly positive or strongly negative conclusions. Participants who read the original review said they would buy the product 52% of the time, while those who read the AI-generated summary said so 84% of the time.

The project used six LLMs and three datasets: 1,000 electronics reviews, 1,000 media interviews and a news database of 8,500 items. The researchers measured bias by quantifying changes in the sentiment framing of content, over-reliance on the opening text of a sample, and hallucinations.
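As a rough illustration of the kind of sentiment-framing measurement described above, the toy polarity check below flags a summary whose sentiment sign flips relative to the review it came from. The word lists and scoring logic here are our own assumptions for demonstration, not the researchers’ actual pipeline:

```python
# Toy sentiment-framing check (illustrative only; not the study's method).
POSITIVE = {"great", "love", "excellent", "sturdy", "recommend"}
NEGATIVE = {"broke", "terrible", "flimsy", "refund", "disappointed"}

def polarity(text: str) -> int:
    """Crude polarity score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_flipped(original: str, summary: str) -> bool:
    """Flag a framing change: the polarity sign differs between the texts."""
    a, b = polarity(original), polarity(summary)
    return (a > 0) != (b > 0) if a != 0 and b != 0 else False

review = "The stand broke in a week terrible build asked for a refund"
summary = "Reviewers love the sturdy design and recommend it"
print(sentiment_flipped(review, summary))  # True: the summary reverses the framing
```

A real pipeline would use a trained sentiment classifier rather than word counts, but the comparison structure — score original, score summary, compare — is the same.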

When participants read a summary of a positive product review, they reported buying the product 83.7% of the time, compared to 52.3% when they read the original review.
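The size of that gap can be sanity-checked with plain arithmetic (this calculation is ours, not from the paper): the AI summary raised the purchase rate by about 31 percentage points, a roughly 60% relative increase over the original-review baseline.

```python
# Effect size of the reported purchase rates (simple arithmetic check).
summary_rate = 0.837   # bought after reading the AI-generated summary
original_rate = 0.523  # bought after reading the original review

absolute_lift = summary_rate - original_rate   # gap in percentage points
relative_lift = absolute_lift / original_rate  # proportional increase

print(f"{absolute_lift:.3f}")   # 0.314 -> 31.4 percentage points
print(f"{relative_lift:.1%}")   # 60.0% more likely to buy
```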

The scientists concluded that even subtle changes in framing can significantly distort consumer judgment and purchasing behavior.

The authors acknowledged that their test was set up in a low-risk scenario, but warned that the effects could be more extreme in high-risk situations.

“High-stakes scenarios include summaries of medical documents and summaries of a student’s profile for school admissions,” Alessa said. “In these situations, a change in framing can affect how people and events are perceived.”

In a further statement, the research team said the paper is a step toward carefully analyzing and mitigating the content modifications that LLMs introduce, and toward understanding their impact on people. They said this could reduce the risk of systemic bias in areas such as media, education and public policy.

Quantifying cognitive bias induction in LLM-generated content, Alessa et al., IJCNLP-AACL 2025

