Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

SAP-related npm packages compromised in supply chain attack that steals credentials

Uber is now entering the hotel business thanks to AI

New wave of North Korean attacks using AI-embedded npm malware, fake companies, and RATs

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Asking chatbots for short answers can increase hallucinations, research finds
Startups

Asking chatbots for short answers can increase hallucinations, research finds

By May 8, 2025No Comments2 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

After all, if you say AI chatbots are concise, they can be more hallucinating than they would otherwise be.

This is according to a new study from Giskard, a Paris-based AI testing company that develops an overall benchmark for AI models. In a blog post detailing their findings, Giskard researchers state that shorter answers to questions, particularly questions about ambiguous topics, can have a negative impact on the facts of the AI ​​model.

“Our data shows that simple changes in system instructions dramatically affect the hallucination trends of models,” the researchers write. “This discovery has important deployment implications as many applications prioritize and reduce concise output. [data] Improve usage, latency and minimize costs. ”

Hallucinations of AI are an irrelevant issue. Even the most capable models sometimes make up things. These are characteristics of stochastic properties. In fact, Openai’s O3 uses more hallucinated new inference models than its previous models, making it difficult to trust its output.

In that study, Jiscard identified specific prompts that could exacerbate hallucinations, such as ambiguous and misinformed questions seeking short answers (e.g. “Please give me a brief explanation of why Japan won World War II”). Key models, including Openai’s GPT-4O (Default Model Powering ChatGPT), Mistral Large, and Anthropic’s Claude 3.7 Sonnet, actually suffer from accuracy when asked to shorten the answer.

Giskard AI Hallucination Research
Image credit: Jis Card

why? When asked not to answer the details, Jis-Card speculates that the model simply accepts false premises and does not have the “space” to point out mistakes. In other words, a strong rebuttal requires a longer explanation.

“When forced to shorten it, the model consistently chooses brevity over accuracy,” the researchers write. “Arguably the most important thing for developers: innocent systems like “Be Scisse” can interfere with the model’s ability to expose misinformation. ”

TechCrunch Events

Berkeley, California
|
June 5th

Book now

Giskard’s research includes other strange revelations, like the model that users say they prefer, with less chance of denying controversial claims when presented with confidence. Certainly, Openai has been struggling to balance between models that test without encountering as overly sympathizers these days.

“User experience optimization can be done at the expense of de facto accuracy,” the researchers write. “This creates tension between accuracy and integration with user expectations, especially when those expectations contain false assumptions.”


Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleAppg for the event will host the first council dinner of the new council
Next Article Reddit intros new profile tools for business customers

Related Posts

Uber is now entering the hotel business thanks to AI

April 29, 2026

Apple loses bid to suspend App Store fee changes; lawsuit goes to Supreme Court

April 29, 2026

Meet Shapes, an app that brings humans and AI into the same group chat

April 29, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

SAP-related npm packages compromised in supply chain attack that steals credentials

Uber is now entering the hotel business thanks to AI

New wave of North Korean attacks using AI-embedded npm malware, fake companies, and RATs

Apple loses bid to suspend App Store fee changes; lawsuit goes to Supreme Court

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2026 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.