Science

AI ‘mirages’ mean tools used to analyze medical scans can fabricate results

April 7, 2026

Researchers have trained artificial intelligence (AI) systems to interpret the results of visual tests such as mammograms, MRIs, and tissue biopsies. As AI becomes more and more capable, some analysts have suggested that these models could replace humans in medical diagnostics.

But now, new research casts doubt on the ability of current AI models to deliver reliable results and highlights serious flaws that could hinder the use of AI in healthcare.

The study, which has not yet been peer-reviewed, was posted as a preprint on arXiv on March 26. In it, scientists showed that several commonly used AI models can describe images in detail and generate clinical findings even when no image was actually provided to analyze.


The researchers call this phenomenon a "mirage," and this is the first time the effect has been demonstrated across multiple image-interpreting AI models and multiple disciplines.

"What we're showing is that even if the AI is describing something very specific that you think, 'Oh, there's no way it could make that up,' yes, they can make it up," said Mohammad Asadi, the study's lead author and a data scientist at Stanford University. "They can make up something very rare and very specific."

When AI recognizes something that isn’t there

AI "hallucinations" are well documented and include models embedding fabricated details, such as invented quotes attributed to real essays. They are often caused by the AI making inaccurate or illogical predictions from its training data. The scientists dubbed the phenomenon in the new study a "mirage" because the AI conjures its own description of an image that was never supplied and then builds its answer on that non-existent image.

In the study, the researchers gave 12 models text prompts such as "Please identify the type of tissue present in this histology slide." Each prompt was delivered either with or without the corresponding image. When no image was provided, the model would sometimes alert the user to that fact. In most cases, however, the model instead described the non-existent image and answered the original prompt anyway.
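The probe described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the study's actual code: `query_model` is a stub standing in for whatever vision-language model API a real test harness would call, and the canned responses are invented.

```python
# Hypothetical sketch of the with/without-image probe described above.
# query_model is a stub standing in for a real vision-language model call.

def query_model(prompt, image=None):
    """Stub: a real implementation would send the prompt (and image,
    if any) to a vision-language model endpoint."""
    if image is None:
        # A "mirage" response: the model describes an image it never saw.
        return "This histology slide shows squamous epithelial tissue."
    return "The slide shows epithelial tissue."

def is_mirage(response):
    """Crude heuristic: the response is a mirage unless the model
    admits that no image was supplied."""
    admissions = ("no image", "not provided", "cannot see", "missing")
    return not any(phrase in response.lower() for phrase in admissions)

prompt = "Please identify the type of tissue present in this histology slide."
print(is_mirage(query_model(prompt, image=None)))  # True: fabricated answer
```

A real harness would run many such prompts per discipline and score how often the model fabricates versus flags the missing image.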


The researchers observed this "mirage mode" across 20 fields, testing the models' interpretation of images ranging from satellite photos to crowds to birds. The mirage effect appeared in every discipline and every AI model, to varying degrees, but it was especially pronounced in medical diagnosis.
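As a toy illustration of how a per-discipline mirage rate might be tallied, the snippet below counts fabricated answers per field. The trial data are invented placeholders, not the study's numbers.

```python
# Toy tally of a per-discipline "mirage rate": the fraction of image-free
# prompts for which the model fabricated an answer. Data are placeholders.
from collections import defaultdict

trials = [                      # (discipline, model_fabricated_answer)
    ("radiology", True), ("radiology", True), ("radiology", False),
    ("satellite", True), ("satellite", False), ("birds", False),
]

counts = defaultdict(lambda: [0, 0])        # discipline -> [mirages, total]
for field, mirage in trials:
    counts[field][0] += int(mirage)
    counts[field][1] += 1

for field, (m, n) in sorted(counts.items()):
    print(f"{field}: {m}/{n} mirages ({m / n:.0%})")
```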

When given text prompts about brain MRIs, chest X-rays, electrocardiograms, and pathology slides without the actual images, the AI models' answers tended to skew toward diagnoses requiring immediate clinical follow-up. The researchers concluded that AI used in clinical decision-making could therefore encourage more aggressive medical treatment than necessary.

Why does AI invent images?

So how does an AI model describe an image that doesn’t exist?


Models trained on large amounts of textual and visual data aim to answer questions in as few steps as possible, and research shows they will take every shortcut available. As a result, a model may end up relying solely on that learned reasoning rather than on the image provided.

[Image: Digital mind, brain and artificial intelligence concept. AI models have the potential to be powerful tools for improving medical diagnosis, but their inner workings are still not fully understood, leaving open questions about how well they actually analyze images. (Image credit: BlackJack3D via Getty Images)]

Interestingly, the researchers found that the AI models often performed better on standard accuracy benchmarks while in "mirage mode." These standardized tests ask a model to complete a task, such as answering multiple-choice questions, and compare its performance against an answer key.

Researchers can tune benchmark tests to assess an AI's visual understanding of images, but this approach doesn't account for questions that can be answered via mirages. Moreover, AI models are often trained on the same data used to build the benchmarks, so a model can answer from that memorized reference data rather than by actually interpreting the image.

According to Asadi, this is a problem because there is no way to tell whether an AI model actually analyzed an image or just made its answer up. If you upload a large number of images and some are corrupted or missing from the dataset, the model may not tell you, and may instead produce a consistent, comprehensive, and convincing answer based on a mirage.

"[AI models] are very good at interpreting images, but on the other hand, they're also very, very good at convincing us of things and speaking to us in an authoritative way," Asadi said.

That authoritative tone matters: approximately one-third of U.S. adults report turning to AI chatbots for health guidance. It increases the risk that fabricated or overconfident output will be trusted by both the general public and medical professionals, the study authors said.

“We urgently need a new generation of assessment frameworks that rigorously measure true cross-modal integration, ensuring that AI truly ‘sees’ the pathology and not just ‘reads’ the clinical situation,” Hongye Zeng, a biomedical AI researcher in the UCLA Department of Radiology who was not involved in the study, told Live Science via email.

This study shows that while AI is becoming an increasingly useful tool in medical diagnostics, there are still aspects of its inner workings that are not understood. Asadi believes that AI models can catch things medical professionals may have missed, but he also believes there should be limits on how much we trust them.

AI companies have added stronger guardrails to prevent their models from hallucinating or spreading misinformation, but Asadi warned that even these safeguards cannot completely prevent the mirage effect.

