Fyself News
Science

The more we use AI, the more likely we are to overestimate our own abilities.

November 17, 2025

When asked to evaluate how good we are at something, we tend to get it completely wrong. This is a universal human tendency, and its effects are strongest among those with lower levels of ability. This phenomenon, called the Dunning-Kruger effect after the psychologists David Dunning and Justin Kruger, who first studied it, means that people who are less capable at a particular task become overconfident, while those who are more capable tend to underestimate their skills. It is often revealed through cognitive tests with questions that assess attention, decision-making, judgment, and language.

But now scientists at Finland's Aalto University, along with collaborators in Germany and Canada, have found that when people use artificial intelligence (AI), the Dunning-Kruger effect largely disappears, and is in fact almost reversed.

Their research found that when using common chatbots to solve problems, everyone (regardless of skill level) tends to overly trust the quality of the answers, with the most experienced AI users being the most trusting. The research team published their findings in the February 2026 issue of Computers in Human Behavior.


As we all become more AI literate thanks to the proliferation of large language models (LLMs), the researchers expected that participants would not only get better at interacting with AI systems, but also better at judging their own performance when using them. Instead, the findings reveal that people across the entire sample were uniformly unable to assess their own performance accurately when using AI, report co-author Robin Welsch, a computer scientist at Aalto University, said in a statement.

Flattening the curve

In the study, the scientists gave 500 subjects logical reasoning tasks from a law school entrance exam and allowed half of them to use the popular AI chatbot ChatGPT. Both groups were then asked about their AI literacy and how well they thought they had performed, and were promised an additional reward for accurately assessing their own performance.
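The measure at stake here, confidence calibration, is simply the gap between a participant's self-assessed score and their actual score. A minimal sketch (illustrative only; the function name and sample data are hypothetical, not from the study):

```python
# Hypothetical sketch of confidence calibration: positive gap = overconfidence,
# negative = underconfidence, zero = perfectly calibrated.

def calibration_gap(self_estimate: float, actual_score: float) -> float:
    """Difference between how well someone thinks they did and how well they did."""
    return self_estimate - actual_score

# Illustrative participants: (self-estimate, actual score) out of 20 questions.
participants = [(15, 10), (12, 14), (18, 18)]
gaps = [calibration_gap(est, act) for est, act in participants]
print(gaps)  # [5, -2, 0]
```

The study's finding, in these terms, is that AI users' gaps were uniformly positive, regardless of skill level.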

The reasons behind the findings vary. Because AI users are typically satisfied with the answer after a single question or prompt, accepting it without further checking or confirmation, they engage in what Welsch calls "cognitive offloading": asking questions with less self-reflection and approaching them in a more "shallow" way.

This reduced engagement in our own reasoning, known as weakened "metacognitive monitoring," bypasses the normal feedback loops of critical thinking and diminishes our ability to accurately gauge our performance.


What is also clear is that, regardless of intelligence, we all overestimate our abilities when using AI, and the gap between high- and low-skilled users narrows. The study attributes this to the fact that LLMs help everyone improve their performance to some extent.

Although the researchers did not address this directly, the discovery also comes at a time when scientists are beginning to question whether LLMs in general are too sycophantic. The Aalto team warned that there are several potential implications as AI becomes more widespread.

First, overall metacognitive accuracy may decline. Relying on AI results without rigorously questioning them improves user performance, but the trade-off is that users become worse at judging how well they handled the task. Without reflecting on results, checking for errors, and reasoning more deeply, we risk eroding our ability to obtain reliable information, the scientists said in the study.

Furthermore, the flattened Dunning-Kruger effect means that we will all continue to overestimate our abilities when using AI, with those with higher AI literacy most likely to do so, leading to an increased propensity for poor decision-making and skill decline.

One remedy the research suggests is for the AI itself to prompt users to ask more questions, and for developers to design responses that encourage reflection: literally asking questions such as "How confident are you in this answer?" or "What did I miss?", or inviting further interaction through measures such as confidence scores.
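The design suggestion above can be sketched as a thin wrapper that appends reflection prompts to a chatbot's answer before showing it to the user. This is a hypothetical illustration, not the paper's implementation; `get_model_answer` is a stand-in for whatever model call an application actually makes:

```python
# Hypothetical sketch: nudge users toward metacognitive reflection by
# attaching reflection prompts to every model answer.

REFLECTION_PROMPTS = [
    "How confident are you in this answer?",
    "What did I miss?",
]

def answer_with_reflection(question: str, get_model_answer) -> str:
    """Return the model's answer followed by prompts encouraging reflection."""
    answer = get_model_answer(question)
    nudges = "\n".join(f"- {p}" for p in REFLECTION_PROMPTS)
    return f"{answer}\n\nBefore accepting this, consider:\n{nudges}"

# Usage with a dummy model callable:
print(answer_with_reflection("What is 2 + 2?", lambda q: "4"))
```

The point of the design is to interrupt "cognitive offloading" at the moment the answer is delivered, rather than relying on users to self-initiate checking.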

The new research adds weight to the idea, recently advocated by the Royal Society, that AI training should include critical thinking as well as technical competency. "We…provide recommendations for the design of conversational AI systems that enhance metacognitive monitoring by allowing users to critically reflect on their performance," the scientists said.

