Science

AI is getting better at generating faces, but people can be trained to spot the fakes

By user | December 27, 2025

Facial images generated by artificial intelligence (AI) are now so realistic that even an elite group of “super-recognizers,” people with exceptionally powerful face-processing abilities, can detect the fakes only at chance level.

People with typical face-recognition abilities perform worse than chance, most of the time judging AI-generated faces to be real.

That is according to a study published November 12 in the journal Royal Society Open Science. However, the study also found that just five minutes of training on common AI rendering errors can significantly improve a person’s ability to spot fakes.


“I think it’s encouraging that a very short training procedure like ours improved the performance of both groups so much,” the study’s lead author Katie Gray, an associate professor of psychology at the University of Reading in the UK, told Live Science.

Remarkably, the training improved the accuracy of super-recognizers and typical recognizers by the same amount, Gray said. Because super-recognizers were already better at spotting fake faces at baseline than typical recognizers, this suggests that they rely on a different set of cues to identify fakes, rather than simply on rendering errors.

Gray hopes that in the future, scientists will be able to harness super-recognizers’ enhanced detection skills to better identify AI-generated images.

“To most effectively detect synthetic faces, AI detection algorithms could potentially be used in a human-involved approach, where the humans are trained SRs [super-recognizers],” the authors wrote in their study.


Deepfake detection

An onslaught of AI-generated images has hit the internet in recent years. Deepfake faces are created using a two-part AI system called a generative adversarial network (GAN). A generator first produces a fake image based on real-world training images; that image is then vetted by a discriminator, which judges whether it is real or fake. Through repeated rounds of this back-and-forth, the generated faces become realistic enough to fool the discriminator.
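
As an illustration of that adversarial loop, here is a minimal sketch in PyTorch. It is not the system used to create the faces in the study: it assumes tiny fully connected networks and toy 64x64 images, whereas production face generators (such as StyleGAN) use far larger convolutional architectures, but the generator-versus-discriminator training pattern is the same.

```python
import torch
import torch.nn as nn

latent_dim = 100
img_pixels = 64 * 64 * 3  # toy 64x64 RGB images, flattened

# Generator: random noise vector -> fake image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_pixels), nn.Tanh(),
)

# Discriminator: image -> probability that it is a real photo
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One round of the adversarial game on a batch of real face photos."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) The discriminator learns to separate real photos from generated fakes.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes.detach()), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The generator learns to produce fakes the discriminator calls "real".
    g_loss = bce(discriminator(fakes), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Calling train_step repeatedly on batches of real face photos gradually makes the generator’s output harder to distinguish from the training photos, which is how the “hyperrealism” described below emerges.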

These algorithms have now been refined to the point where people can be fooled into thinking that fake faces are more “real” than real faces, a phenomenon known as “hyperrealism.”

As a result, researchers are now trying to design training regimens that improve people’s ability to detect AI-generated faces. These trainings point out common rendering errors in AI-generated faces, such as a tooth sitting in the middle of the mouth, odd-looking hairlines, and unnatural skin textures. They also highlight that fake faces tend to be more proportional than real faces.


In theory, so-called super-recognizers should be better at spotting fakes than the average person. Super-recognizers are individuals who excel at face perception and recognition tasks, such as being shown two photographs of unfamiliar people and deciding whether they show the same person. So far, however, few studies have investigated how well super-recognizers detect fake faces and whether their performance can be improved with training.

To fill this gap, Gray and her team ran a series of online experiments comparing the performance of a group of super-recognizers with that of typical recognizers. The super-recognizers were recruited from the Greenwich Face and Voice Recognition Laboratory’s volunteer database and had scored in the top 2% on tasks in which they were shown unfamiliar faces and had to remember them.

In the first experiment, images of faces, either real or computer-generated, were displayed on a screen, and participants had 10 seconds to judge whether each face was real. The super-recognizers performed no better than chance, detecting only 41% of the AI-generated faces. Typical recognizers correctly identified only about 30% of the fakes.

The two cohorts also differed in how often they mistook real faces for fakes: this happened in 39% of cases for super-recognizers and in roughly 46% of cases for typical recognizers.

The next experiment was similar, but a new set of participants first completed a five-minute training session in which they were shown examples of rendering errors in AI-generated faces. They were then tested on 10 faces and given real-time feedback on their detection accuracy. The final stage of the training was a summary of rendering errors to watch out for. Participants then repeated the original task from the first experiment.

Training significantly improved detection accuracy, with super-recognizers spotting 64% of fake faces and typical recognizers 51%. The rate at which each group incorrectly called real faces fake was about the same as in the first experiment, with super-recognizers and typical recognizers labeling real faces “not real” in 37% and 49% of cases, respectively.
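
One way to see how these hit and false-alarm percentages combine is the signal-detection sensitivity index d′. The paper’s own analysis may differ; the sketch below simply plugs in the percentages quoted above.

```python
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hit rate = fake faces correctly called fake;
# false-alarm rate = real faces incorrectly called fake.
groups = {
    "super-recognizers, untrained":   (0.41, 0.39),
    "typical recognizers, untrained": (0.30, 0.46),
    "super-recognizers, trained":     (0.64, 0.37),
    "typical recognizers, trained":   (0.51, 0.49),
}

for name, (hit, fa) in groups.items():
    print(f"{name}: d' = {d_prime(hit, fa):+.2f}")
```

Plugging in the numbers, sensitivity sits near zero for both untrained groups (and slightly below zero for untrained typical recognizers, who flagged real faces as fake more often than they caught actual fakes) and rises most for trained super-recognizers, matching the pattern described above.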

Trained participants tended to take longer to scrutinize images than untrained participants, roughly 1.9 seconds longer for typical recognizers and 1.2 seconds longer for super-recognizers. Gray said that taking this extra time is an important message for people trying to decide whether a face they see is real or fake.

However, this test was conducted immediately after training, so it is unclear how long the effects last.

“This training has not been retested and cannot be considered a sustainable and effective intervention,” Meike Ramon, a professor of applied data science and an expert in facial processing at the Bern University of Applied Sciences in Switzerland, wrote in a pre-print review of the study.

Ramon added that because the two experiments used different participants, it is unclear how much training would improve an individual’s detection skills. This requires testing the same people twice, before and after training.

