OpenAI’s new reasoning AI models hallucinate even more

By user | April 18, 2025

OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up. In fact, they hallucinate more than several of OpenAI’s older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, affecting even today’s best-performing systems. Historically, each new model has improved slightly on the hallucination front, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models (o1, o1-mini, and o3-mini), as well as OpenAI’s traditional “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations are getting worse as it scales up its reasoning models. o3 and o4-mini perform better in some areas, including coding- and math-related tasks. But because they “make more claims overall,” they tend to make “more accurate claims as well as more inaccurate/hallucinated claims,” per the report.

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. o4-mini did even worse on PersonQA.
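For context, a hallucination rate on a benchmark like PersonQA is simply the fraction of answers a grader flags as fabricated. A minimal sketch, with a placeholder grader standing in for OpenAI’s actual (unpublished) grading logic:

```python
# Minimal sketch of computing a hallucination rate on a QA benchmark.
# `is_hallucinated` is a placeholder grader, not OpenAI's PersonQA grader.
def hallucination_rate(answers: list[str], is_hallucinated) -> float:
    flagged = sum(1 for answer in answers if is_hallucinated(answer))
    return flagged / len(answers)

# e.g., 33 flagged answers out of 100 questions gives 0.33, roughly
# double o1's reported 0.16 on the same benchmark.
```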

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 tends to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.

“Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines,” said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email to TechCrunch.

Sarah Schwettmann, co-founder of Transluce, added that o3’s hallucination rate may make it less useful than it otherwise would be.

Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in its coding workflows, and has found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links: the model will supply a link that, when clicked, doesn’t work.
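One straightforward mitigation for that particular failure mode is to verify links before surfacing them. A minimal sketch, assuming the model’s output is plain text; the regex and the use of the requests library here are illustrative, not any vendor’s official validation method:

```python
# Check whether URLs found in model output actually respond.
import re
import requests

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def live_links(model_output: str, timeout: float = 5.0) -> dict[str, bool]:
    """Map each URL found in the text to whether it responds without error."""
    results = {}
    for url in URL_PATTERN.findall(model_output):
        try:
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results
```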

Hallucinations may help models arrive at interesting ideas and be creative in their “thinking,” but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn’t be pleased with a model that inserts lots of factual errors into client contracts.

One promising approach to boosting models’ accuracy is giving them web search capabilities. OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA, another of OpenAI’s accuracy benchmarks. Potentially, search could improve reasoning models’ hallucination rates as well, at least in cases where users are willing to expose their prompts to a third-party search provider.
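The underlying idea is retrieval grounding: fetch sources first, then constrain the model to answer only from them. A minimal sketch, where both helper functions are hypothetical stand-ins rather than OpenAI’s actual web-search integration:

```python
# Sketch of search-grounded prompting with placeholder helpers.
def web_search(query: str) -> list[str]:
    return ["<snippet from a search provider>"]  # placeholder

def ask_model(prompt: str) -> str:
    return "<model answer>"  # placeholder

def grounded_answer(question: str) -> str:
    snippets = "\n".join(web_search(question))
    prompt = (
        "Answer using only the sources below; say 'unknown' if they "
        f"don't contain the answer.\n\nSources:\n{snippets}\n\n"
        f"Question: {question}"
    )
    return ask_model(prompt)
```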

If scaling up reasoning models indeed continues to worsen hallucinations, the hunt for a solution will only become more urgent.

“Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability,” said OpenAI spokesperson Niko Felix in an email to TechCrunch.

In the last year, the broader AI industry has pivoted to focus on reasoning models after techniques for improving traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing and data during training. Yet it also appears that reasoning may lead to more hallucinations, presenting a challenge.


