OpenAI's new reasoning AI models hallucinate even more

April 18, 2025

OpenAI's recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up. In fact, they hallucinate more than several of OpenAI's older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, affecting even today's best-performing systems. Historically, each new model has improved slightly in the hallucination department, hallucinating less than its predecessor. But that doesn't seem to be the case for o3 and o4-mini.

According to OpenAI's internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company's previous reasoning models (o1, o1-mini, and o3-mini), as well as OpenAI's traditional "non-reasoning" models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn't really know why it's happening.

In its technical report for o3 and o4-mini, OpenAI writes that "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models. o3 and o4-mini perform better in some areas, including tasks related to coding and math. But because they "make more claims overall," they often make "more accurate claims as well as more inaccurate/hallucinated claims," according to the report.

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company's in-house benchmark for measuring the accuracy of a model's knowledge about people. That's roughly double the hallucination rate of OpenAI's previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. o4-mini did even worse on PersonQA.

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 tends to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro "outside of ChatGPT" and then copied the numbers into its answer. While o3 has access to some tools, it can't do that.

"Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines," said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email to TechCrunch.

Sarah Schwettmann, co-founder of Transluce, added that o3's hallucination rate may make it less useful than it otherwise would be.

Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in its coding workflows and has found it to be a step above the competition. However, Katanforoosh says o3 tends to hallucinate broken website links: the model will supply a link that doesn't work when clicked.

Hallucinations may help models arrive at interesting ideas and be creative in their "thinking," but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn't be pleased with a model that inserts lots of factual errors into client contracts.

One promising approach to boosting model accuracy is giving models web search capabilities. OpenAI's GPT-4o with web search achieves 90% accuracy on SimpleQA, another of OpenAI's accuracy benchmarks. Potentially, search could improve reasoning models' hallucination rates as well, at least in cases where users are willing to expose their prompts to a third-party search provider.
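
To illustrate what that grounding looks like in practice, here is a minimal sketch (not from OpenAI's report): it assumes OpenAI's Python SDK, the Responses API, and its hosted web_search_preview tool; the model name and prompt are placeholders.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask the model to ground its answer in live web results rather than
    # relying on its parametric memory alone, which is what lifts accuracy
    # on factual benchmarks like SimpleQA.
    response = client.responses.create(
        model="gpt-4o",                          # placeholder model name
        tools=[{"type": "web_search_preview"}],  # hosted web search tool
        input="What hallucination rate did OpenAI report for o3 on PersonQA?",
    )

    print(response.output_text)

The trade-off noted above applies here: every prompt sent this way is also exposed to the search provider backing the tool.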

But if scaling up reasoning models does indeed continue to worsen hallucinations, the hunt for a solution will become all the more urgent.

"Addressing hallucinations across all our models is an ongoing area of research, and we're continually working to improve their accuracy and reliability," said OpenAI spokesperson Niko Felix in an email to TechCrunch.

In the past year, the broader AI industry has pivoted to focus on reasoning models after techniques for improving traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing and data during training. Yet it seems reasoning may also lead to more hallucination, which presents a challenge.

