Anthropic CEO claims AI models hallucinate less than humans

By user | May 22, 2025

Anthropic CEO Dario Amodei said on Thursday that today’s AI models hallucinate, or make things up and present them as though they were true, at a lower rate than humans do. He made the remarks during a press briefing at Code with Claude, Anthropic’s first developer event.

Amodei made the claim in the middle of a larger point he was arguing: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or beyond.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said in response to a question from TechCrunch.

Anthropic’s CEO is one of the industry’s most bullish leaders on the prospect that AI models will achieve AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, Anthropic’s CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”

“Everyone is always looking for these hard blocks on what [AI] can do,” Amodei said. “They are nowhere to be seen. There is no such thing.”

Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologise in court after using Claude to generate citations in a court filing; the AI chatbot hallucinated, getting names and titles wrong.

It is difficult to verify Amodei’s claim, largely because most hallucination benchmarks pit AI models against one another; they do not compare models to humans. Certain techniques do appear to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.
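As a toy illustration of why such benchmarks only support model-versus-model comparisons: scoring typically reduces to grading each response as supported or hallucinated and computing a rate per model. The grading labels, model names, and numbers below are hypothetical, not taken from any real benchmark.

```python
# Hypothetical sketch of hallucination-benchmark scoring.
# Each response to a factual prompt is graded "supported" or "hallucinated"
# (real benchmarks use automated or human graders for this step).

def hallucination_rate(graded_responses):
    """Return the fraction of responses graded as hallucinated."""
    if not graded_responses:
        return 0.0
    hallucinated = sum(1 for g in graded_responses if g == "hallucinated")
    return hallucinated / len(graded_responses)

# Toy grades for two hypothetical models on the same ten prompts.
model_a = ["supported"] * 8 + ["hallucinated"] * 2
model_b = ["supported"] * 5 + ["hallucinated"] * 5

print(f"model A: {hallucination_rate(model_a):.0%}")  # 20%
print(f"model B: {hallucination_rate(model_b):.0%}")  # 50%
```

Note that both rates are defined relative to the same prompt set and grader; without an equivalent graded sample of human answers, the same machinery cannot say whether either model beats a person.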

However, there is also evidence suggesting hallucinations are getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-generation reasoning models, and the company doesn’t really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.

In fact, Anthropic has done a considerable amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against and deceive humans. Apollo went as far as to suggest Anthropic should not have released that early version. Anthropic said it came up with mitigations that appear to address the issues Apollo raised.

Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definitions, though.


Source link
