Startups

Anthropic CEO claims AI models hallucinate less than humans

By user · May 22, 2025 · 3 Mins Read

Anthropic CEO Dario Amodei said today’s AI models hallucinate, meaning they make things up and present them as true, at a lower rate than humans do. He made the remarks Thursday during a press briefing at Code with Claude, Anthropic’s first developer event.

Amodei made the claim in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or better.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said in response to TechCrunch’s question.

Anthropic’s CEO is one of the industry’s most bullish leaders on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”

“Everyone is always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month a lawyer representing Anthropic was forced to apologize in court after using Claude to generate citations in a court filing; the AI chatbot hallucinated and got names and titles wrong.

Amodei’s claim is difficult to verify, largely because most hallucination benchmarks pit AI models against one another; they do not compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks than earlier generations of systems.
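
To make the measurement point concrete, here is a minimal sketch of how a hallucination benchmark is typically scored: a fixed set of questions with known reference answers is put to each system, each response is graded, and the reported number is the fraction of answers that state something false. The function name, grading rule, and toy data below are hypothetical, included only to illustrate why model-versus-model scores say nothing about a human baseline unless humans are graded on the same questions.

# Minimal, illustrative sketch of a hallucination-rate comparison.
# In practice, responses would come from querying a model (or a human
# panel) on the same questions; real benchmarks grade far more carefully
# (exact match, LLM judges, or human annotators) and often distinguish
# abstentions from fabrications.

def hallucination_rate(responses, references):
    """Fraction of non-empty answers that do not contain the reference fact."""
    wrong = 0
    for answer, reference in zip(responses, references):
        # Empty answers (abstentions) are not counted as hallucinations here.
        if answer.strip() and reference.lower() not in answer.lower():
            wrong += 1
    return wrong / len(references)

# Hypothetical toy data: two "models" and a human answering the same questions.
references = ["Paris", "1969", "Mount Everest"]
model_a = ["Paris", "1969", "K2"]              # one fabricated answer
model_b = ["Paris", "1971", "K2"]              # two fabricated answers
human   = ["Paris", "1969", "Mount Everest"]   # all correct

for name, answers in [("model_a", model_a), ("model_b", model_b), ("human", human)]:
    print(name, hallucination_rate(answers, references))

The only point of the sketch is that the comparison Amodei is making requires a human column in the results table, which most published benchmarks do not include.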

However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-generation reasoning models, and the company doesn’t fully understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all kinds of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, in his view. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.

In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, a problem that appeared especially pronounced in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the model, found that an early version of Claude Opus 4 showed a high tendency to scheme against humans and deceive them. Apollo went so far as to suggest that Anthropic should not have released that early version. Anthropic said it came up with mitigations that appear to address the issues Apollo raised.

Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. By many definitions, though, an AI that hallucinates falls short of AGI.


Source link
