Fyself News
Startups

Anthropic CEO claims AI models hallucinate less than humans

By user · May 22, 2025 · 3 Mins Read

Anthropic CEO Dario Amodei said Thursday that today’s AI models hallucinate, or make things up and present them as true, at a lower rate than humans do. He made the remarks at Code with Claude, Anthropic’s first developer event.

Amodei made the claim in the midst of a larger point he was making: that hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or beyond.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said in response to TechCrunch’s question.

Anthropic’s CEO is one of the industry’s most bullish leaders on the prospect that AI models will achieve AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”

“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to create citations in a court filing; the AI chatbot hallucinated, getting names and titles wrong.

Verifying Amodei’s claim is difficult, largely because most hallucination benchmarks pit AI models against one another; they don’t compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.

However, there is also evidence suggesting that hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than the company’s previous-generation reasoning models, and OpenAI doesn’t really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.

In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn’t have released that early model. Anthropic said it came up with mitigations that appear to address the issues Apollo raised.

Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, however.

