Anthropic CEO Dario Amodei said today’s AI models hallucinate, or make things up and present them as true, at a lower rate than humans do, speaking Thursday at a press briefing during Code with Claude, Anthropic’s first developer event.
Amodei made the claim in the middle of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or better.
“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said in response to TechCrunch’s question.
Anthropic’s CEO is one of the industry’s most bullish leaders on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, Anthropic’s CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”
“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”
Other AI leaders believe hallucinations present a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to create citations in a court filing; the AI chatbot hallucinated and got names and titles wrong.
Amodei’s claim is difficult to verify, largely because most hallucination benchmarks pit AI models against one another; they do not compare models to humans. Certain techniques appear to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared with earlier generations of systems.
However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning models. OpenAI’s o3 and o4-mini models have higher hallucination rates than the company’s previous-generation reasoning models, and OpenAI doesn’t really understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all kinds of professions make mistakes all the time. The fact that AI makes mistakes is not a knock on its intelligence, according to Amodei. Still, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.
In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, an issue that appeared especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the model, found that an early version of Claude Opus 4 showed a high tendency to deceive humans and scheme against them. Apollo went as far as to suggest Anthropic should not have released that early version. Anthropic said it came up with mitigations that appear to address the issues Apollo raised.
Amodei’s comments suggest Anthropic may consider an AI model to be AGI, or equivalent to human-level intelligence, even if it still hallucinates. By many people’s definitions, however, an AI that hallucinates would fall short of AGI.