During a recent dinner with business leaders in San Francisco, a comment I made cast a chill over the room. I hadn’t asked my dinner companions anything I considered a faux pas: simply whether they thought today’s AI could someday achieve human-like intelligence (i.e. AGI).
It’s a more controversial topic than you might think.
In 2025, there is no shortage of tech CEOs offering a bull case for how the large language models (LLMs) powering chatbots like ChatGPT and Gemini could achieve human-level or even superhuman intelligence in the near term. These executives argue that highly capable AI will bring about widespread and widely distributed societal benefits.
For example, Anthropic CEO Dario Amodei wrote in an essay that exceptionally powerful AI could arrive as soon as 2026 and be “smarter than a Nobel Prize winner across most relevant fields.” Meanwhile, OpenAI CEO Sam Altman recently claimed his company knows how to build “superintelligent” AI, predicting it could “massively accelerate scientific discovery.”
However, not everyone finds these optimistic claims convincing.
Other AI leaders are skeptical that today’s LLMs can reach AGI, much less superintelligence, absent some new innovations. These leaders have historically kept a low profile, but more of them have begun to speak up lately.
In a piece this month, Hugging Face co-founder and chief science officer Thomas Wolf called parts of Amodei’s vision “wishful thinking at best.” Informed by his PhD research in statistical and quantum physics, Wolf believes that Nobel Prize-level breakthroughs don’t come from answering known questions, something AI excels at, but rather from asking questions no one has thought to ask.
In Wolf’s opinion, today’s LLMs aren’t up to the task.
“I’d love to see this ‘Einstein model’ out there, but we need to dive into the details of how to get there,” Wolf told TechCrunch in an interview. “That’s where it starts to get interesting.”
Wolf said he wrote the piece because he felt there was too much hype about AGI and not enough serious evaluation of how to actually get there. He believes that, as things stand, there’s a real possibility AI transforms the world in the near future without ever achieving human-level intelligence or superintelligence.
Much of the AI world has become enraptured by the promise of AGI. Those who don’t believe it’s possible are often labeled “anti-technology.”
Some might peg Wolf as a pessimist for this view, but Wolf considers himself an “informed optimist.” And he is certainly not the only AI leader with conservative predictions about the technology.
Google DeepMind CEO Demis Hassabis has reportedly told staff that, in his opinion, the industry could be more than a decade away from developing AGI. Meta Chief AI Scientist Yann LeCun has also expressed doubts about the potential of LLMs. Speaking at Nvidia GTC on Tuesday, LeCun said the idea that LLMs could achieve AGI was “nonsense” and called for entirely new architectures to serve as a bedrock for superintelligence.
Kenneth Stanley, a former OpenAI lead researcher, is one of the people digging into the details of how to build advanced AI with today’s models. He is now an executive at Lila Sciences, a new startup that has raised $200 million in venture capital to unlock scientific innovation through automated labs.
Stanley spends his days trying to extract original, creative ideas from AI models, a subfield of AI research called open-endedness. Lila Sciences aims to create AI models that can automate the entire scientific process, including the very first step: arriving at really good questions and hypotheses that will ultimately lead to breakthroughs.
“I kind of wish I had written [Wolf’s] essay myself, because it really reflects my feelings,” Stanley said in an interview with TechCrunch. “What [he] noticed is that being very knowledgeable and skilled doesn’t necessarily lead to having truly original ideas.”
Stanley believes creativity is an important step along the path to AGI, but he notes that building a “creative” AI model is easier said than done.
Optimists like Amodei point to AI “reasoning” models, which use more computing power to answer certain questions correctly more consistently, as evidence that AGI isn’t far away. But coming up with original ideas and questions may require a different kind of intelligence, Stanley says.
“If you think about it, reasoning is almost antithetical to [creativity],” he added. “Reasoning models say, ‘Here’s the goal of the problem, go directly toward that goal,’ which basically stops you from looking at anything other than that goal.”
To design truly intelligent AI models, Stanley suggests we need to algorithmically replicate humans’ subjective taste for promising new ideas. Today’s AI models perform quite well in academic domains with clear-cut answers, such as math and programming. But Stanley points out that it’s much harder to design an AI model for more subjective tasks that require creativity, where there isn’t necessarily a “correct” answer.
“People shy away from [subjectivity] in science; the word is almost toxic,” Stanley said. “But there’s nothing to stop us from dealing with subjectivity [algorithmically]. It’s just part of the data stream.”
Stanley is glad that the field of open-endedness is getting more attention, with dedicated research labs at Lila Sciences, Google DeepMind, and AI startup Sakana now working on the problem. He says he’s starting to see more people talk about creativity in AI, but he thinks there’s a lot more work to be done.
Wolf and LeCun would probably agree. Call them AI realists, if you will: AI leaders approaching AGI and superintelligence with serious, grounded questions about their feasibility. Their goal isn’t to dismiss progress in the AI field. Rather, it’s to ask what stands between today’s AI models and AGI, and between AGI and superintelligence, and then to go after those blockers.