AI leaders bring the AGI debate back down to Earth

By user | March 19, 2025 | 5 min read

During a recent dinner with San Francisco business leaders, a comment I made chilled the room. I hadn’t asked my dinner companions anything I thought was a major faux pas: simply whether they believed today’s AI could one day achieve human-like intelligence (i.e., AGI).

It’s a more controversial topic than you think.

In 2025, there is no shortage of tech CEOs offering the bull case for how the large language models (LLMs) powering chatbots like ChatGPT and Gemini could achieve human-level or even superhuman intelligence in the near term. These executives argue that highly capable AI will bring about widespread and widely distributed societal benefits.

For example, Anthropic CEO Dario Amodei wrote in an essay that exceptionally powerful AI could arrive as soon as 2026 and be “smarter than Nobel laureates in the most relevant fields.” Meanwhile, OpenAI CEO Sam Altman recently claimed that his company knows how to build “superintelligent” AI, predicting it could “massively accelerate scientific discovery.”

However, not everyone finds these optimistic claims convincing.

Other AI leaders are skeptical that today’s LLMs can reach AGI, much less superintelligence, without some new innovations. These leaders have historically kept a low profile, but have recently begun to speak out more.

In a piece published this month, Hugging Face co-founder and chief science officer Thomas Wolf called parts of Amodei’s vision “wishful thinking at best.” Informed by his doctoral research in statistical and quantum physics, Wolf believes that Nobel Prize-level breakthroughs come not from answering known questions (something AI excels at), but from asking questions no one has thought to ask.

In Wolf’s opinion, today’s LLMs aren’t up to the task.

“I would love to see this ‘Einstein model’ out there, but we need to dive into the details of how to get there,” Wolf told TechCrunch in an interview. “That’s where it starts to get interesting.”

Wolf said he wrote the piece because he felt there was too much hype about AGI and not enough serious evaluation of how to actually get there. He believes that, as things stand, there’s a real possibility AI transforms the world in the near future without ever achieving human-level intelligence or superintelligence.

Much of the AI world is enamored with the promise of AGI. Those who don’t believe it’s possible are often labeled “anti-technology.”

Some might peg Wolf as a pessimist for this view, but Wolf considers himself an “informed optimist.” And he’s certainly not the only AI leader with conservative predictions about the technology.

Demis Hassabis, CEO of Google DeepMind, has reportedly told staff that, in his opinion, the industry could be as much as a decade away from developing AGI. Meta’s chief AI scientist Yann LeCun has also expressed doubts about the potential of LLMs. Speaking at Nvidia GTC on Tuesday, LeCun said the idea that LLMs could achieve AGI was “nonsense” and called for entirely new architectures to serve as a bedrock for superintelligence.

Kenneth Stanley, a former OpenAI lead researcher, is one of the people digging into the details of how to build advanced AI with today’s models. He is now an executive at Lila Sciences, a new startup that has raised $200 million in venture capital to unlock scientific innovation through automated labs.

Stanley spends his days trying to extract original, creative ideas from AI models, a subfield of AI research called open-endedness. Lila Sciences aims to create AI models that can automate the entire scientific process, including the very first step: arriving at really good questions and hypotheses that ultimately lead to breakthroughs.

“I kind of wish I had written [Wolf’s] essay myself, because it really reflects my feelings,” Stanley said in an interview with TechCrunch. “What [he] noticed is that being extremely knowledgeable and skilled doesn’t necessarily lead to having truly original ideas.”

Stanley believes creativity is a key step along the path to AGI, but he notes that building a “creative” AI model is easier said than done.

Optimists like Amodei point to AI “reasoning” models, which use more computing power to answer specific questions consistently and correctly, as evidence that AGI isn’t too far away. But coming up with original ideas and questions may require a different kind of intelligence, Stanley says.

“If you think about it, reasoning is almost antithetical to [creativity],” he added. “Reasoning models say, ‘Here’s the goal of the problem, go directly toward that goal,’ which basically stops you from looking at anything other than that goal.”

To design truly intelligent AI models, Stanley suggests we need to algorithmically replicate humans’ subjective taste for promising new ideas. Today’s AI models perform quite well in academic domains with clear-cut answers, such as math and programming. However, Stanley points out that it’s much harder to design an AI model for more subjective tasks that require creativity, where there isn’t necessarily a “correct” answer.

“People shy away from [subjectivity] in science; the word is almost toxic,” Stanley said. “But there’s nothing to stop us from dealing with subjectivity [algorithmically]. It’s just part of the data stream.”

Stanley is glad that the field of open-endedness is getting more attention, with dedicated research labs at Lila Sciences, Google DeepMind, and AI startup Sakana now working on the problem. He says he’s starting to see more people talk about creativity in AI, but he thinks there’s a lot more work to be done.

Wolf and LeCun would probably agree. Call them AI realists, perhaps: AI leaders approaching AGI with serious, grounded questions about its feasibility. Their goal isn’t to dismiss progress in the AI field. Rather, it’s to figure out what stands between today’s AI models and AGI (and superintelligence), and to go after those blockers.

