Fyself News
Startups

Why Cohere’s former head of AI research is betting on expansion

By user | October 22, 2025 | 6 Mins Read

AI labs are competing to build data centers the size of Manhattan, each costing billions of dollars and using as much energy as a small city. This effort is driven by a deep belief in scaling. The idea is that by adding more computing power to existing AI training methods, we will eventually create superintelligent systems that can perform all kinds of tasks.

But AI researchers are increasingly saying that scaling large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.

That’s the bet Sara Hooker, former vice president of AI research at Cohere and a Google Brain alumna, is making with her new startup, Adaption Labs. She co-founded the company with Sudip Roy, a veteran of Cohere and Google. The company is built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month and has begun hiring more broadly.

I’m starting a new project.

I’m working on what I consider to be the most important problem: building thinking machines that adapt and continuously learn.

We have an incredibly talented founding team and are hiring engineering, operations, and design talent.

Join us: https://t.co/eKlfWAfuRy

— Sara Hooker (@sarahookr) October 7, 2025

In an interview with TechCrunch, Hooker said Adaption Labs is building AI systems that can continuously adapt and learn from real-world experience, and do so extremely efficiently. She declined to share details about the methodology behind this approach, or about whether the company relies on LLMs or another architecture.

“We’re at a tipping point now where it’s clear that the formula of just scaling these models, an attractive but very boring approach, is not producing intelligence that can navigate and interact with the world,” Hooker said.

According to Hooker, adapting is “at the heart of learning.” For example, if you walk past the dining room table and stub your toe, you learn to step around the table more carefully next time. AI labs have sought to capture this idea through reinforcement learning (RL), which lets AI models learn from their mistakes in a controlled setting. However, today’s RL techniques don’t help production AI models (i.e., systems already in use by customers) learn from their mistakes in real time. They just keep stubbing their toes.
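The stubbed-toe example can be sketched as a toy value-learning loop (a hypothetical illustration of the basic RL idea, not Adaption Labs’ actual method): an agent repeatedly tries two routes past the table, collects a negative reward for stubbing its toe, and its value estimates shift until it prefers the safe route.

```python
# Toy tabular value-learning sketch of the stubbed-toe example
# (hypothetical illustration; not Adaption Labs' method).
rewards = {
    "walk_near_table": -1.0,    # stubbed toe: negative reward
    "walk_around_table": 0.0,   # safe route: neutral reward
}

q = {action: 0.0 for action in rewards}  # initial value estimates
alpha = 0.5                              # learning rate

# Learn from repeated experience: nudge each estimate toward the
# observed reward -- the core update behind tabular RL methods.
for _ in range(20):
    for action, reward in rewards.items():
        q[action] += alpha * (reward - q[action])

best = max(q, key=q.get)  # the learned policy avoids the painful action
print(best)               # -> walk_around_table
```

The point of the sketch is that learning requires feedback to reach the model while it acts; today’s deployed models get no such update loop.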

Some AI labs offer consulting services that let companies fine-tune AI models for their custom needs, but that comes at a price: OpenAI reportedly requires customers to spend upwards of $10 million for its fine-tuning consulting services.


“We have a handful of frontier labs that determine a set of AI models that are available to everyone in the same way, but are very expensive to adapt,” Hooker said. “And in fact, I don’t think that needs to be true anymore. AI systems can learn very efficiently from their environment. Proving that will completely change the dynamics of who controls and shapes AI and, in fact, who these models ultimately serve.”

Adaption Labs is the latest sign that the industry’s confidence in scaling LLMs is wavering. A recent paper by MIT researchers found that the world’s largest AI models may soon show diminishing returns. The mood in San Francisco seems to be shifting, too: Dwarkesh Patel, a popular podcaster in the AI world, has recently hosted unusually skeptical conversations with well-known AI researchers.

Turing Award winner Richard Sutton, widely considered the “father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, Andrej Karpathy, an early OpenAI employee, told Patel he has reservations about RL’s long-term potential to improve AI models.

This kind of fear is not unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pre-training (in which a model learns patterns from large datasets) was hitting diminishing returns. Until then, pre-training had been the secret sauce behind OpenAI’s and Google’s model improvements.

While those pre-training scaling concerns are now showing up in the data, the AI industry has found other ways to improve models. In 2025, breakthroughs in AI reasoning models, which take additional time and compute to work through problems before answering, pushed the capabilities of AI models further.

AI labs now seem to believe that scaling up RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they believed it would scale well. Researchers at Meta and Periodic Labs recently published a paper exploring how RL can scale performance further; the study reportedly cost more than $4 million, underscoring how expensive current approaches remain.

In contrast, Adaption Labs aims to find the next breakthrough and prove that learning from experience can be far cheaper. The company was in talks to raise $20 million to $40 million in a seed round earlier this fall, according to three investors who viewed its pitch materials. The round has since closed, but the final amount is unknown. Hooker declined to comment.

Asked about investors, Hooker said: “We’re going to be very ambitious.”

Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI models now regularly outperform their larger counterparts on coding, math, and reasoning benchmarks, and Hooker hopes to keep pushing that trend.

She has also built a reputation for broadening global access to AI research by hiring research talent from underrepresented regions such as Africa. Adaption Labs plans to open a San Francisco office soon, but Hooker said the company intends to hire around the world.

If Hooker and Adaption Labs are right about the limits of scaling, the implications could be significant. Billions of dollars have already been invested in scaling LLMs on the assumption that bigger models will lead to general intelligence. But truly adaptive learning could prove not only more powerful, but also far more efficient.

Marina Temkin contributed reporting.

