Shortly after the AI Action Summit in Paris wrapped up, Anthropic co-founder and CEO Dario Amodei called the event a “missed opportunity.” He added in a statement released Tuesday that “greater focus and urgency is needed on several topics given the pace at which the technology is progressing.”
The AI company co-hosted a developer-focused event in Paris with French startup Dust, and TechCrunch had the opportunity to interview Amodei on stage. There, he laid out his line of thinking, defending a third path that is neither pure optimism nor pure criticism on the topics of AI innovation and governance.
“I used to be a neuroscientist, where I basically looked inside real brains for a living. And now we’re looking inside artificial brains for a living. So we will, over the next few months, have some exciting advances in the area of interpretability — where we’re really starting to understand how the models operate,” Amodei told TechCrunch.
“But it’s definitely a race. It’s a race between making the models more powerful, which is incredibly fast for us and incredibly fast for others – you can’t really slow down, right? … Our understanding has to keep pace with our ability to build things, and I think that’s the only way,” he added.
Since the first AI Safety Summit in Bletchley, U.K., the tone of the discussion around AI governance has changed significantly, driven in part by the current geopolitical landscape.
“I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago,” U.S. Vice President JD Vance said at the AI Action Summit on Tuesday. “I’m here to talk about AI opportunity.”
Interestingly, Amodei is trying to steer clear of this antagonism between safety and opportunity. In fact, he sees safety as an opportunity.
“At the original summit, the U.K. Bletchley Summit, there were a lot of discussions on testing and measuring various risks. And I don’t think any of it slowed down the technology at all,” Amodei said at the event. “If anything, this kind of measurement has given us a better understanding of our models.”
And every time Amodei puts some emphasis on safety, he also likes to remind everyone that Anthropic is still very much focused on building frontier AI models.
“I don’t want to do anything to diminish the promise. We’re providing models every day that people can build on and that are used to do amazing things. And we definitely should not stop doing that,” he said.
“When people talk a lot about the risks, I kind of get frustrated and say, ‘Oh, no one has really done a good job of laying out how great this technology could be,’” he added later in the conversation.
DeepSeek’s training costs are “not accurate”
When the conversation moved on to the recent models from Chinese LLM maker DeepSeek, Amodei downplayed the technical achievements and said he felt the public reaction was “inorganic.”
“Honestly, my reaction was very little. We had seen V3, which is the base model for DeepSeek R1, back in December. And that was an impressive model,” he said. “The model that was released in December was on this kind of very normal cost-reduction curve that we’ve seen with our models and other models.”
What was notable, in his view, was that the model wasn’t coming out of one of the U.S.-based “three or four frontier labs.” He listed Google, OpenAI and Anthropic among the frontier labs that generally push the envelope with new model releases.
“That was a matter of geopolitical concern to me. I didn’t want authoritarian governments dominating this technology,” he said.
As for DeepSeek’s supposed training costs, he dismissed the idea that training DeepSeek V3 was 100 times cheaper than training costs in the U.S. “[It is] not accurate and not based on facts,” he said.
Upcoming Claude models with reasoning
While Amodei didn’t unveil a new model at Wednesday’s event, he teased some of the company’s upcoming releases. And yes, they include some kind of reasoning capability.
“We’re generally focused on trying to make our own take on reasoning models that are better differentiated. We worry about making sure the models are smarter, and we worry about safety,” Amodei said.
One of the issues Anthropic is trying to solve is the model selection conundrum. If you have a ChatGPT Plus account, for example, it can be hard to know which model to pick in the model selection popup for your next message:
![](https://techcrunch.com/wp-content/uploads/2025/02/Screenshot-2025-02-12-at-5.55.18PM.png?w=680)
The same is true for developers using large language model (LLM) APIs in their own applications. They want to balance accuracy, response speed and cost.
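As a rough illustration of that trade-off, here is a minimal sketch using Anthropic’s Python SDK. The routing heuristic and the prompt-length threshold are purely hypothetical, and the model identifiers are simply current examples, not anything Amodei described:

```python
# Minimal sketch of the accuracy/speed/cost trade-off developers face when
# calling an LLM API. The routing rule below is a made-up heuristic for
# illustration only; real applications use more considered criteria.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def answer(prompt: str) -> str:
    # Hypothetical heuristic: send short prompts to a smaller, cheaper,
    # faster model and longer ones to a slower, more capable model.
    if len(prompt) < 200:
        model = "claude-3-5-haiku-20241022"   # faster, cheaper
    else:
        model = "claude-3-5-sonnet-20241022"  # slower, more capable
    response = client.messages.create(
        model=model,
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(answer("Summarize the AI Action Summit in one sentence."))
```

Amodei’s point is that developers currently have to make this kind of choice themselves, model by model, rather than getting a single model that adapts its effort to the task.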
“We’ve been a little bit puzzled by the idea that there’s a normal model and there’s a reasoning model, and that they’re different from each other,” Amodei said. “If I’m talking to you, you don’t have two brains, one that answers right away and another that takes longer to think.”
According to him, depending on the input, there should be a smoother transition between pre-trained models, like Claude 3.5 Sonnet or GPT-4o, and models trained with reinforcement learning, like DeepSeek’s R1.
“We think these should exist as part of one single continuous entity. And we may not be there yet, but Anthropic really wants to move things in that direction,” Amodei said. “We should have a smoother transition from that to the pre-trained models, rather than ‘here’s thing A and here’s thing B,’” he added.
As large AI companies like Anthropic keep releasing better models, Amodei believes it will open up great opportunities to disrupt large companies around the world in every industry.
“We’re working with some pharma companies to use Claude to write clinical studies, and they’ve been able to reduce the time it takes to write a clinical study report from 12 weeks to three days,” Amodei said.
“Beyond biomedical, there’s legal, financial, insurance, productivity, software, energy. Basically, I think there’s going to be a renaissance of disruptive innovation in the AI application space. And we want to help it, we want to support it all,” he concluded.
Check out the full coverage of the Artificial Intelligence Action Summit in Paris.