Anthropic CEO Dario Amodei issued a statement Tuesday to “set the record straight” about the company’s alignment with the Trump administration’s AI policies, following “a recent increase in inaccurate claims regarding Anthropic’s policy stance.”
“Anthropic is built on a simple principle: AI should be a force for human progress, not a threat,” Amodei wrote. “That means building products that really work, being honest about the risks and benefits, and working with people who are serious about getting this right.”
Amodei’s statement follows a backlash against Anthropic last week from AI leaders and top members of the Trump administration, including AI czar David Sacks and White House senior policy adviser for AI Sriram Krishnan, who accused the company of stoking fears and harming the industry.
The first hit came from Sacks after Anthropic co-founder Jack Clark wrote about his hopes and “appropriate fears” about AI, describing it as a powerful, mysterious, and “somewhat unpredictable” creature rather than a reliable machine that can be easily understood and put to work.
Sacks’ response: “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. Anthropic is primarily responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
California Sen. Scott Wiener, the author of SB 53, the AI safety bill, defended Anthropic and criticized President Trump’s “efforts to prohibit states from acting on AI without furthering federal protections.” Sacks later doubled down, claiming that Anthropic was working with Wiener to “impose a left-wing vision for AI regulation.”
More criticism followed, with anti-regulation voices like Sunny Madra, Groq’s chief operating officer, saying Anthropic was “causing disruption to the entire industry” by insisting on some AI safeguards rather than unrestricted innovation.
Amodei said in the statement that he believes managing the social impact of AI should be a matter of “policy over politics,” and that everyone wants to ensure the United States leads in AI development while building technology that benefits Americans. He defended Anthropic’s work with the Trump administration on key areas of AI policy, citing examples of where he has personally praised the president.
For example, Amodei pointed to Anthropic’s work with the federal government, including offering Claude to federal agencies and Anthropic’s $200 million agreement with the Department of Defense (which Amodei referred to as the “Department of War,” echoing President Trump’s preferred terminology, though any formal name change would require Congressional approval). He also noted that Anthropic has publicly praised President Trump’s AI Action Plan and supports his efforts to expand energy supplies to “win the AI race.”
Despite this cooperative attitude, Anthropic has faced pushback from its peers for straying from the Silicon Valley consensus on certain policy issues.
The company first drew the ire of Silicon Valley when it opposed a proposed 10-year ban on state-level AI regulation, a provision that faced widespread bipartisan opposition.
Many in Silicon Valley, including OpenAI leaders, argue that state regulation of AI will slow the industry down and hand the reins to China. Amodei countered that the real risk is the U.S. continuing to let Nvidia’s powerful AI chips flow into data centers in China, adding that Anthropic restricts the sale of its AI services to Chinese-controlled companies even though the policy costs it revenue.
“There are products we don’t make and risks we don’t want to take, even if we make money,” Amodei said.
Anthropic also drew criticism from some powerful companies when it supported California’s SB 53, a light-touch safety bill that would require major AI developers to publish safety protocols for their frontier models. Amodei noted that the bill includes a carve-out for companies with less than $500 million in annual gross revenue, which would exempt most startups from undue burdens.
“Some have suggested that we may be interested in harming the startup ecosystem in some way,” Amodei wrote, referring to Sacks’ post. “Startups are some of our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is driving a whole new generation of AI-native companies, and it makes no sense for us to damage that ecosystem.”
Amodei also said that Anthropic has grown its revenue run rate from $1 billion to $7 billion over the past nine months while deploying AI “thoughtfully and responsibly.”
“Anthropic is committed to constructive engagement on public policy issues. If we agree, we say so. If we disagree, we offer alternatives for consideration,” Amodei wrote. “We will continue to be honest and forthright and support the policies we believe are right. The risks with this technology are too great to have it any other way.”