
Anthropic Inc. fired back Friday after U.S. Secretary of Defense Pete Hegseth directed the Pentagon to designate the artificial intelligence (AI) startup a “supply chain risk.”
“This action follows months of stalled negotiations over two exceptions the government requested to the usage policy for our AI model Claude: domestic mass surveillance of Americans and fully autonomous weapons,” the company said in a statement.
“No threat or punishment from the Department of War will change our position on domestic mass surveillance or fully autonomous weapons.”
U.S. President Donald Trump said in a post on Truth Social that he will order all federal agencies to phase out the use of Anthropic’s technology within the next six months. In a subsequent post on X, Hegseth ordered all contractors, suppliers, and partners doing business with the U.S. military to immediately cease “commercial activity with Anthropic.”
“In conjunction with the President’s direction asking the federal government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic as a supply chain risk to national security,” Hegseth wrote.
The designation comes after weeks of negotiations between the Department of Defense and Anthropic over the use of the AI model by the U.S. military. In a post published this week, the company argued that the contract should not encourage domestic mass surveillance or the development of autonomous weapons.
“We support the use of AI for legitimate foreign intelligence and counterintelligence missions,” Anthropic said. “However, the use of these systems for domestic mass surveillance is incompatible with democratic values. Mass surveillance with AI poses significant new risks to our fundamental freedoms.”
The company also criticized the Department of War’s (DoW) stated intention to remove existing safeguards and work only with AI companies that allow “all lawful uses” of the technology, part of its effort to build an “AI-first” fighting force and strengthen national security.
“Diversity, equity, inclusion, and social ideology have no place in the DoW, so AI models that incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts should not be employed,” a memorandum issued by the department last month said.
“The Department must also utilize models that are not subject to usage policies that may limit legitimate military applications.”
In response, Anthropic called the designation “legally unsound” and said it would set a dangerous precedent for U.S. companies negotiating with the government. It also noted that a supply chain risk designation under 10 USC 3252 applies only to Claude’s use as part of a DoW contract and does not affect Claude’s use to provide services to other customers.
Hundreds of Google and OpenAI employees signed an open letter calling on the companies to cooperate with Anthropic in its conflict with the Department of Defense over military applications for AI tools like Claude.
The conflict between Anthropic and the U.S. government arose after OpenAI CEO Sam Altman said OpenAI had reached an agreement with the U.S. Department of Defense (DoD) to deploy models on classified networks. OpenAI also called on the department to extend these conditions to all AI companies.
“The safety and broader benefit-sharing of AI is core to our mission. Two of our most important security principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman said in a post on X. “The DoW agrees with these principles and reflects them in our laws and policies, and we have included them in our agreements.”
