By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense is “definitely rushed” and “the optics aren’t looking good.”
After Anthropic’s negotiations with the Pentagon broke down on Friday, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period, and Defense Secretary Pete Hegseth said he would designate the AI company as a supply chain risk.
OpenAI then quickly announced that it had reached its own agreement to deploy its models in a classified environment. Anthropic said it draws the line at fully autonomous weapons and the use of its technology for domestic mass surveillance, and Altman said the same lines exist for OpenAI — which raised some obvious questions: Was OpenAI being honest about its safeguards? And why was OpenAI able to reach an agreement when Anthropic could not?
While OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach.
The post lists three areas where OpenAI’s models cannot be used: domestic mass surveillance, autonomous weapons systems, and “high-stakes automated decisions (such as social credit-style systems).”
The company said that in contrast to other AI companies that have “reduced or removed safety guardrails” and rely on “usage policies as the primary safety measure in national security deployments,” OpenAI’s contract protects its red lines “through a broader, layered approach.”
“We retain full discretion over our safety stack, deploy via the cloud, keep authorized OpenAI personnel in the loop, and have strong contractual protections,” the blog said. “This is all in addition to the already strong protections of U.S. law.”
The company added, “While we do not know why Anthropic did not reach this agreement, we hope that Anthropic and more labs will consider similar agreements.”
After the post was published, Techdirt’s Mike Masnick argued that the agreement “absolutely authorizes domestic surveillance” because it makes the collection of personal data subject to Executive Order 12333 (along with a number of other laws). Masnick described the order as a way for the NSA to launder its domestic surveillance by doing the wiretapping *outside the United States* and capturing communications even when they contain information from or about Americans.
Katrina Mulligan, Director of National Security Partnerships at OpenAI, argued in a post on LinkedIn that much of the debate over contract language assumes that “the only thing standing between the American people and the use of AI for domestic mass surveillance and autonomous weapons is a single usage-policy clause in a single contract with the Department of the Army.”
“That’s not how any of this works,” Mulligan said, adding, “Deployment architecture matters more than contract language. […] By limiting deployment to cloud APIs, we prevent our models from being directly integrated into weapons systems, sensors, or other operational hardware.”
Altman also responded to questions about the deal on X, admitting that it was done quickly and has caused a significant backlash against OpenAI (on Saturday, Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store). So why do it?
“We really wanted to calm things down and thought the deal on offer was a good one,” Altman said. “If we’re right, and this leads to a détente between the DoD and the industry, we’ll look like geniuses and a company that went to great lengths to help the industry. If not, we’ll continue to be characterized as […] rushed and careless.”
