How Anthropic’s safety-first ethos collided with the Department of Defense

March 7, 2026

On February 5th, Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. New features include the ability to coordinate teams of autonomous agents (multiple AIs that divide work and complete it in parallel). Twelve days later, the company released Sonnet 4.6, a cheaper model that roughly matches Opus’s coding and computer-use skills. When Anthropic first introduced a model that could control a computer, in late 2024, it could barely operate a browser. According to Anthropic, Sonnet 4.6 now provides human-level functionality for interacting with web applications and filling out forms. Both models have working memory large enough to hold a small library.
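The agent-team capability described above is, at its core, a fan-out/fan-in pattern: a coordinator splits a task into subtasks, dispatches them to workers in parallel, and merges the results. A minimal sketch of that pattern in Python (purely illustrative; `run_agent` is a hypothetical stand-in for a call to a model API, not Anthropic’s implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(subtask: str) -> str:
    # Placeholder for a real model API call; here each "agent"
    # simply reports that it handled its slice of the work.
    return f"result for {subtask!r}"

def coordinate(task: str, subtasks: list[str]) -> str:
    # Fan out: each subtask runs in parallel on its own worker.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = list(pool.map(run_agent, subtasks))
    # Fan in: the coordinator merges partial results into one answer.
    return f"{task}: " + "; ".join(results)

print(coordinate("audit codebase", ["scan auth", "scan storage", "scan API"]))
```

The point of the pattern is that the coordinator, not a human, decides how work is divided and when it is done, which is precisely what makes it both commercially useful and hard to supervise.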

Enterprise customers now account for about 80% of Anthropic’s revenue, and the company closed a $30 billion funding round last week at a valuation of $380 billion. By any measure, Anthropic is one of the fastest-growing technology companies in history.

But behind the big product launches and glowing reviews, Anthropic faces a serious threat. The Pentagon has indicated it may designate the company a “supply chain risk” (a label usually associated with foreign adversaries) unless Anthropic lifts its restrictions on military use. Such a designation could effectively force Pentagon contractors to pull Claude from sensitive work.


Tensions boiled over on January 3, after US special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that the military used Claude during the operation through a partnership between Anthropic and defense contractor Palantir. Axios also reported that the episode further escalated already fraught negotiations over what exactly Claude could be used for. When Anthropic executives contacted Palantir to ask whether its technology had been used in the attack, the question immediately caused alarm in the Pentagon. (Anthropic counters that the inquiry was not intended to signal disapproval of any specific operation.) Defense Secretary Pete Hegseth is “close” to severing ties, a senior administration official told Axios, adding, “We are going to make sure that they pay the price for forcing our hand in this way.”

The conflict exposed questions about whether a company founded to prevent AI catastrophe can maintain its ethical policies when its most powerful tools – autonomous agents that can process vast data sets, identify patterns and act on their conclusions – are running inside classified military networks. Is “safety first” AI a good fit for clients seeking systems that can independently reason, plan, and act at military scale?

Anthropic drew two red lines: a ban on mass surveillance of American citizens and a ban on fully autonomous weapons. CEO Dario Amodei said Anthropic would support “national defense in every way, except those that bring us closer to an authoritarian enemy.” Other major labs, such as OpenAI, Google, and xAI, have agreed to relax safeguards for use on the Pentagon’s unclassified systems, but their tools have not yet run within the military’s classified networks. The Department of Defense requires that AI be usable for “any lawful purpose.”

This friction tests Anthropic’s central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry wasn’t taking safety seriously enough, and they positioned Claude as an ethical alternative. In late 2024, Anthropic made Claude available through Palantir’s platform at classified cloud security levels, making Claude the first large language model publicly known to operate within a secret system.


The question this conflict now forces is whether “safety first” can hold as a consistent identity once the technology is integrated into sensitive military operations, and whether a red line is even enforceable in practice. “It sounds simple: illegal surveillance of Americans,” said Emelia Probasco, a senior fellow at Georgetown’s Center for Security and Emerging Technology. “But when you get down to it, you have an army of lawyers trying to figure out how to interpret those words.”

[Image: A blurred figure in a navy coat and hood walks past a glossy white wall bearing a sign that reads “Live facial recognition in action” above the Metropolitan Police logo.]

The Department of Defense appears interested in using AI for surveillance. The question is what that looks like. (Image credit: Richard Baker, via Getty Images)

Let’s consider precedent. After the Edward Snowden revelations, the U.S. government defended its mass collection of phone metadata (who called, when they called, and for how long), arguing that this type of data doesn’t have the same privacy protections as the content of the conversations. The privacy debate at the time was about human analysts searching those records. Now imagine an AI system querying vast datasets to map networks, detect patterns, and flag people of interest. Our legal framework was built for the era of human review, not machine-scale analysis.
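To see why machine-scale analysis changes the stakes, note how little code it takes to turn raw call metadata into a contact graph and flag its most-connected member, with no human ever reading a record. A toy sketch with hypothetical data (not any real system):

```python
from collections import defaultdict

# Each record is (caller, callee): metadata only, no content.
calls = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
         ("dave", "alice"), ("eve", "alice"), ("eve", "bob")]

# Build an undirected contact graph from the metadata.
graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)

# "Flag" whoever has the most distinct contacts.
flagged = max(graph, key=lambda person: len(graph[person]))
print(flagged, sorted(graph[flagged]))  # alice, with 4 distinct contacts
```

Scale the same loop to billions of records and the legal question is no longer who reads the data but what the machine infers from it.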


Peter Asaro, co-founder of the International Committee for Robot Arms Control, said, “In some sense, the collection of large amounts of data that is examined by AI amounts to mass surveillance in its simplest definition.” Axios reported that the official “argued that there was significant gray area in the ‘Anthropic restrictions’ and that it would be unfeasible for the Department of Defense to have to negotiate with the company on individual use cases.” Asaro offers two interpretations of the complaint. The generous one is that it is simply impossible to define surveillance in the age of AI. The pessimistic one, he said, is that “they really want to use them for mass surveillance and autonomous weapons, but they don’t want to say that, so they call it a gray area.”


Another of Anthropic’s red lines, autonomous weapons, has a manageably narrow definition: systems that select and engage targets without human supervision. But Asaro sees an even more troubling gray area here. He pointed to the Israeli military’s Lavender and Gospel systems, which reportedly use AI to generate large target lists that human operators approve before attacks are carried out. “Essentially, [they] automated the targeting element; that [is what] we are very concerned [about], and [it is] closely related, even if it deviates from a narrow, precise definition,” he says. The question is whether Claude, operating within Palantir’s systems on classified networks, could be doing similar things – processing intelligence, identifying patterns, surfacing persons of interest – without anyone at Anthropic being able to say exactly where the analytical work ends and the targeting begins.

Operation Maduro puts that very distinction to the test. “If you’re collecting data and intelligence to identify targets, but a human is deciding, ‘Okay, here’s a list of targets that we’re actually going to bomb,’ that preserves the level of human oversight that we’re trying to require,” Asaro said. “On the other hand, we still rely on these AIs to select those targets, and how much scrutiny we apply, and how deeply we dig into the validity or legality of those targets, is another question.”

Anthropic may be trying to draw a narrower line between mission planning, where Claude helps identify bombing targets, and the day-to-day work of processing documents. “There are all these kinds of boring applications of large language models,” Probasco says.

However, the capabilities of Anthropic’s models make these distinctions difficult to maintain. Agent teams in Opus 4.6 can split complex tasks and work in parallel, an advance in autonomous data processing with obvious value for military intelligence. Both Opus and Sonnet can interact with applications, fill out forms, and work across platforms with minimal supervision. These capabilities, which drive Anthropic’s commercial advantage, also make Claude highly attractive inside sensitive networks. Models with huge working memories can hold entire intelligence dossiers. A system that can coordinate autonomous agents to debug code bases can coordinate autonomous agents to map rebel supply chains. The more capable Claude becomes, the thinner the line between the analytical drudgery Anthropic happily supports and the surveillance and targeting the company has promised to reject.

As Anthropic pioneers the frontiers of autonomous AI, the military’s demand for these tools will only increase. Probasco worries that the conflict with the Pentagon creates a false dichotomy between safety and national security. “What about our security and national security?” she asks.

This article first appeared in Scientific American. © ScientificAmerican.com. All rights reserved.

