Claude Said No to the Pentagon and Meant It

> Breaking News Update — 12 March 2026

Since this piece was first published, Anthropic has escalated significantly. The company has now filed two separate lawsuits against the US Department of Defense and the Trump administration, challenging both the supply chain risk designation and the legal authority used to impose it. One case was filed in federal district court in California; the other in the US Court of Appeals for the DC Circuit. Full details below.


Update: Anthropic Is Suing the Pentagon

Anthropic has taken the extraordinary step of filing two lawsuits against the US government, and it is worth being precise about what it is arguing and why.

The supply chain risk designation, which Anthropic calls “unprecedented and unlawful,” has now led to terminated government contracts and is beginning to affect its relationships with private partners who have Pentagon exposure. That is the commercial injury that gives Anthropic legal standing to sue. This is no longer just a public dispute about ethics. It is a formal legal challenge.

The first argument is a First Amendment claim. Anthropic says the government is punishing it for its policy position, specifically its position that Claude should not be used for fully autonomous lethal weapons or mass domestic surveillance of American citizens. That is a viewpoint. The government is retaliating against a company for holding it. Anthropic argues that is a violation of constitutional free speech protections.

The second argument challenges the legal scope of the supply chain risk designation itself. That tool has historically been used against foreign entities, most notably Chinese technology companies deemed to pose national security risks. Applying it to a US-based company, in apparent retaliation for that company’s refusal to loosen its own ethical policies, stretches the designation well beyond its original legal purpose. Anthropic argues this goes beyond what the law allows.

The Pentagon’s position is straightforward: national defence strategy must be governed by US law, not by a contractor’s internal policies. Defence officials say they need full flexibility to use AI for “any lawful use,” and argue that Anthropic’s restrictions could limit tools that might be critical in wartime. They insist they have no intention of using AI for illegal mass surveillance or fully autonomous weapons. Which, of course, makes you wonder why removing the restrictions was so important to them.

President Trump has backed the designation and ordered federal agencies to phase out use of Claude over the coming months.

Why does this matter beyond the specific case? Because this is the first major legal test of a question the AI industry has been circling for a while: can a technology company embed ethical constraints into its products and legally defend them when a government client demands they be removed? And can a government retaliate against a company for refusing, using national security tools designed for foreign adversaries?

The outcome will not just affect Anthropic. Every AI company with government clients, or ambitions of having them, is watching this closely. The precedent that gets set here will shape what it means to sell AI to the state for the next decade.


Last week, Anthropic did something that most companies in its position would not do. It looked at a $200 million government contract, looked at what the Pentagon was demanding in order to keep it, and said no.

The fallout was immediate. Trump ordered every federal agency to stop using Anthropic’s technology. Pete Hegseth, the Defence Secretary, labelled Anthropic a “supply chain risk,” a designation normally reserved for companies linked to foreign adversaries. Within hours of the deadline passing, OpenAI announced it had stepped in and struck its own deal to fill the gap on classified networks.

And somewhere in all of that drama, I think the actual story got a bit lost. So let me try to tell it properly.

How We Got Here

Anthropic signed its $200 million contract with the US Department of Defense in July 2025. Claude was the first AI model to be cleared for deployment on classified government networks. That is not nothing. That is a significant endorsement of both the capability and the trustworthiness of the technology.

Built into Anthropic’s acceptable use policy, and therefore into the contract, were two restrictions. Claude would not be used for mass domestic surveillance of American citizens. And Claude would not be used to power fully autonomous weapons systems, meaning systems that can select and engage targets without a human in the decision chain.

For months, this sat fine. By Anthropic’s own account, and crucially, by the account of people inside the Department of Defense who were actually using Claude, neither of those restrictions had ever been triggered. Not once. The tools were being used for intelligence analysis, operational planning, simulations, cyber operations. All of it within the agreed terms.

Then, at some point in early 2026, the Pentagon decided it wanted those restrictions removed. Not because it needed to do those things. But because it did not want a private company telling it what it could and could not do with technology it was paying for.

The demand was simple: agree to let us use Claude for “all lawful purposes” with no carve-outs. Anthropic said no. Defence Secretary Hegseth set a deadline of 5:01pm on Friday 28 February. The deadline passed without agreement. The contract was terminated. The supply chain risk designation was issued. Trump posted on Truth Social. And Sam Altman announced OpenAI’s deal before the day was out.

What Anthropic Actually Said

I want to be precise here because the nuance matters. Anthropic’s CEO Dario Amodei was clear about why those two restrictions exist. On autonomous weapons: frontier AI systems today are simply not reliable enough to make life-or-death targeting decisions without human oversight. He offered to collaborate with the Pentagon on research to improve that reliability. The Pentagon declined.

On surveillance: modern AI can stitch together individually harmless data (location records, browsing history, social connections) into a comprehensive portrait of any person’s life. The argument is not that surveillance is always wrong. It is that giving AI the unconstrained ability to profile American citizens at scale is a line that should not be crossed without a great deal more thought than a contract amendment allows for.
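To make that concrete, here is a minimal sketch of the data-fusion point. Everything in it (the identifiers, the records, the field names) is invented for illustration, not drawn from the case. Each dataset is harmless on its own; joined on a shared identifier, they become a profile:

```python
from collections import defaultdict

# Three "harmless" datasets, all keyed by the same user identifier.
# Every record here is hypothetical.
locations = [("u42", "2026-03-01 08:10", "clinic, Elm St")]
browsing  = [("u42", "2026-03-01 21:45", "forum: union organising")]
contacts  = [("u42", "u17"), ("u42", "u99")]

profile = defaultdict(list)
for uid, ts, place in locations:
    profile[uid].append(f"was at {place} at {ts}")
for uid, ts, page in browsing:
    profile[uid].append(f"read '{page}' at {ts}")
for uid, other in contacts:
    profile[uid].append(f"associates with {other}")

# Each row alone reveals little; the join reveals a life.
print(profile["u42"])
```

The worry Amodei is pointing at is this join, performed at population scale, with an AI system doing the stitching automatically.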

Amodei also pointed out what he called the logical contradiction in the government’s position. The supply chain risk designation treats Anthropic as a national security threat. The threatened use of the Defense Production Act, which was also floated, treats Claude as essential to national security. You cannot hold both of those positions simultaneously. They cancel each other out.

Anthropic said it clearly: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

The Valuation Context

There is a number worth sitting with here. Anthropic is currently valued at around $380 billion. That $200 million contract represents approximately 0.05% of that valuation. In the grand scheme of Anthropic’s finances, it is a rounding error.
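The back-of-envelope maths, for anyone who wants to check it (both figures as reported, not independently verified):

```python
contract = 200_000_000        # reported contract value, USD
valuation = 380_000_000_000   # reported valuation, USD
print(f"{contract / valuation:.3%}")  # -> 0.053%
```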

That changes the calculus a bit. This was never going to be an existential threat to Anthropic as a business. The bigger danger is the supply chain risk designation itself and what it might mean for Anthropic’s enterprise customers with Pentagon exposure. If a company’s general counsel decides that using Claude creates too much risk because of that designation, Anthropic could lose relationships worth far more than $200 million.

The company’s bet is that standing firm is worth that risk. That the long-term reputational and commercial value of being the AI company that drew a line in the sand, and held it, outweighs the short-term cost of a government contract and some enterprise churn. That is a bold position. I think it is the right one.

Where OpenAI Fits In

OpenAI moved fast. Within hours of Anthropic’s contract being terminated, Altman had announced his own deal. He was careful to say publicly that OpenAI has the same red lines as Anthropic: no domestic mass surveillance, no autonomous weapons without human approval. Those restrictions are written into OpenAI’s agreement and backed by controls.

[Illustration: two groups of people carry items labelled “Missile” and “Camera” across a bridge, from a computer labelled Claude to one labelled ChatGPT.]

But, and this is the bit that caught my eye, Altman simultaneously called Anthropic’s Super Bowl ads “clearly dishonest” and suggested that Anthropic “serves an expensive product to rich people.” He also said, on the same day he stepped into a contract that had just been terminated because of a values dispute, that it is important for AI companies to work with the military.

I am not saying OpenAI is wrong to take the contract. Their stated position on autonomous weapons and surveillance is the same as Anthropic’s. But there is something a bit uncomfortable about the speed with which they moved in, and the simultaneous criticism of the company whose position they claim to share. Make of that what you will.

What This Means for Everyone Else

There is a broader question here that goes beyond Anthropic and beyond this specific contract. It is about what happens when governments decide that the terms of service of a private company are incompatible with national security needs.

This is the first time the US government has ever designated an American company a supply chain risk in apparent retaliation for declining to remove its own safety restrictions. That is unprecedented. If it stands up legally, it sets a precedent that other AI companies will now be judged against. If you want a government contract, you may need to be willing to remove your guardrails on demand.

Some people on Reddit and elsewhere have argued that Anthropic is not the hero here: it still accepts broad military use of Claude, it took the contract in the first place, and the red lines it drew protect only US citizens. That critique has some merit. The scope of what Anthropic was willing to accept before the sticking point was quite broad.

But the two things it refused to budge on (mass surveillance of ordinary people and autonomous systems that can kill without human oversight) are not narrow technicalities. They are exactly the things that most of us, if asked in the abstract, would say AI should never be used for.

The fact that Anthropic was willing to lose the money over them matters. Even if the maths made it easier than it looked.

A Visible Switch

There is one more thing worth noting. In the days since this story broke, there has been a visible and measurable movement of people switching from ChatGPT to Claude. Search “cancel ChatGPT” on any social media platform and you will see it happening in real time. Some of it is principled. Some of it is backlash against the speed of OpenAI’s opportunism. Some of it is people who had already been thinking about switching and needed a nudge.

Anthropic also launched a memory import tool at claude.com/import-memory that makes switching much easier, which I cover separately. The timing is either very fortunate or very deliberate. Probably a bit of both.

The AI industry is converging on some genuinely important questions about values, about who sets the rules, and about what companies owe the people who use their products. This week, Anthropic gave a clear answer to where it stands. Whether that answer holds, and what it costs them, we will find out over the months ahead.


I discussed this topic on the latest episode of Prompt Fiction. Listen to Chapter 12, Part 2 here.

Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD