The Pentagon is headed for a showdown with Anthropic, one of the world’s leading AI companies, after Anthropic’s CEO refused a Defense Department demand to loosen safety restrictions on its model or face being cut off from military work.
Hundreds of millions of dollars in contracts and access to advanced AI tools are at stake. Here’s what to know and what could happen next.
Pentagon and Anthropic clash over acceptable military uses of AI
For months, Anthropic CEO Dario Amodei has said his company’s AI, Claude, must not be used for domestic mass surveillance or to power fully autonomous weapons that select and kill targets without human approval. Amodei has called those uses “entirely illegitimate” and “bright red lines” for the company.
The Pentagon says it does not intend to use Anthropic’s tools for surveillance or autonomous weapons, but argues contractors should not unilaterally decide how government technology is used. Defense officials have insisted that AI vendors must allow the U.S. government to use their tools “for all lawful purposes.” A senior Pentagon official told NPR that legality is the Pentagon’s responsibility as the end user.
Amodei rejected the Pentagon’s latest contract changes on Thursday. In a statement, he said Anthropic believes in using AI to defend democracies but that a narrow set of uses—domestic mass surveillance and autonomous weapons—can undermine democratic values and are outside what today’s technology can safely and reliably do. He said those uses were never part of existing contracts and should not be added now.
Tensions escalated after a meeting this week between Defense Secretary Pete Hegseth and Amodei, in which Hegseth is reported to have threatened consequences if Anthropic did not comply. People familiar with the meeting said Hegseth suggested canceling Anthropic’s roughly $200 million contract. A Pentagon official also warned of actions ranging from forcing Anthropic to make its model available to the government to effectively blacklisting the company from U.S. military work.
“We cannot in good conscience accede to their request,” Amodei wrote, while adding that Anthropic hopes the Pentagon will reconsider given the value its technology provides to the armed forces.
A firm deadline from the Pentagon
Pentagon spokesman Sean Parnell posted on X that Anthropic had until 5:01 p.m. ET on Friday to accept the department’s terms, or the Pentagon would terminate the partnership and deem Anthropic a supply chain risk for the Department of War (the Pentagon’s rebranded name in those posts).
Anthropic responded that the overnight contract language the Pentagon sent “made virtually no progress” on preventing Claude’s use for mass surveillance or in fully autonomous weapons. The company said what was framed as a compromise was paired with legal clauses that could allow those safeguards to be ignored. Anthropic said it remains ready to continue negotiations and is committed to operational continuity for the Department and its warfighters.
What “supply chain risk” could mean
Calling Anthropic a supply chain risk would be an unusual move. Geoffrey Gertz, a senior fellow at the Center for a New American Security, noted the designation has typically been used against foreign-adversary technology, such as Huawei. The practical effects are uncertain: the designation could bar Pentagon contractors from using Anthropic's tools in their defense work specifically, or it could bar those contractors from using Anthropic's tools in any of their work — the latter, far broader reading would be especially damaging to Anthropic.
The Pentagon has also threatened to invoke the Defense Production Act to compel Anthropic to remove guardrails. That would be an extraordinary use of the law, which is normally reserved for rare emergency situations and can give the government control over commercial sectors when needed. Using it to force a U.S. company to loosen safety safeguards would be highly unusual.
Gertz observed the two threats are somewhat contradictory: Anthropic is portrayed as both so risky it should be excluded and so essential it must be compelled to remain part of defense systems.
This dispute is likely to continue
The Pentagon’s contract with Anthropic is worth up to $200 million, a relatively small share of Anthropic’s reported $14 billion in revenue. Anthropic was the first AI company cleared for classified government use after officials judged its model advanced and secure enough for sensitive military applications. The Defense Department also has similar relationships with other AI firms, including Google, OpenAI and xAI.
If the Pentagon simply cancels the contract, the immediate dispute might end there. But if the department seeks to force Anthropic to strip guardrails or imposes a broad supply-chain-risk designation, legal battles are likely. Gertz predicts the company would push back legally if the Pentagon escalates.
NPR’s Bobby Allyn contributed to this report.