The US government has warned it will remove AI startup Anthropic from Pentagon supply chains and tear up existing agreements as tensions between the two sides escalate sharply.
US Under Secretary of Defense Emil Michael on Thursday strongly rebuked Anthropic CEO Dario Amodei, calling him a “liar” with a “God-complex” and accusing him of trying “to personally control the US military” while putting national security at risk.
The dispute follows a meeting earlier in the week between Amodei and US Defense Secretary Pete Hegseth. Sources familiar with the meeting said Hegseth told Anthropic it had until Friday, February 27, to give the US military full access to its Claude model. If access was not granted, Hegseth reportedly threatened to cut Anthropic from government supply chains or compel it to prioritize government orders, possibly invoking the Defense Production Act — a Cold War–era authority allowing the president to direct domestic industry in the name of national defense.
Anthropic has so far refused to give Washington unrestricted access to Claude for classified military uses that could include fully autonomous lethal operations or domestic mass surveillance. Amodei said in a statement that Anthropic “understands that the Department of War, not private companies, makes military decisions” and that the company has not objected to particular military operations. He added that in a narrow set of cases the company believes AI “can undermine, rather than defend, democratic values.”
Anthropic says it will not allow Claude to be used where final targeting decisions are made without human intervention or for mass domestic surveillance, arguing those uses require guardrails that do not yet exist. Geoffrey Gertz, a senior fellow at the Center for a New American Security, told DW that invoking the Defense Production Act would be an unprecedented attempt to exert control over an AI company and could hamper Anthropic’s ability to remain a leader in responsible AI.
Anthropic has provided the Claude model to US intelligence and defense agencies since November 2024. The Wall Street Journal reported the US military used Claude during the 2026 raid on Venezuela that resulted in the capture of Nicolas Maduro; neither Anthropic nor the Defense Department commented on the report, and details of how the system was used remain unclear.
Hegseth’s threat to remove Anthropic from Pentagon supply chains would carry financial consequences. In July 2025 the Department of Defense awarded Anthropic a $200 million contract to prototype “frontier AI capabilities that advance US national security.” Anthropic hailed the deal as a new chapter in its commitment to US national security while emphasizing responsible deployment and efforts to make its systems reliable, interpretable and steerable.
Anthropic was founded in 2021 by seven former OpenAI employees and has long billed itself as safety-focused. But the company has recently signaled a shift: on February 24, the same day as the Hegseth meeting, Anthropic announced it was softening a core safety policy to remain competitive with other leading AI models. The company said the policy environment had shifted toward prioritizing AI competitiveness and economic growth while federal safety discussions had not gained traction. A company spokesperson told DW the policy change was unrelated to the Pentagon talks.
The clash highlights ethical and strategic questions about government access to powerful AI. Critics say that if Anthropic yields to Hegseth’s demands, or if the Defense Department invokes the Defense Production Act to compel cooperation, the outcome would undercut the company’s safety-first reputation and deepen concerns about deploying AI in lethal or surveillance contexts.
The dispute also reflects the broader Trump administration approach of direct intervention in sectors deemed critical, including prior large-scale investments in chipmaking and rare-earth mining. Observers warn that more interventionist policies change the landscape for corporations and CEOs, departing from a traditionally hands-off US posture toward private-sector technology development.
The standoff is likely to prompt legal and political battles as both sides weigh national security needs, corporate autonomy, and the ethical limits of AI deployment.

This article was updated to reflect statements from Anthropic CEO Dario Amodei and a comment by US Under Secretary of Defense Emil Michael.