The Trump administration announced Thursday that it is designating artificial intelligence company Anthropic as a supply chain risk.
The Pentagon said it “officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately.” That label forces government contractors to discontinue use of Anthropic’s AI chatbot Claude in work for the US military.

CEO Dario Amodei said in a statement, “We do not believe this action is legally sound, and we see no choice but to challenge it in court.” He added that firms can still use Anthropic’s AI in projects unrelated to the Pentagon.
The decision follows a week in which President Donald Trump and Defense Secretary Pete Hegseth accused Anthropic of posing a national security risk. It caps months of dispute between the company and the Pentagon over safety restrictions embedded in Claude that limit its use in war‑gaming scenarios.
Anthropic had engaged with US national‑security officials but clashed with the military over how its technology could be used on the battlefield. Amodei said Anthropic and the Pentagon had discussed ways Claude might work with the military without dismantling its safeguards. However, Pentagon Chief Technology Officer Emil Michael posted on X that there were no active Department of Defense negotiations with Anthropic.
The Pentagon framed the issue as a matter of principle: the military must be able to use technology “for all lawful purposes” and will not allow a vendor to “insert itself into the chain of command by restricting the lawful use of a critical capability and putting our warfighters at risk.” Amodei countered that the narrow exceptions Anthropic sought, intended to limit surveillance and autonomous weapons, “relate to high‑level usage areas, and not operational decision‑making.”
Edited by: Sean Sinico
