Artificial intelligence lab Anthropic filed suit Monday challenging the Pentagon’s recent designation of the company as a “supply chain risk” after Anthropic refused to allow unrestricted military use of its AI system, Claude.
The Defense Department had demanded Anthropic remove guardrails that block uses such as autonomous weapons or domestic surveillance. Before the February 27 deadline, Anthropic CEO Dario Amodei warned Defense Secretary Pete Hegseth about the dangers of deploying untested AI in autonomous warfare and declined to lift the restrictions.
The Pentagon said technology firms should not dictate defense policy. Hegseth wrote on X that US troops “will never be held hostage by the ideological whims of Big Tech.”
Anthropic announced it would legally contest the designation, arguing the unprecedented move is unlawful and sets a dangerous precedent for other tech companies contracting with the government. Lawsuits filed in California, where Anthropic is based, and in Washington, DC, seek to overturn the designation and block its enforcement. In its California complaint, reported by the Wall Street Journal, Anthropic said the government was "seeking to destroy" the company's economic value.
Anthropic maintains that even the most advanced AI models are not reliable enough for automated weapons systems, and says using its technology for surveillance would violate fundamental rights.
The Pentagon has insisted it needs full access to AI functionality for "any lawful" use and framed Anthropic's refusal as a private company imposing policy constraints on defense operations. Observers noted the move is extreme: Anthropic is the first US-based company to be labeled a supply chain risk, a label previously applied mainly to foreign firms regarded as security threats, such as Huawei. Under US law, a supply chain risk refers to the risk that an actor could "sabotage" a system or "maliciously introduce" unwanted functions into it.
Anthropic, backed by investors including Amazon, is effectively banned from doing business with federal agencies under the designation, and the restriction could affect contracts with government contractors and suppliers. President Donald Trump issued a government-wide ban on Anthropic technology, saying the company was run by "left wing nutjobs."
Amodei said the designation has a “narrow scope” and that businesses can still use Anthropic tools for projects unrelated to the Defense Department. The company had been negotiating usage limits with the Pentagon after signing a $200 million contract in July 2025; that contract was canceled following the dispute. At the time, the contract announcement praised the advancement of “responsible AI in defense operations.”
Despite the legal battle and the risk designation, Claude remains embedded in some Defense Department operational intelligence systems. US media have reported Claude was heavily used in planning the recent US-Israel attack on Iran.
Edited by: Jenipher Camino Gonzalez
