The Trump administration has designated AI company Anthropic a supply chain risk, a move that could force government contractors to stop using the company’s chatbot, Claude. The Pentagon said it “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately.”
The decision follows a week of public accusations from President Donald Trump and Defense Secretary Pete Hegseth, who said Anthropic endangered national security after CEO Dario Amodei resisted removing safeguards that limit uses tied to mass surveillance of Americans and fully autonomous weapons. Trump gave the military six months to phase out Claude, which is already embedded in many military and national security systems.
Amodei said Anthropic will challenge the designation in court, calling the action legally unsound. He argued the exceptions Anthropic sought were narrow, affecting high-level use areas rather than operational decision-making, and said there had been “productive conversations” with the Pentagon about continued use or a “smooth transition.” He emphasized ensuring warfighters are not deprived of tools during major combat operations.
The Pentagon defended its move as grounded in the need for the military to “use technology for all lawful purposes” and to prevent vendors from restricting lawful use of capabilities, which it said could put warfighters at risk. How broadly the department will apply the risk designation remains unclear. Anthropic said a Pentagon notification suggests the designation applies only when Claude is used “as a direct part of” military contracts. Microsoft said it can “continue to work with Anthropic on non-defense related projects” after legal review.
Some defense contractors have already cut ties. Lockheed Martin said it will follow the President’s and Defense Department’s direction and seek other large language model providers, adding it expects minimal impacts because it does not rely on a single LLM vendor.
The designation drew criticism for repurposing a rule meant to address threats from foreign adversaries. Federal statute defines a supply chain risk as a threat from actors who might sabotage systems, introduce unwanted functions, or subvert them in order to disrupt, degrade, or spy on them. Sen. Kirsten Gillibrand called the move “a dangerous misuse of a tool meant to address adversary-controlled technology.” Neil Chilson, a former FTC chief technologist now at the Abundance Institute, called it “massive overreach” that could hurt the U.S. AI sector and the military’s access to top technology.
A group of former defense and national security officials, including ex-CIA director Michael Hayden and retired service leaders, sent lawmakers a letter calling the authority’s use against a domestic company a “profound departure” from its intent and warning of far-reaching consequences if a U.S. firm is penalized for refusing to remove safeguards against domestic surveillance and fully autonomous weapons.
Meanwhile, Anthropic has seen a surge in consumer interest. The company reported that more than a million people signed up for Claude each day last week, propelling it past OpenAI’s ChatGPT and Google’s Gemini to become the top AI app in Apple’s App Store in over 20 countries.
The dispute has intensified the rivalry between OpenAI and Anthropic, which was founded in part by ex-OpenAI leaders including Amodei. After the Pentagon’s initial actions, OpenAI struck a deal to replace Anthropic’s services in classified military settings. OpenAI CEO Sam Altman later said he regretted rushing a deal that “looked opportunistic and sloppy.” Amodei, for his part, apologized for an internal message that criticized OpenAI and suggested Anthropic was being punished for not offering political praise.