President Trump ordered federal agencies to stop using AI products from Anthropic and the Pentagon moved to designate the company a national security supply-chain risk, sharply escalating a dispute over how the military may use advanced AI.
Hours after the president’s announcement, rival OpenAI said it had reached an agreement to provide its AI technology for classified Defense Department networks. The moves capped an acrimonious fight between Anthropic and the Pentagon over whether the company could restrict its models from being used for domestic mass surveillance or to power fully autonomous weapons as part of a potential Pentagon contract worth up to $200 million.
On Truth Social, Trump denounced Anthropic and directed “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” saying there would be a six-month phaseout. Defense Secretary Pete Hegseth, invoking the department’s rebranded name, the “Department of War,” followed by saying he would designate Anthropic a supply-chain risk to national security and blacklist it from working with the U.S. military or its contractors. Hegseth said Anthropic would be allowed to continue services for up to six months to enable a transition to “a better and more patriotic service.”
Anthropic said it would challenge the designation in court, calling it legally unsound and a dangerous precedent for companies that negotiate with the government. The company argued Hegseth lacks statutory authority to bar contractors from using Anthropic’s Claude model for non-Defense Department customers and said a supply-chain risk designation should only apply to Department of War contracts.
Anthropic said it had negotiated in good faith for months and supports all lawful national security uses of AI, with two narrow exceptions it has insisted on: barring fully autonomous weapons and prohibiting its models from enabling mass domestic surveillance of Americans. The company said those limits reflect safety concerns — current frontier models are not reliable enough for fully autonomous weapons, and using them that way would endanger service members and civilians — as well as civil-rights concerns about mass surveillance.
Anthropic CEO Dario Amodei has publicly defended the company’s stance, saying the Department of War, not private firms, makes military decisions, but that certain uses are outside what today’s technology can safely do. Pentagon undersecretary for research and engineering Emil Michael sharply criticized Amodei on X, accusing him of dishonesty and seeking to control the military. Michael and other Pentagon officials have said federal law and department policy already bar domestic mass surveillance and the use of AI in autonomous weapons without proper oversight.
During the standoff, the government threatened to invoke the Korean War–era Defense Production Act to compel Anthropic to allow certain uses of its technology, even as it warned of blacklisting. Hegseth accused Anthropic of trying to seize veto power over military operational decisions and said the department must have full access to models for every lawful defense purpose.
OpenAI CEO Sam Altman, who had earlier expressed concerns similar to Anthropic’s about certain military uses of AI, said OpenAI’s agreement with the Defense Department includes safeguards barring domestic mass surveillance and requiring human responsibility for use of force — the same “red lines” Anthropic had sought. OpenAI, Google and Elon Musk’s xAI already have Defense Department contracts; xAI was approved earlier in the week for classified settings. OpenAI said it would negotiate to deploy its models in classified systems with exclusions preventing use for U.S. domestic surveillance or powering autonomous weapons without human approval.
Altman reportedly sent an internal note saying OpenAI was seeking a deal to deploy models in classified systems with those exclusions. The Defense Department did not comment publicly on his remarks. In a public comment, Altman said companies should work with the military “as long as it is going to comply with legal protections” and the shared red lines.
Independent experts called the public clash unusual for Pentagon contracting, where suppliers typically do not dictate how the Defense Department uses products. Jerry McGinn of the Center for Strategic and International Studies said the dispute reflects the novel and untested nature of AI and noted it is atypical for companies to negotiate use cases for every contract.
The broader context: Anthropic, valued at roughly $380 billion and planning an initial public offering, generates about $14 billion in revenue. While the potential Pentagon contract is small relative to that revenue, the dispute with the administration could influence investor sentiment and other licensing deals. Anthropic has said its valuation and revenue have grown since it pushed back on Pentagon demands.
The showdown highlights a growing fault line over whether AI firms can put contractual limits on government uses of their tools and how to balance national security needs with safety and civil-rights concerns as AI is integrated into defense systems.
NPR’s Bobby Allyn contributed to this report.