A federal judge in San Francisco said Tuesday that the government’s ban on Anthropic appears to be punitive after the AI company publicly clashed with the Pentagon over potential military uses of its Claude model. U.S. District Judge Rita F. Lin made the remark at the start of a hearing on Anthropic’s request for a preliminary injunction in one of its lawsuits challenging the Pentagon’s designation of the company as a “supply chain risk,” a move that has effectively blacklisted it from government business.
“It looks like an attempt to cripple Anthropic,” Lin said, expressing concern that the government might be retaliating against the company for speaking out. She said she expected to rule within a few days on whether to temporarily halt the ban while the court considers the case on the merits.
The hearing in the U.S. District Court for the Northern District of California is the latest development in a dispute between one of the leading AI firms and the Trump administration, with broader implications for government use of AI. Anthropic CEO Dario Amodei announced in late February that the company would not allow Claude to be used for autonomous weapons or to surveil American citizens. President Trump then ordered all U.S. government agencies to stop using Anthropic’s products.
Earlier this month, the Pentagon labeled Anthropic a “supply chain risk,” citing national security concerns — a designation normally reserved for entities seen as potential foreign adversaries. Anthropic has filed two federal lawsuits, one in the Northern District of California and one in the D.C. federal appeals court, arguing that the designation is illegal retaliation for its public stance on AI safety and that it will cost the company customers and revenue by barring Pentagon contractors from working with it. The suits claim the administration violated Anthropic’s First Amendment rights and exceeded the legal scope of its supply chain risk authority.
At Tuesday’s hearing, Anthropic’s lawyers noted this appears to be the first time such a designation has been applied to a U.S. company. Judge Lin acknowledged the Pentagon’s authority to choose which AI products it uses, but questioned whether the government broke the law by barring its agencies from using Anthropic’s products and by requiring, per Defense Secretary Pete Hegseth’s announcement, that prospective Pentagon contractors sever ties with the company.
Lin found the actions “troubling,” saying they did not seem narrowly tailored to address the stated national security concerns — which, she suggested, could be addressed simply by the Pentagon ceasing to use Claude — and instead resembled punishment of Anthropic. Government lawyers countered that the measures were not retaliatory but rested on legitimate concerns about Anthropic’s refusal to permit certain uses of its model, and on the theoretical risk that future updates to Claude could pose national security problems.
Anthropic did not immediately respond to a request for comment. A Pentagon spokesperson declined to comment on ongoing litigation.