US Defense Secretary Pete Hegseth’s reported ultimatum to AI startup Anthropic — give the Pentagon full access to its Claude model or face either exclusion from defense supply chains or compelled cooperation under the Defense Production Act (DPA) — highlights growing tensions over control of advanced AI.
According to sources, Hegseth summoned Anthropic CEO Dario Amodei to Washington and demanded unrestricted Pentagon access to Anthropic’s generative AI. An Anthropic spokesperson confirmed the meeting and said Amodei expressed appreciation for the Department’s work and for continued “good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
Sources familiar with the talks told the Financial Times that Hegseth made two explicit threats: to cut Anthropic out of the Pentagon supply chain, and to invoke the Cold War–era DPA, which can be used to direct domestic industry production and prioritize government orders. A senior Pentagon official told the FT the DPA could be used to compel Anthropic to cooperate with the Pentagon “regardless of if they want to or not.”
Anthropic has resisted providing complete access for classified military uses, particularly for operations that could involve final targeting decisions made without human intervention or for domestic mass surveillance. The company has marketed itself as safety-first and is understood to view some military applications as inconsistent with its long-term interests given current technology and safety guardrails.
Geoffrey Gertz, a senior fellow at the Center for a New American Security, warned that invoking the DPA would be an unprecedented exertion of control over an AI company and could damage Anthropic’s ability to continue leading in responsible AI. He said aggressive government actions that curtail a company’s markets could backfire on broader AI policy goals.
Anthropic has provided its Claude model to US intelligence and defense agencies since November 2024. The Wall Street Journal reported the US military used Claude during the 2026 raid that resulted in the capture of Venezuela’s Nicolás Maduro, though details and official comment from Anthropic and the Defense Department were not available. The company also holds a $200 million DoD contract awarded in July 2025 to prototype frontier AI capabilities for national security.
Anthropic has emphasized responsible deployment in its government work, saying powerful technologies carry great responsibility and that reliability, interpretability and steerability are essential in government contexts. The company was founded in 2021 by former OpenAI staff and is led by CEO Dario Amodei, who has described Anthropic’s mission as ensuring AI is a force for human progress, not peril.
Despite that positioning, Anthropic announced on Feb. 24 that it was softening aspects of its core safety policy to remain competitive with other leading AI models. The company said the policy environment has shifted toward prioritizing AI competitiveness and economic growth while safety-focused discussions have not yet advanced meaningfully at the federal level. Anthropic’s spokesperson said the policy change was unrelated to Pentagon negotiations. The move comes amid intense competition from OpenAI, Google and others, and against a backdrop of limited federal AI regulation.
The dispute raises ethical questions about whether Anthropic could retain a safety-first stance if it accedes to Pentagon demands or if the government uses the DPA to direct its technology toward controversial military or domestic-surveillance uses. It also illustrates a broader shift toward more interventionist US industrial policy under the Trump administration, which has previously made large direct investments in strategic sectors — such as an $8.9 billion stake in Intel and investments in rare-earth suppliers — indicating willingness to intervene where it sees critical national interests.
Observers say CEOs and companies are navigating a new landscape in which the government is more likely to directly influence corporate decisions in strategically important sectors. Critics argue this departs from the traditionally hands-off US approach that left technological development largely to the private sector, while supporters contend stronger government leverage is needed to secure national defense priorities.
Edited by: Ashutosh Pandey
