The US government’s decision to bar AI firm Anthropic from its entire supply chain has prompted a proposal from a German Social Democratic Party (SPD) politician who says it’s a “once-in-a-lifetime chance” for Germany and Europe.
Anthropic, valued at about $380 billion (€327 billion), developed the Claude family of large language models with a range of applications, including drone piloting. The company has partnered with the US military and government since late 2024, and US media have reported Claude was used in operations in Venezuela and in strikes on Iran, though details remain unclear.
In late February, US Defense Secretary Pete Hegseth demanded Anthropic remove contract clauses that prohibit use of Claude for mass surveillance and fully autonomous weapons. When Anthropic co-founder and CEO Dario Amodei refused, Hegseth labeled the company a “supply chain risk”—a designation never before applied to a US firm—and President Donald Trump ordered all government agencies to stop using Anthropic’s services. Anthropic has filed a legal challenge to the designation, arguing the blacklisting could cost it billions and cause reputational damage.
Anthropic said it refused the Trump administration’s demands for two reasons: it believes current frontier AI models are not reliable enough for fully autonomous weapons, and that mass domestic surveillance would violate fundamental rights.
Matthias Mieves, the SPD’s digital policy spokesperson, wrote to Chancellor Friedrich Merz and European Commission President Ursula von der Leyen proposing Europe offer to host Anthropic to develop its “human-centered, trustworthy AI” models. Mieves argued Anthropic’s stance on individual protection aligns with European legal norms.
Mieves proposed three measures for German and EU governments:
– Offer a major European city (he suggested Berlin, Paris, or Munich) as a potential new base for Anthropic.
– Create an alliance of European investors, drawing on public bodies, pension funds, and Euro-bonds, to strengthen “digital sovereignty.”
– Provide guarantees from the European Commission and major European countries to partner with Anthropic going forward.
Experts in the AI field were skeptical. Daniel Abbou, head of Germany’s AI industry association BVKI, called the plan “very, very nice” but unrealistic. He noted Anthropic’s main investors include Amazon and Google, and the company is deeply embedded in US cloud infrastructure and capital markets; moving to Europe could jeopardize contracts and liquidity. Abbou also doubted that losing US government contracts would immediately collapse Anthropic’s US business, pointing to strong usage of Claude models such as Opus 4.6.
Abbou suggested a more viable approach: establish an AI lab in Europe in collaboration with Anthropic to create a “reverse brain-drain” and keep European talent from moving to US firms. He emphasized Europe’s weak venture capital ecosystem and limited scaling opportunities, which push many of the roughly 600 German AI companies to the US.
Mieves acknowledged his proposal might be far-fetched but said it was intended to be radical to spark bigger thinking. “We here in Europe need to think much bigger,” he said, arguing that radical measures are necessary if Europe doesn’t want AI developments to be dominated by the US and China.
The EU is implementing its Artificial Intelligence Act as part of a broader Apply AI Strategy to promote “values-based regulation.” The act classifies AI applications into four risk levels—unacceptable, high, limited, and minimal—though some companies urge delaying the law out of concern it could stifle early innovation and hinder the EU’s ambition to become an “AI continent.”
European politicians and business leaders frequently call for “digital sovereignty,” meaning data and expertise should remain in the EU while protecting intellectual property. Arthur Mensch, CEO of French model-maker Mistral—the only European firm currently able to build large language models—stressed the need to keep US and Chinese companies out of critical European industry. He warned that relying on foreign AI would create dangerous dependencies and said Europe must be able to build models and operate the data centers that run them.
Despite broad awareness of the problem, Mieves said politicians often talk about digital sovereignty without concrete plans or ambitious measures. His proposal aimed to force a more assertive discussion about what Europe must do to gain independence in AI.
Anthropic was repeatedly contacted for comment but did not respond before publication.