Wednesday, March 18, 2026

Microsoft Tells Federal Court That Pentagon’s Anthropic Blacklist Threatens the Entire Defense AI Ecosystem


Microsoft has told a federal court in San Francisco that the Pentagon’s decision to blacklist Anthropic as a supply-chain risk threatens not just one company but the entire ecosystem of defense and commercial technology built on artificial intelligence. The company’s amicus brief called for an urgent temporary restraining order against the designation and was accompanied by a separate filing from Amazon, Google, Apple, and OpenAI. The coordinated response from the technology industry underscores how broadly the Pentagon’s action is expected to reverberate.
Anthropic was designated a supply-chain risk after it refused, during negotiations over a $200 million Pentagon contract, to allow its Claude AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth formalized the designation after the talks collapsed, and cancellations of the company’s government contracts soon followed. Anthropic responded by filing two lawsuits on the same day, in California and Washington DC, challenging the designation as unconstitutional and without historical precedent.
Microsoft’s filing carries particular weight because the company integrates Anthropic’s AI directly into military systems it provides to the US government. As a partner in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract and the holder of additional federal agreements worth several billion dollars more, Microsoft has a direct commercial and strategic stake in the case. The company publicly argued that giving the military access to the best AI and governing that AI responsibly are goals that must be pursued in partnership between government and industry.
Anthropic’s lawsuits argue that the supply-chain risk label, normally applied to firms with ties to foreign adversaries, is being misused as a political tool to punish a US company for its AI safety advocacy. The company’s court filings disclosed that it does not currently believe Claude is safe or reliable enough for autonomous lethal operations, a judgment it said was the genuine basis for the usage restrictions it sought. The Pentagon’s technology chief has publicly declared that renegotiation is not an option.
Congressional Democrats are simultaneously pressing the Pentagon for answers about whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at a school, including whether any human review took place before the strike. Their inquiries run parallel to Anthropic’s lawsuits and add legislative pressure to an already intense legal confrontation. Together, these developments are forcing a public reckoning over the governance of artificial intelligence in US military operations.
