Microsoft has sounded the alarm in a San Francisco federal court, warning that the Pentagon’s controversial designation of Anthropic as a supply-chain risk could cause widespread disruption across the technology and defense sectors. The company filed an amicus brief supporting Anthropic’s request for a temporary restraining order, emphasizing that the case affects not just Anthropic but the entire network of companies that rely on its AI products. Google, Amazon, Apple, and OpenAI joined the filing, forming an unprecedented coalition of support.
Anthropic’s dispute with the Pentagon began when negotiations over a $200 million contract to deploy AI on classified military systems fell apart. The AI company had insisted on restrictions preventing use of its technology for mass domestic surveillance or autonomous lethal weapons, conditions the Defense Department rejected. Defense Secretary Pete Hegseth responded by branding Anthropic a supply-chain risk, and the Pentagon’s top technology officer later stated publicly that renegotiation was completely off the table.
Microsoft’s legal filing carries enormous weight given its entrenched position as one of the Pentagon’s top technology suppliers. The company is a partner in the $9 billion Joint Warfighting Cloud Capability contract and holds additional agreements spanning defense, intelligence, and civilian government agencies. Microsoft said publicly that the country’s military needs access to the best available technology and that AI governance must be shaped in partnership between the private and public sectors.
Anthropic launched two legal challenges simultaneously, one in federal district court in California and one in the U.S. Court of Appeals for the D.C. Circuit, arguing that the supply-chain risk designation was constitutionally impermissible. The company claimed the label, traditionally reserved for firms linked to foreign adversaries such as China, was being applied as ideological retaliation. Anthropic also stated in its filings that it lacks sufficient confidence in Claude’s reliability and safety in autonomous warfare contexts, which it said was the basis for its contract demands.
As this legal drama unfolds, Congress is independently investigating the use of AI in recent US military operations in Iran. Lawmakers have specifically asked whether AI tools were used to select the target of a strike that reportedly killed more than 175 civilians at an elementary school. The timing of these parallel inquiries has intensified public scrutiny of how the US military is deploying artificial intelligence and whether adequate human oversight is in place.