Microsoft’s Court Support for Anthropic Exposes Deep Tensions Between AI Innovation and Pentagon Control
Picture Credit: Rawpixel (Public Domain)

Microsoft’s decision to file a court brief supporting Anthropic in its battle against the Pentagon’s supply-chain risk designation has exposed deep, long-simmering tensions between the imperative to innovate in AI and the military’s desire to control how that technology is used. The brief, filed in a San Francisco federal court, called for a temporary restraining order. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a joint filing, making the case a focal point for the technology industry’s collective concerns.
Anthropic’s confrontation with the Pentagon began when the company refused to sign a $200 million contract to deploy its AI on classified military systems unless the deal included restrictions on its use for mass surveillance or autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk after negotiations broke down, and the Pentagon’s technology chief publicly ruled out any renegotiation. Anthropic responded by filing simultaneous lawsuits in California and Washington, DC, arguing that the designation was unconstitutional.
Microsoft’s involvement is grounded in its direct integration of Anthropic’s AI into federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional agreements with government agencies spanning defense, intelligence, and civilian services. Microsoft publicly called for a collaborative framework in which government and industry jointly determine how AI should and should not be used in national security operations.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s public advocacy of responsible AI development. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the real basis for its contract demands. Anthropic also argued that the designation, never before applied to a US company, represented a gross misuse of national security authority.
Congressional Democrats are pressing the Pentagon for answers about whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at a school, raising questions about human oversight and about what role specific AI targeting systems played. These legislative inquiries are adding another dimension to an already complex legal confrontation. Together, they are forcing a reckoning with the question of who has the authority to set ethical limits on AI in warfare: the companies that build it, or the government that deploys it.