A US federal judge has temporarily blocked the Pentagon from designating AI firm Anthropic as a national security risk and from cutting its government contracts, ruling that the actions likely violated the law and constituted retaliation for the company’s protected speech.
The dispute centers on Anthropic’s refusal to permit unrestricted military use of its Claude large language models. The Department of Defense sought an agreement allowing “all lawful uses,” but Anthropic resisted, citing ethical concerns over potential applications such as mass domestic surveillance or fully autonomous weapons. After negotiations broke down, defense officials imposed the national security risk designation and ordered contractors to cease using Claude.
In her ruling, US District Judge Rita Lin called the designation a “classic” case of First Amendment retaliation. She noted that such a label is typically reserved for foreign intelligence agencies or terrorists, not domestic companies exercising free speech. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary… for expressing disagreement with the government,” Lin wrote.
Anthropic had sued the Trump administration earlier this week, alleging the move was “unprecedented and unlawful” and a direct reaction to its public stance on ethical AI use. The company argued the government cannot use its power to punish protected speech.
The conflict escalated after President Donald Trump directed all federal agencies to stop using Anthropic’s technology, giving the military six months to phase out existing systems. Secretary of War Pete Hegseth criticized the company’s position, announcing a shift toward a “more patriotic
