The U.S. Department of Defense has escalated its push to integrate commercial artificial intelligence into classified military systems, signing an agreement with Elon Musk’s xAI to deploy its Grok model. The move applies direct pressure on rival contractor Anthropic, which has refused Pentagon demands to remove ethical constraints on its Claude model for military use.
According to reports confirmed by Axios, the deal makes Grok the second AI system approved for the Pentagon’s most sensitive networks, joining Anthropic’s Claude. Until now, Claude has been the sole model available on these classified platforms, deployed through a partnership with Palantir Technologies and used for intelligence analysis and weapons development.
The agreement comes ahead of a scheduled meeting between Secretary of Defense Pete Hegseth and Anthropic CEO Dario Amodei. Sources indicate Hegseth will present an ultimatum: Anthropic must make Claude available for “all lawful purposes” without its current safeguards, or risk being designated a “supply chain risk.” Such a label could formally restrict the company’s access to government contracts.
Anthropic has consistently opposed Pentagon requests to lift restrictions that bar its technology from uses including mass surveillance of Americans and fully autonomous weapons systems. In contrast, xAI has reportedly agreed to the Pentagon’s terms, though the company has not publicly commented. Separately, Google is said to be nearing a similar classified-use deal for its Gemini model, while OpenAI remains in negotiations focused on safety technologies.
Pentagon officials acknowledge that removing Anthropic’s model could cause temporary disruptions. Claude was notably used in the operation to detain Venezuelan President Nicolás Maduro last month, marking the first known instance of an AI model playing a direct role in an active military raid.
Anthropic markets itself as a safety-focused developer, and Amodei has publicly warned about existential risks from advanced AI, including autonomous systems. The recent, abrupt resignation of the company’s Safeguards Research Team lead, Mrinank Sharma, who cited deep concerns about AI’s dangers, has drawn additional attention to the firm’s ethical stance.
The Pentagon’s actions signal a strategic shift toward adopting commercial AI models with fewer operational constraints for critical defense applications, potentially reshaping the landscape of military technology partnerships.