Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute

Dave Lawler
2026.02.16
Web · by ģ“ķ˜øėÆ¼
#AI #AI Safeguards #Anthropic #National Security #Pentagon

Key Points

  1. The Pentagon is threatening to cut ties with Anthropic over the AI firm's refusal to lift usage limitations for military applications, specifically its prohibitions on mass surveillance of Americans and fully autonomous weaponry.
  2. Anthropic maintains its "hard limits" despite a substantial contract and its models being the first to enter classified networks, while affirming its commitment to U.S. national security.
  3. The escalating dispute, which the Pentagon says creates an "unworkable" situation, contrasts with other major AI labs such as OpenAI and Google, which have shown more willingness to remove restrictions for military use.

The article details a significant dispute between the U.S. Pentagon and AI firm Anthropic over the permissible scope of AI model usage in military and intelligence applications. The core conflict centers on Anthropic's insistence on maintaining specific limitations on its models, particularly Claude, while the Pentagon demands unrestricted usage for "all lawful purposes" across critical domains including weapons development, intelligence collection, and battlefield operations.

Anthropic's "core methodology" for responsible AI deployment, as articulated in this context, involves strict prohibitions against two specific categories of use: the mass surveillance of American citizens and the deployment of fully autonomous weaponry. This stance reflects an "ideological" commitment, as described by a senior administration official, to mitigate potential catastrophic risks and ethical dilemmas associated with advanced AI. From Anthropic's perspective, these limitations are non-negotiable hard limits embedded within its Usage Policy, forming a foundational constraint on its technological offerings to national security clients. The company affirms that its discussions with the Department of War (DoW) have consistently focused on these specific policy questions rather than individual operational contexts.

The Pentagon, conversely, advocates a principle of maximum utility: any lawful application of AI should be permissible, without vendor-imposed guardrails stricter than those applied to ordinary users. The military's position is that the "gray area" surrounding Anthropic's prohibitions makes it operationally unfeasible to negotiate individual use cases or to risk the model unexpectedly blocking an application. This approach implies a preference for governance through legal and command structures rather than through limitations imposed by the AI developer. The Pentagon is also engaging with other leading AI labs — OpenAI, Google, and xAI, makers of ChatGPT, Gemini, and Grok — and reports that these companies have shown greater flexibility, with at least one having agreed to the "all lawful purposes" standard even for classified environments.

Tensions escalated following a recent operation to capture Venezuela's NicolĆ”s Maduro, in which Claude, integrated through a partnership with Palantir, may have been involved. A senior Pentagon official claims an Anthropic executive inquired about Claude's usage in the raid, implying disapproval over "kinetic fire." Anthropic, however, vehemently denies discussing specific operational uses with the DoW or industry partners, maintaining that its conversations are strictly technical or pertain to its overarching Usage Policy.

Despite a $200 million contract and Claude being the first frontier AI model deployed on the Pentagon's classified networks, the military is now considering severing ties, while acknowledging the significant challenge of finding an "orderly replacement" given that "the other model companies are just behind" in specialized government applications. That admission underscores Claude's lead, whether in raw capability or in specialized integration, across certain critical national security applications. Internal dynamics at Anthropic, including CEO Dario Amodei's well-documented concerns about AI risks and engineers' disquiet over military collaborations, further complicate the firm's position. Despite the friction, Anthropic maintains its commitment to supporting U.S. national security, pointing to its pioneering role in providing AI models for classified networks and customized national security solutions.