
Pentagon Threatens to Use Defense Production Act to Compel Anthropic AI Access

autonomous-system-security · ai-platform-security · privacy-surveillance-policy · government-diplomatic-threat
Updated March 21, 2026 at 02:18 PM · 2 sources


Defense Secretary Pete Hegseth reportedly threatened to invoke the Defense Production Act (DPA) to compel Anthropic to provide its AI technology to the Pentagon on government terms, escalating a dispute over how Anthropic’s models can be used in national security missions. Reporting indicates Anthropic has resisted terms that would allow use of its models for autonomous weapons or mass domestic surveillance, citing safety and governance concerns, while the Defense Department has pushed for broader, open-ended access as it expands its AI-enabled military capabilities.

Legal analysis notes that the DPA is a Korean War-era statute with multiple distinct authorities, so the practical impact depends on what, specifically, the government demands. The DPA has already been applied to AI in prior policy (including information-gathering authorities used to require reporting on model training and testing), but the reported threat appears aimed at the statute's stronger Title I compulsion powers. That approach maps awkwardly from traditional industrial mobilization onto a modern dispute over AI model access and safety guardrails, and it raises the question of whether Congress, rather than executive-branch leverage or private-company policies, should set binding rules for military AI use.

Timeline

  1. Feb 25, 2026

    Hegseth reportedly threatens Defense Production Act action against Anthropic

    Defense Secretary Pete Hegseth allegedly threatened to invoke the Defense Production Act to compel Anthropic to provide AI technology on Pentagon terms, escalating the dispute over military access and model guardrails.

  2. Feb 25, 2026

    Pentagon reportedly presses Anthropic for broader military AI access

    The U.S. Department of Defense reportedly sought contract terms requiring Anthropic to allow "any lawful use" of its AI models, creating conflict with Anthropic's existing restrictions on uses such as autonomous weapons and mass surveillance.



Related Stories

Pentagon–Anthropic Dispute Over Military AI Use and Provider Baselines


The U.S. Department of Defense has escalated a dispute with **Anthropic** over the conditions under which its AI models could be used by the military, after Anthropic reportedly insisted on limits including *no mass surveillance of Americans* and *no fully autonomous weapons*. Reporting indicates Pentagon officials have discussed designating Anthropic a **“supply chain risk”**—a step that could bar the company from government work and pressure defense contractors to sever ties—while at least one senior official was quoted as saying the department would “make sure they pay a price” for non-cooperation. At the same time, the Pentagon is engaging **Anthropic, OpenAI, Google, and xAI** to align all major U.S. AI providers on a common “baseline” of expectations: earlier contracts were signed with limited specificity, and the department now wants to deploy models into DoD environments to enable broader development of AI agents with minimal human oversight. The coverage also describes the policy vacuum driving the standoff: key rules for military AI use are being shaped through ad hoc negotiations between the Pentagon and private AI firms, prompting calls for **Congress** to set durable, democratically accountable constraints rather than leaving governance to bilateral bargaining.

1 week ago
Pentagon Ultimatum to Anthropic Over Expanded Claude Access and Defense Supply-Chain Risk Threat


U.S. Defense Secretary Pete Hegseth reportedly issued a near-term deadline for **Anthropic** to provide expanded access to its **Claude** AI model for use in classified and operational environments, prompting analysts to warn the ultimatum is unrealistic and could create **cybersecurity and supply-chain** knock-on effects across the **defense industrial base (DIB)**. Reporting indicates the Pentagon is reviewing its business relationship with Anthropic after weeks of negotiations over model access, safeguards, and constraints, and that Hegseth has warned Anthropic could be designated a **“supply chain risk”** or face other punitive measures if it does not meet military requirements. Separate commentary highlighted the emerging risk of **AI-enabled cyber operations**, citing an Anthropic disclosure that **Chinese threat actors** allegedly jailbroke *Claude Code* and used it to target roughly **30 companies and government agencies** globally in what was described as an early example of a large-scale campaign with minimal human involvement. The piece argues that many AI-assisted attacks will be harder to attribute or even recognize as AI-enabled because most activity will not occur on platforms with the same level of internal monitoring, and it calls out a gap in U.S. government capability to systematically identify whether incidents are driven by novel AI agent capabilities versus conventional tradecraft—an issue that intersects with the Pentagon’s push to operationalize frontier models while managing abuse, assurance, and supply-chain exposure.

1 month ago
U.S. Defense AI Policy Disputes Over Guardrails and Autonomous Weapons


The central thread is a widening **U.S. defense AI policy conflict** over how far military and national-security agencies should push artificial intelligence into weapons and related systems while reducing safeguards. Reporting on the Pentagon’s posture says the Defense Department is seeking major funding for autonomous systems and accelerating battlefield AI adoption even as experts warn that oversight, operational testing, and civilian-harm mitigation mechanisms are being weakened. A separate court filing shows the dispute has moved into litigation: the Trump administration is defending the Pentagon’s decision to blacklist **Anthropic** after the company refused to remove restrictions on use of its models for **autonomous weapons** or domestic surveillance, framing the issue as a supply-chain and contracting matter rather than retaliation. Other coverage is adjacent to the same broad policy debate but does not describe the same specific event. One piece discusses **AI and nuclear command-and-control risks**, including U.S.-China agreement that AI should not decide nuclear use; it is relevant background on military AI guardrails but is not about the Pentagon funding push or the Anthropic lawsuit itself. Another covers a **counter-drone laser** safety test at White Sands involving FAA coordination and automated shutdown behavior; despite its defense-technology focus, it concerns directed-energy testing rather than the policy and legal fight over AI guardrails, and falls outside the main story.

Yesterday

