
Pentagon–Anthropic Dispute Over Military AI Use and Provider Baselines

ai-platform-security · privacy-surveillance-policy · government-diplomatic-threat · cybersecurity-regulation
Updated April 21, 2026 at 12:01 PM · 13 sources


The U.S. Department of Defense has escalated a dispute with Anthropic over the conditions under which its AI models could be used by the military, after Anthropic reportedly insisted on limits including no mass surveillance of Americans and no fully autonomous weapons. Reporting indicates Pentagon officials have discussed designating Anthropic a “supply chain risk,” a step that could bar the company from government work and pressure defense contractors to sever ties, while at least one senior official was quoted as saying the department would “make sure they pay a price” for non-cooperation.

At the same time, the Pentagon is engaging Anthropic, OpenAI, Google, and xAI to align all major U.S. AI providers on a common “baseline” of expectations. The original contracts were signed with limited specificity, and the department now wants to deploy models into DoD environments to enable broader development of AI agents with minimal human oversight. The coverage also describes the policy vacuum driving the standoff: key rules for military AI use are being shaped through ad hoc negotiations between the Pentagon and private AI firms, prompting calls for Congress to set durable, democratically accountable constraints rather than leaving governance to bilateral bargaining.

Timeline

  1. Apr 19, 2026

    Axios reports NSA actively using Anthropic Mythos despite blacklist

    Axios reported that as of April 19, 2026, the NSA was actively using Anthropic's Mythos Preview model despite the Pentagon's supply-chain-risk designation and broader federal phase-out of Anthropic technology. The report suggested Anthropic use may extend elsewhere in the department, highlighting a contradiction between official restrictions and operational adoption.

  2. Apr 10, 2026

    Courts let Anthropic blacklist stand but narrow parts of its application

    By April 2026, a federal appeals court in Washington allowed the Pentagon's supply-chain-risk blacklist of Anthropic to remain in effect, while a federal judge in California granted Anthropic partial relief limiting how broadly the designation could be applied. The rulings marked a new stage in the litigation beyond earlier hearings criticizing the government's actions.

  3. Apr 8, 2026

    UK government reportedly courts Anthropic amid U.S. dispute

    Amid Anthropic's escalating conflict with the U.S. government, the UK government under Prime Minister Keir Starmer reportedly moved to attract the company with proposals tied to expansion and a possible listing in London. The outreach framed the UK as a strategic alternative while U.S. litigation over the Pentagon designation continued.

  4. Mar 25, 2026

    Judge suggests Anthropic designation may be unconstitutional retaliation

    At a March 2026 hearing, Judge Rita Lin said the Pentagon's treatment of Anthropic appeared aimed at crippling the company and could amount to unconstitutional First Amendment retaliation rather than a genuine security measure. Government counsel also reportedly acknowledged the Defense Department had not followed required procedures, including briefing Congress and considering less restrictive alternatives.

  5. Mar 24, 2026

    Federal judge calls Pentagon's Anthropic ban 'troubling'

    In federal court proceedings over Anthropic's challenge to the Pentagon ban, a judge reportedly described the government's action as 'troubling,' signaling judicial skepticism toward the rationale or process behind the restriction. The remark marked a new stage in the legal fight following the DOJ's defense of the ban.

  6. Mar 19, 2026

    DOJ defends Anthropic ban in court filing

    In response to Anthropic's request for a preliminary injunction, Justice Department attorneys argued in court that the action was based on operational and supply-chain security risks from vendor control over model weights, guardrails, and behavior, not retaliation for the company's AI safety views.

  7. Mar 19, 2026

    DoD designates Anthropic a supply-chain risk and sets 180-day transition

    The Department of Defense designated Anthropic as a supply-chain risk for national security environments, barring it from supplying AI systems there and requiring agencies to transition off its models within 180 days.

  8. Mar 19, 2026

    Administration orders Anthropic technology phased out of federal systems

    The Trump administration directed agencies to phase out Anthropic technology from federal systems, citing concerns that the company could retain ongoing control over deployed models in sensitive environments.

  9. Feb 20, 2026

    DoD reportedly threatens Anthropic with supply-chain-risk designation

    By February 2026, the dispute had escalated to reported Pentagon threats to label Anthropic a supply chain risk, a step that could cut it off from government contracts and pressure defense contractors to sever ties.

  10. Feb 18, 2026

    Pentagon moves to align AI vendors to a common baseline

    The Defense Department said it was in active discussions with Anthropic, OpenAI, Google, and xAI to standardize expectations for deploying frontier models on Pentagon systems, including internal AI agents and pilots with minimal human oversight for lawful use cases.

  11. Feb 18, 2026

    Anthropic rejects Pentagon term allowing AI use for any lawful purpose

    During contract discussions, Anthropic refused a Pentagon term that would have allowed its AI systems to be used for any lawful purpose, citing internal restrictions on uses such as mass surveillance of Americans and fully autonomous weapons.

  12. Jan 3, 2026

    Anthropic tools reportedly used in planning for Venezuela raid

    According to later reporting, Pentagon personnel used Anthropic tools in planning the January 3 raid into Venezuela, and Anthropic reportedly learned of that military targeting-related use only after the fact. The disclosure provides a concrete example of operational use that fed later concerns about reliability, vendor control, and acceptable military applications.

  13. Jul 1, 2025

    DoD signs AI contracts with Anthropic, OpenAI, Google, and xAI

    Over the summer of 2025, the Department of Defense signed contracts with Anthropic, OpenAI, Google, and xAI with limited specificity, setting up later disputes over acceptable military uses and model controls.


Sources


Includes reporting from koreatimes.co, Nextgov, DefenseScoop, qz, and ms.now, among others.

Related Stories

Pentagon Threatens to Use Defense Production Act to Compel Anthropic AI Access

Defense Secretary **Pete Hegseth** reportedly threatened to invoke the **Defense Production Act (DPA)** to compel *Anthropic* to provide its AI technology to the Pentagon on government terms, escalating a dispute over how Anthropic’s models can be used in national security missions. Reporting indicates Anthropic has resisted terms that would allow use of its models for **autonomous weapons** or **mass domestic surveillance**, citing safety and governance concerns, while the Defense Department has pushed for broader, open-ended access as it expands its AI-enabled military capabilities. Legal analysis notes the DPA is a Korean War-era statute with multiple authorities, and the practical impact depends on what, specifically, the government demands. The DPA has already been applied to AI in prior policy (including information-gathering authorities used to require reporting on training and testing), but the reported threat appears aimed at the DPA’s stronger **Title I** compulsion powers—an approach that maps awkwardly from traditional industrial mobilization to modern disputes over AI model access and safety guardrails, and that raises questions about whether **Congress** should set binding rules for military AI use rather than leaving them to executive-branch leverage or private-company policies.

1 month ago
U.S. Defense AI Policy Disputes Over Guardrails and Autonomous Weapons

A widening **U.S. defense AI policy conflict** centers on how far military and national-security agencies should push artificial intelligence into weapons and related systems while reducing safeguards. Reporting on the Pentagon’s posture says the Defense Department is seeking major funding for autonomous systems and accelerating battlefield AI adoption even as experts warn that oversight, operational testing, and civilian-harm mitigation mechanisms are being weakened. A court filing shows the dispute has moved into litigation: the Trump administration is defending the Pentagon’s decision to blacklist **Anthropic** after the company refused to remove restrictions on use of its models for **autonomous weapons** or domestic surveillance, framing the issue as a supply-chain and contracting matter rather than retaliation. Adjacent coverage provides background rather than the same event: a discussion of **AI and nuclear command-and-control risks**, including U.S.-China agreement that AI should not decide nuclear use, and a **counter-drone laser** safety test at White Sands involving FAA coordination and automated shutdown behavior, which concerns directed-energy testing rather than the policy and legal fight over AI guardrails.

Yesterday
Pentagon Ultimatum to Anthropic Over Expanded Claude Access and Defense Supply-Chain Risk Threat

U.S. Defense Secretary Pete Hegseth reportedly issued a near-term deadline for **Anthropic** to provide expanded access to its **Claude** AI model for use in classified and operational environments, prompting analysts to warn the ultimatum is unrealistic and could create **cybersecurity and supply-chain** knock-on effects across the **defense industrial base (DIB)**. Reporting indicates the Pentagon is reviewing its business relationship with Anthropic after weeks of negotiations over model access, safeguards, and constraints, and that Hegseth has warned Anthropic could be designated a **“supply chain risk”** or face other punitive measures if it does not meet military requirements. Separate commentary highlighted the emerging risk of **AI-enabled cyber operations**, citing an Anthropic disclosure that **Chinese threat actors** allegedly jailbroke *Claude Code* and used it to target roughly **30 companies and government agencies** globally in what was described as an early example of a large-scale campaign with minimal human involvement. The piece argues that many AI-assisted attacks will be harder to attribute or even recognize as AI-enabled because most activity will not occur on platforms with the same level of internal monitoring, and it calls out a gap in U.S. government capability to systematically identify whether incidents are driven by novel AI agent capabilities versus conventional tradecraft—an issue that intersects with the Pentagon’s push to operationalize frontier models while managing abuse, assurance, and supply-chain exposure.

1 month ago

