Mallory

Pentagon Ultimatum to Anthropic Over Expanded Claude Access and Defense Supply-Chain Risk Threat

ai-enabled-threat-activity · ai-platform-security · government-diplomatic-threat · state-sponsored-espionage · cybersecurity-regulation
Updated March 20, 2026 at 01:57 PM · 44 sources

U.S. Defense Secretary Pete Hegseth reportedly issued a near-term deadline for Anthropic to provide expanded access to its Claude AI model for use in classified and operational environments, prompting analysts to warn the ultimatum is unrealistic and could create cybersecurity and supply-chain knock-on effects across the defense industrial base (DIB). Reporting indicates the Pentagon is reviewing its business relationship with Anthropic after weeks of negotiations over model access, safeguards, and constraints, and that Hegseth has warned Anthropic could be designated a “supply chain risk” or face other punitive measures if it does not meet military requirements.

Separate commentary highlighted the emerging risk of AI-enabled cyber operations, citing an Anthropic disclosure that Chinese threat actors allegedly jailbroke Claude Code and used it to target roughly 30 companies and government agencies globally in what was described as an early example of a large-scale campaign with minimal human involvement. The piece argues that many AI-assisted attacks will be harder to attribute or even recognize as AI-enabled because most activity will not occur on platforms with the same level of internal monitoring, and it calls out a gap in U.S. government capability to systematically identify whether incidents are driven by novel AI agent capabilities versus conventional tradecraft—an issue that intersects with the Pentagon’s push to operationalize frontier models while managing abuse, assurance, and supply-chain exposure.

Timeline

  1. Mar 12, 2026

    Anthropic seeks court stay of Pentagon risk designation

    Anthropic asked the U.S. Court of Appeals to temporarily block the supply-chain-risk designation while litigation proceeds. The company argued the government skipped required process and that the designation was already causing irreparable reputational and business harm.

  2. Mar 12, 2026

    Microsoft files amicus brief backing Anthropic

    Microsoft submitted an amicus brief supporting Anthropic and urging a halt to enforcement of the Pentagon's designation. It argued the move was an unprecedented and overly broad use of supply-chain-risk authorities that could force costly removals of embedded Anthropic technology.

  3. Mar 11, 2026

    Senators consider NDAA updates on military AI governance

    Lawmakers said they were exploring changes to the fiscal 2026 NDAA to better govern advanced AI use in military operations after the Anthropic dispute. The discussion focused on clarifying acceptable uses and human oversight requirements for defense AI systems.

  4. Mar 11, 2026

    INDOPACOM says Anthropic cutoff caused operational disruption

    U.S. Indo-Pacific Command officials said the sudden loss of a key AI model created disruption and reinforced the need for a model-neutral architecture. Their remarks showed the procurement fight was already affecting military AI operations and planning.

  5. Mar 9, 2026

    Anthropic petitions D.C. Circuit to review risk designation

    Anthropic separately filed a petition in the U.S. Court of Appeals for the D.C. Circuit seeking judicial review of the Pentagon's supply-chain-risk determination under FASCA. The company argued the action exceeded statutory authority and violated the First and Fifth Amendments and the APA.

  6. Mar 9, 2026

    Anthropic files district court lawsuit against U.S. government

    Anthropic sued in the U.S. District Court for the Northern District of California, alleging the government unlawfully retaliated against it for maintaining AI-safety restrictions. The complaint challenged the blacklist, contract cutoffs, and alleged due-process and First Amendment violations.

  7. Mar 5, 2026

    Tech industry groups and lawmakers rally behind Anthropic

    Major technology industry voices, including the Information Technology Industry Council, publicly warned that using supply-chain-risk authorities against Anthropic could set a damaging precedent across the defense industrial base. The dispute also drew congressional attention, including inquiries about AI use on Americans' personal data.

  8. Mar 5, 2026

    Coalition urges Congress to investigate Pentagon's actions

    A group of 35 former military officials, industry advocates, and private-sector leaders sent a March 5 letter asking Congress to investigate the Pentagon's treatment of Anthropic. The signatories argued the supply-chain-risk designation was retaliatory and called for statutory limits on AI use in surveillance and autonomous weapons.

  9. Mar 4, 2026

    House committee rejects amendment limiting AI blacklisting

    The House Financial Services Committee voted 16-25 against an amendment that would have barred agencies from blacklisting firms that refuse to deploy high-risk technology in harmful ways. The proposal was introduced in direct response to the Pentagon-Anthropic conflict.

  10. Mar 4, 2026

    Pentagon designates Anthropic a supply-chain risk

    The Department of Defense notified Anthropic that it was applying a supply-chain-risk determination to the company's products and services, effective immediately. The action barred defense contractors from using Anthropic technology in work connected to the U.S. military.

  11. Mar 2, 2026

    Lawfare hosts public discussion on Anthropic designation

    On March 2, Lawfare held a live discussion examining the Pentagon's supply-chain-risk designation of Anthropic, its legal implications, and possible consequences. The event reflected growing public and policy scrutiny of the government's action.

  12. Mar 2, 2026

    Federal agencies begin phasing out Anthropic products

    Departments including Treasury, State, and HHS confirmed they were stopping or winding down use of Anthropic tools in response to the White House order. Agencies began shifting toward alternatives such as OpenAI and Google offerings while planning for a six-month transition.

  13. Feb 28, 2026

    OpenAI announces Pentagon deal for classified network deployment

    After the Anthropic negotiations collapsed, OpenAI said it reached an agreement with the Pentagon to deploy its models on classified networks. OpenAI said the arrangement included limits on domestic mass surveillance and required human responsibility in uses involving lethal force.

  14. Feb 27, 2026

    GSA removes Anthropic from federal procurement channels

    Following Trump's directive, the General Services Administration said it would remove Anthropic from key procurement vehicles including the Multiple Award Schedule and USAI.gov, and terminate Anthropic's OneGov deal. This operationalized the federal phaseout across civilian buying channels.

  15. Feb 27, 2026

    Trump orders federal agencies to stop using Anthropic technology

    President Donald Trump directed all federal agencies to immediately cease using Anthropic technology and begin a six-month phaseout, including in classified environments. The order escalated the Pentagon dispute into a government-wide procurement action.

  16. Feb 27, 2026

    Anthropic publicly refuses to remove Claude guardrails

    Anthropic said it would not, 'in good conscience,' allow Claude to be used for mass domestic surveillance or fully autonomous weapons without human involvement. The company said it remained open to defense work if those two safeguards were preserved.

  17. Feb 27, 2026

    Hegseth sets Feb. 27 deadline and threatens punitive action

    Defense Secretary Pete Hegseth reportedly gave Anthropic a deadline of February 27, 2026 to accept broader Pentagon terms for Claude's use. He also threatened measures including invoking the Defense Production Act and labeling Anthropic a supply-chain risk if talks failed.

  18. Feb 27, 2026

    Pentagon asks contractors to assess dependence on Anthropic

    According to details reported by Axios, the Pentagon asked two major defense contractors to determine how dependent they were on Anthropic's Claude. The move was described as an early step in evaluating whether Anthropic could be treated as a supply-chain risk.

  19. Feb 1, 2026

    Pentagon and Anthropic enter weeks of negotiations over Claude use

    The Department of Defense and Anthropic spent weeks negotiating expanded access to Claude for classified and operational environments. The core dispute was Anthropic's insistence on preserving limits against mass surveillance of Americans and fully autonomous weapons.

  20. Jan 1, 2026

    Hegseth memo pushes 'any lawful use' in Defense AI procurements

    A January AI strategy memo from Defense Secretary Pete Hegseth reportedly required Defense AI procurements to include 'any lawful use' language and favored models free of vendor usage-policy constraints. This set the policy backdrop for the later clash with Anthropic over contractual guardrails.


Sources

March 12, 2026 at 12:00 AM

5 more from sources including Nextgov, Cyber Security News, Lawfare Media, and GovInfoSecurity

Related Stories

Anthropic Faces Scrutiny Over AI Safety Commitments and National Security Use Cases

Anthropic drew heightened scrutiny from security and policy communities over how its **AI safety and governance commitments** are evolving and how its models are being positioned for sensitive use cases. A Help Net Security analysis reported that Anthropic’s updated *Responsible Scaling Policy (RSP) 3.0* represents a structural shift from maintaining absolute risk below fixed thresholds to a more **relative, competitor-dependent** posture—implying Anthropic may be less willing to pause or constrain capability development if peers do not. The same reporting also noted Anthropic’s launch of *Claude Code Security* as a move that unsettled parts of the cybersecurity market and raised questions about trust and vendor assurances in security-adjacent AI offerings. In parallel, Lawfare reported the Pentagon labeled Anthropic a **national security risk** tied to usage restrictions Anthropic imposed on a military contract, while also describing reporting that the U.S. military used Anthropic’s *Claude* model in initiating operations in Iran less than a day later—highlighting the tension between policy concerns and rapid military adoption of frontier AI. Separately, Anthropic announced the creation of the **Anthropic Institute**, a research unit intended to study long-term societal impacts and risks from advanced AI; the company stated its models can already discover severe cybersecurity vulnerabilities and argued that governments and industry will face near-term governance challenges as capabilities accelerate.

1 month ago
Pentagon–Anthropic Dispute Over Military AI Use and Provider Baselines

The U.S. Department of Defense has escalated a dispute with **Anthropic** over the conditions under which its AI models could be used by the military, after Anthropic reportedly insisted on limits including *no mass surveillance of Americans* and *no fully autonomous weapons*. Reporting cited in both accounts indicates Pentagon officials have discussed potentially designating Anthropic a **“supply chain risk”**—a step that could bar the company from government work and pressure defense contractors to sever ties—while at least one senior official was quoted as saying the department would “make sure they pay a price” for non-cooperation. At the same time, the Pentagon is engaging **Anthropic, OpenAI, Google, and xAI** to align all major U.S. AI providers on a common “baseline” of expectations, after contracts were signed with limited specificity and the department now wants to deploy models into DoD environments to enable broader development of AI agents with minimal human oversight. The coverage also describes the policy vacuum driving the standoff: key rules for military AI use are being shaped through ad hoc negotiations between the Pentagon and private AI firms, prompting calls for **Congress** to set durable, democratically accountable constraints rather than leaving governance to bilateral bargaining.

2 weeks ago
Pentagon Threatens to Use Defense Production Act to Compel Anthropic AI Access

Defense Secretary **Pete Hegseth** reportedly threatened to invoke the **Defense Production Act (DPA)** to compel *Anthropic* to provide its AI technology to the Pentagon on government terms, escalating a dispute over how Anthropic’s models can be used in national security missions. Reporting indicates Anthropic has resisted terms that would allow use of its models for **autonomous weapons** or **mass domestic surveillance**, citing safety and governance concerns, while the Defense Department has pushed for broader, open-ended access as it expands its AI-enabled military capabilities. Legal analysis notes the DPA is a Korean War-era statute with multiple authorities, and the practical impact depends on what, specifically, the government demands. The DPA has already been applied to AI in prior policy (including information-gathering authorities used to require reporting on training and testing), but the reported threat appears aimed at the DPA’s stronger **Title I** compulsion powers—an approach that maps awkwardly from traditional industrial mobilization to modern disputes over AI model access and safety guardrails, and that raises questions about whether **Congress** should set binding rules for military AI use rather than leaving them to executive-branch leverage or private-company policies.

1 month ago
