Mallory

Legal Disputes Over AI Companies’ Access and Government Restrictions

Tags: privacy-surveillance-policy, ai-platform-security, trade-export-control, government-diplomatic-threat
Updated April 29, 2026 at 07:01 PM · 12 sources


The references do not describe a single cybersecurity incident or vulnerability. One article examines Anthropic’s lawsuit challenging a U.S. government decision to designate the company a national-security supply-chain risk and bar federal agencies and contractors from using its AI models after Anthropic refused to relax restrictions related to lethal autonomous warfare and mass surveillance. Another covers a separate court fight in which Perplexity AI won a temporary appellate stay of an injunction that would have blocked its Comet shopping agent from accessing Amazon accounts, after a judge found Amazon was likely to succeed on claims under the Computer Fraud and Abuse Act and California’s computer access law.

A third article is not about either dispute; it discusses Pentagon policy toward universities, arguing that Defense Secretary Pete Hegseth’s move to cut Defense Department academic ties with Harvard and other institutions could weaken U.S. competitiveness and force readiness. Because the materials concern distinct policy and legal controversies rather than one coherent cyber event, the set should not be treated as a unified incident report. It is also not fluff, since the content involves substantive legal and security-policy issues, including alleged unauthorized access to protected accounts and the use of national-security authorities against an AI vendor.

Timeline

  1. Apr 21, 2026

    OMB weighs restoring Anthropic access for civilian federal systems

By April 21, 2026, U.S. officials were reportedly discussing allowing federal civilian agencies renewed access to Anthropic technology while keeping the company excluded from military procurement. The move followed the court’s injunction against the civilian-domain designation and highlighted ongoing uncertainty over federal AI supply-chain governance.

  2. Apr 9, 2026

    Appeals court refuses to block Anthropic blacklist pending appeal

On April 9, 2026, a court denied Anthropic’s motion for a stay in its dispute with the government over continued restrictions on its AI technology during litigation. The ruling said the balance of equities favored the government because of national security and active military conflict concerns, while noting Anthropic had raised substantial issues warranting expedited review.

  3. Mar 27, 2026

    Judge blocks Pentagon blacklist of Anthropic

On March 27, 2026, U.S. District Judge Rita Lin granted Anthropic a preliminary injunction temporarily blocking the Trump administration from designating it a supply-chain risk and cutting off federal contracting access. The judge said the measures were likely unlawful and appeared punitive rather than grounded in legitimate national security concerns, while delaying the order for one week to allow the government to seek a stay.

  4. Mar 19, 2026

    DOJ defends Anthropic ban in San Francisco court filing

    In a federal court filing, Justice Department attorneys argued Anthropic’s ongoing control over model weights, guardrails, and tuning could let it alter or disable mission-critical defense AI systems, and said agencies must phase out its technology within 180 days.

  5. Mar 19, 2026

    Anthropic seeks preliminary injunction against federal ban

    Anthropic asked a federal court for a preliminary injunction against the Department of Defense, the White House, and other agencies, challenging the supply-chain-risk designation and the government-wide order to cease use of its technology.

  6. Mar 19, 2026

    Trump administration labels Anthropic a supply-chain risk

    The administration designated Anthropic a national security supply-chain risk and directed federal agencies to stop using its products after the company refused to remove Claude restrictions related to lethal autonomous warfare without human oversight and mass surveillance of Americans.

  7. Mar 17, 2026

    Appeals court temporarily pauses block on Perplexity agent

    A federal appeals court temporarily stayed the district court order that would have blocked Perplexity from using its AI-powered shopping agent on Amazon while the case proceeds.

  8. Mar 9, 2026

    Judge grants Amazon preliminary injunction against Perplexity

    On March 9, U.S. District Judge Maxine Chesney granted Amazon a preliminary injunction, finding the company was likely to succeed on claims under the Computer Fraud and Abuse Act and California’s Comprehensive Computer Data Access and Fraud Act.

  9. Nov 1, 2025

    Amazon sues Perplexity over Comet shopping agent access

Amazon filed suit against Perplexity in November 2025, alleging the Comet browser and its AI shopping agent accessed password-protected customer account areas without authorization, disguised automated activity as human browsing, and ignored repeated demands to stop.


Sources


5 more from sources including Ars Technica, Nextgov, CNET, theguardian.com, and GovInfoSecurity

Related Stories

U.S. Defense AI Policy Disputes Over Guardrails and Autonomous Weapons

The most coherent story is a widening **U.S. defense AI policy conflict** over how far military and national-security agencies should push artificial intelligence into weapons and related systems while reducing safeguards. Reporting on the Pentagon’s posture says the Defense Department is seeking major funding for autonomous systems and accelerating battlefield AI adoption even as experts warn that oversight, operational testing, and civilian-harm mitigation mechanisms are being weakened. A separate court filing shows that dispute has moved into litigation: the Trump administration is defending the Pentagon’s decision to blacklist **Anthropic** after the company refused to remove restrictions on use of its models for **autonomous weapons** or domestic surveillance, framing the issue as a supply-chain and contracting matter rather than retaliation. Other references are adjacent to the same broad policy debate but do not describe the same specific event. One is a discussion of **AI and nuclear command-and-control risks**, including U.S.-China agreement that AI should not decide nuclear use; it is relevant as background on military AI guardrails, but it is not about the Pentagon funding push or the Anthropic lawsuit itself. Another covers a **counter-drone laser** safety test at White Sands involving FAA coordination and automated shutdown behavior; despite its defense-technology focus, it concerns directed-energy testing rather than the policy and legal fight over AI guardrails, and should be excluded from the main story.

Yesterday
Policy and industry debate over AI safety, governance, and data protection

U.S. policymakers and industry leaders are escalating scrutiny of **AI safety and data protection**, with a particular focus on sensitive data flows and the adequacy of existing guardrails. In a Senate HELP Committee hearing, lawmakers questioned whether federal guardrails are needed to protect Americans’ healthcare data voluntarily uploaded to AI-enabled apps and wearables that may fall outside HIPAA coverage, raising concerns about liability, downstream data use, and integration into medical records; HHS noted it is collecting public input via a request for information on safe and effective AI deployment in healthcare. Separately, commentary on AI governance and safety argues competitive pressure among frontier AI labs can erode safety practices and that clearer antitrust guidance could enable cross-industry collaboration on safety standards without triggering enforcement risk. Tensions over AI “red lines” in national security use also became more public, as **Anthropic** CEO Dario Amodei accused **OpenAI** of misleading messaging about defense work amid reports that Anthropic’s DoD talks faltered over restrictions related to mass domestic surveillance and autonomous weapons, while OpenAI described its agreement as permitting “all lawful purposes” alongside stated prohibitions. Broader, non-incident reporting highlighted enterprise investment to support *agentic AI* (with many data leaders citing governance lagging AI adoption) and general concerns about deepfakes, opaque models, and societal risk; however, several items in the set were primarily newsletters, vendor/industry promotion, or general-interest AI commentary rather than a single, discrete cybersecurity incident or vulnerability disclosure.

1 month ago
Enterprise AI and Security Coverage Roundup (OpenAI–Pentagon Deal, AI Security Tools, and Governance Commentary)

Citizen Lab highlighted concerns about **OpenAI’s Pentagon contract**, noting expert skepticism that bulk user-data collection can be effectively ruled out and warning that feeding commercially available personal data into *opaque AI systems* can amplify harm through errors, bias, and weak accountability. Separately, CSO Online reported on OpenAI’s security-related initiatives, including a claim that **Codex Security** identified **11,000 “high-impact” bugs** in a month and a report that OpenAI plans to **acquire Promptfoo** to strengthen **AI agent security testing**. Most other items in the set are **opinion/feature or promotional content** rather than incident-driven threat intelligence: CIO and CSO Online ran general enterprise AI and security management pieces (e.g., “shadow AI” governance, identity decisioning, OT/IoT/zero trust challenges, cloud security culture/process issues, and pen-test automation lessons learned), while Red Canary published an **RSAC 2026 session guide**. One CSO Online headline referenced a **critical HPE Aruba CX switch flaw** enabling admin control without credentials, but the provided text does not include details sufficient to confirm it as the same story as the OpenAI items and it appears as a sidebar link rather than the primary subject of the referenced pages.

1 month ago
