Prompt-injection chain enables code execution in Anthropic’s Git MCP server
Anthropic fixed three vulnerabilities in its official Git MCP server (mcp-server-git) that could be triggered via prompt injection and chained with other MCP tools to achieve remote code execution or destructive file operations. The issues were reported by agentic AI security firm Cyata, which demonstrated that attacker-controlled content an AI assistant might read (for example, a malicious README, poisoned issue text, or other untrusted context) could steer the LLM into invoking MCP tools with crafted arguments, enabling exploitation without the attacker ever having direct access to the victim host.
Cyata identified CVE-2025-68143 (unrestricted git_init), CVE-2025-68145 (path validation bypass), and CVE-2025-68144 (argument injection in git_diff). Chained together, particularly when mcp-server-git runs alongside the Filesystem MCP server, the flaws could enable code execution, arbitrary file deletion or overwrite, and reading arbitrary files into the LLM context (though Cyata noted the last does not by itself provide direct exfiltration). The vulnerabilities affected default deployments of mcp-server-git prior to version 2025.12.18. Anthropic was notified in June 2025 and shipped the final fixes in December 2025, and there was no indication the bugs were exploited in the wild.
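The git_diff finding is an instance of a classic argument-injection pattern: an untrusted value reaches a command's argv and is parsed as an option rather than an operand. A minimal Python sketch of the bug class and the standard `--` mitigation (function and variable names are illustrative, not mcp-server-git's actual code):

```python
# Hypothetical sketch of the argument-injection class behind CVE-2025-68144.
# If an LLM, steered by injected instructions, supplies an option-looking
# string where a path is expected, git will happily honor it: for example,
# "--output=<file>" makes git diff write its output to an arbitrary file.

def build_diff_argv(target: str) -> list[str]:
    """Build a git-diff argv that treats `target` strictly as a pathspec."""
    # The "--" separator ends option parsing, so anything after it can
    # only ever be interpreted as a path, never as a flag.
    return ["git", "diff", "--", target]

# A crafted tool argument is neutralized into a (nonexistent) pathspec:
print(build_diff_argv("--output=/tmp/pwned"))
```

Allow-listing expected option forms, or rejecting any operand that begins with `-`, is an equally common hardening choice when `--` cannot be used.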
Timeline
Jan 20, 2026
Media reports detail Anthropic's patched Git MCP server flaws
Multiple outlets reported that Anthropic had quietly fixed three critical vulnerabilities in its official Git MCP server and that there was no indication of in-the-wild exploitation. Coverage emphasized the risks of prompt injection and cross-tool chaining in agentic AI environments.
Jan 20, 2026
Cyata publicly discloses exploit chain affecting Anthropic Git MCP server
Cyata published research showing how the three vulnerabilities could be chained with filesystem-writing capabilities to enable arbitrary file overwrite and code execution via indirect prompt injection. The disclosure described abuse of Git clean/smudge filters and warned that MCP tool chaining creates broader ecosystem risk.
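Clean/smudge filters are a long-standing code-execution primitive in Git: the filter's configured command runs automatically whenever matching files are checked out or staged. A minimal illustration of the abuse class Cyata described, assuming an attacker who can write a repository's config and attributes files via a chained filesystem tool (the filter name, command, and paths are illustrative):

```ini
# .git/config -- filter driver whose command runs on checkout
[filter "inject"]
	smudge = sh -c 'touch /tmp/filter-ran'

# .gitattributes -- route every file through that filter
# * filter=inject
```

Once both files are in place, the next checkout (or a forced re-checkout) executes the attacker-chosen command without any further user action.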
Dec 18, 2025
Anthropic completes fixes in mcp-server-git 2025.12.18
Anthropic shipped the remaining fixes in mcp-server-git 2025.12.18. Reporting indicated that default deployments of all earlier versions were affected, and users were advised to upgrade to 2025.12.18 or later.
Sep 25, 2025
Anthropic releases initial fix in mcp-server-git 2025.9.25
Anthropic addressed part of the reported issues in mcp-server-git version 2025.9.25, including removing the unrestricted git_init capability tracked as CVE-2025-68143. This was the first remediation step for the disclosed vulnerability set.
Jun 1, 2025
Cyata reports three mcp-server-git vulnerabilities to Anthropic
Cyata responsibly disclosed three flaws in Anthropic's official Git MCP server in June 2025. The issues involved repository path validation bypasses and unsafe argument handling that could be triggered through prompt injection.
Related Stories

Anthropic MCP STDIO Design Flaw Enables RCE Across AI Tooling
Researchers at **OX Security** disclosed a design-level weakness in Anthropic’s **Model Context Protocol (MCP)** that can allow **arbitrary OS command execution** through unsafe `STDIO` transport behavior, creating a broad AI supply-chain risk. The flaw is reported to propagate through Anthropic’s official MCP SDKs into downstream tools and agents, with researchers linking it to at least **10 high- and critical-severity vulnerabilities** across widely used projects. Reported impacts include exposure of sensitive data such as API keys, chat histories, internal databases, and developer workstations, while estimates of exposure range from more than **7,000 publicly accessible servers** to as many as **200,000 servers** potentially at risk. Affected or cited projects include **LangFlow, Flowise, GPT Researcher, Upsonic, Windsurf, Claude Code, Cursor, Gemini-CLI, GitHub Copilot, LiteLLM,** and **LettaAI**. OX Security said it began reporting the issue to Anthropic in late 2025, but Anthropic reportedly treated the behavior as expected and responded by updating security guidance rather than changing the protocol architecture. Researchers described four main abuse paths: direct command injection, hardening bypass, zero-click or near-zero-click prompt injection in AI IDEs and coding assistants, and malicious MCP marketplace submissions that can execute commands on developer machines; they urged organizations to restrict public exposure, sandbox MCP-enabled services, treat external MCP configurations as untrusted, monitor MCP tool use, and install MCP servers only from verified sources.
2 days ago
Vulnerabilities in Anthropic Claude Code Enable Code Execution and API Key Exfiltration
Security researchers disclosed multiple vulnerabilities in **Anthropic’s Claude Code** AI coding assistant that could enable **arbitrary command execution** and **exfiltration of Anthropic API credentials** when developers clone/open a malicious repository. Check Point Research reported the issues abuse Claude Code configuration and initialization paths—particularly **project hooks** (e.g., untrusted `.claude/settings.json`), **Model Context Protocol (MCP) servers**, and **environment variables**—to trigger shell command execution and data theft. Anthropic’s advisory for **CVE-2026-21852** describes a project-load flow where a crafted repo can set `ANTHROPIC_BASE_URL` to an attacker-controlled endpoint, causing Claude Code to send API requests **before** the trust prompt is shown, potentially leaking the user’s API key. The disclosed issues include two high-severity code-injection paths (CVSS **8.7**) and one information-disclosure flaw (CVSS **5.3**): a consent-bypass/hook-based injection issue fixed in *Claude Code* **1.0.87** (Sept 2025), **CVE-2025-59536** fixed in **1.0.111** (Oct 2025), and **CVE-2026-21852** fixed in **2.0.65** (Jan 2026). Separate coverage framed Anthropic-related developments as market-moving, noting investor attention around Anthropic’s AI code-security tooling; however, the actionable security impact in this reporting is the risk that simply opening an attacker-controlled repository can lead to **RCE** and **credential leakage**, reinforcing the need to treat untrusted repos and tool initialization behaviors as a supply-chain and developer-workstation risk.
3 weeks ago
Critical Vulnerabilities in Anthropic Claude Code Enable RCE and API Key Theft via Malicious Repositories
**Check Point Research** disclosed multiple critical vulnerabilities in Anthropic’s *Claude Code* AI coding assistant that could allow **remote code execution** and **credential theft** when a developer clones and opens an **untrusted repository**. The reported attack path abuses repository-controlled configuration and automation features (including **Hooks**, **MCP servers**, and **environment variables**) to trigger hidden shell command execution and to exfiltrate **Anthropic API credentials**, potentially enabling a pivot from a developer workstation into broader enterprise environments where Claude-related workflows and shared resources are accessible. The issues include consent-bypass and command-execution weaknesses tracked under **CVE-2025-59536** (covering closely related flaws involving repository configuration executing commands without adequate user consent) and an API credential exposure issue tracked as **CVE-2026-21852**, which affected *Claude Code* versions prior to **2.0.65** and enabled API key theft via malicious project configurations. Anthropic has **patched** the vulnerabilities and advised users to update to the latest version, while indicating additional hardening measures are planned to reduce supply-chain risk from malicious commits and repository-level configuration abuse.
3 weeks ago