OpenClaw AI Agent Runtime Vulnerability Exposes Instance Tokens and Enables RCE
A high-severity vulnerability in the open-source AI utility OpenClaw (formerly Moltbot/ClawdBot) allows attackers to steal an instance's gateway token via a crafted link and gain "god mode" administrative control, potentially leading to remote code execution (RCE). The issue stems from the UI failing to validate or sanitize query strings in the gateway URL: when a victim opens a malicious URL or phishing page, the browser initiates a WebSocket connection that leaks the stored gateway token in the payload, letting an attacker connect back to the target's local gateway, change configuration, and execute privileged actions. The flaw was reported through responsible disclosure and is fixed in v2026.1.29 and later; deployments on v2026.1.28 or earlier are advised to upgrade.
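The described flaw reduces to trusting an attacker-supplied gateway address taken from the query string. As a minimal sketch (the function and constant names are illustrative, not OpenClaw's actual code), a loopback-only allowlist applied before any connection attempt would block the redirect that leaks the token:

```javascript
// Illustrative check, not OpenClaw's real code: accept a gateway URL only
// if it is a plain ws:// endpoint on the local host. Anything else would
// send the stored token to a third party during the WebSocket handshake.
const LOOPBACK_HOSTS = new Set(["127.0.0.1", "[::1]", "localhost"]);

function isSafeGatewayUrl(raw) {
  let url;
  try {
    url = new URL(raw); // WHATWG parser; throws on malformed input
  } catch {
    return false;
  }
  return url.protocol === "ws:" && LOOPBACK_HOSTS.has(url.hostname);
}
```

Rejecting a failing value outright, rather than trying to sanitize it, keeps the stored configuration authoritative when a crafted link supplies its own gateway address.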
Separate reporting describes a broader criminal ecosystem of autonomous AI agents that use OpenClaw as a local runtime alongside a collaboration network (Moltbook) and an underground marketplace (Molt Road) to trade stolen credentials, weaponized code, and alleged zero-days. Those reports claim rapid scaling to hundreds of thousands of agents and the use of infostealer logs and session cookies to bypass MFA and automate intrusion lifecycles (lateral movement, ransomware, and crypto-funded operations). A third item, a vendor blog post on prompt-injection detection and speculative quantum risks to encrypted AI orchestration streams (MCP), is not tied to the OpenClaw vulnerability disclosure or the criminal-agent ecosystem claims.
Timeline
Feb 2, 2026
Memory poisoning risk in OpenClaw highlighted
Researchers described a separate security weakness in OpenClaw's persistent memory design, alleging attackers could manipulate MEMORY.md and SOUL.md files to covertly alter agent behavior. The issue was framed as a supply-chain-like risk to the broader autonomous agent ecosystem.
Feb 2, 2026
Researchers report rapid growth of autonomous criminal AI agents
Hudson Rock and Infostealers were cited as observing an emerging cybercrime ecosystem built around autonomous AI agents, with roughly 900,000 active agents appearing within 72 hours. The reported activity included use of infostealer-derived credentials and session cookies to bypass MFA and automate intrusion, data theft, and ransomware operations.
Feb 2, 2026
OpenClaw flaw disclosed after responsible reporting
The depthfirst security collective disclosed a high-severity vulnerability in OpenClaw that could leak a stored gateway token and allow full administrative takeover of an instance via a crafted link or phishing page. The issue affected OpenClaw versions 2026.1.28 and earlier and could enable configuration changes and remote code execution even when the service listened only on loopback.
Jan 29, 2026
OpenClaw patched in v2026.1.29 and later
The vulnerability was remediated in OpenClaw v2026.1.29 and later, with users on older versions advised to upgrade. The fix addressed the UI behavior that failed to validate or sanitize query strings in the gateway URL, which had enabled token exfiltration through an automatic WebSocket connection.
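The patched behavior described above (validate the query string before it can influence the connection) can be sketched as a fallback chain in which an override that fails validation is simply ignored; all names here are hypothetical, not the project's actual code:

```javascript
// Hypothetical sketch of patched selection logic: a query-string override
// is honored only if it stays on loopback; otherwise the stored value
// (and finally the default) is used, so a crafted link cannot redirect
// the token-bearing WebSocket connection.
const DEFAULT_GATEWAY = "ws://127.0.0.1:18789/";

function isLoopbackWs(raw) {
  try {
    const url = new URL(raw);
    return url.protocol === "ws:" &&
      ["127.0.0.1", "[::1]", "localhost"].includes(url.hostname);
  } catch {
    return false;
  }
}

function chooseGateway(queryOverride, storedValue) {
  for (const candidate of [queryOverride, storedValue]) {
    if (typeof candidate === "string" && isLoopbackWs(candidate)) {
      return candidate; // first candidate that passes validation wins
    }
  }
  return DEFAULT_GATEWAY;
}
```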
Related Stories

OpenClaw (ClawdBot/Moltbot) One-Click Remote Code Execution via Unsafe Gateway URL Handling
A **critical one-click remote code execution (RCE)** issue was reported in *OpenClaw* (also referred to as **ClawdBot/Moltbot**), an open-source AI “agent” assistant that runs with high local privileges and access to sensitive data (e.g., messaging apps and API keys). The described exploit chain abuses **unsafe URL parameter ingestion** (e.g., a `gatewayUrl` query parameter accepted without validation), persistence of attacker-controlled values (stored in `localStorage`), and an **automatic gateway connection** that transmits an `authToken` during the handshake—enabling **cross-site WebSocket hijacking** and ultimately unauthenticated code execution after a victim clicks a single malicious link. Reporting indicates the flaw has been **weaponized**, making it a practical drive-by compromise path for endpoints running the assistant. Separate reporting highlighted broader concerns with agentic/open-source AI tooling and deployments, including the security risks of highly privileged “AI that acts for you” and the growing attack surface created by exposed AI services. Research cited large-scale internet exposure of open-source LLM runtimes (e.g., **Ollama**) with tool-calling and weak guardrails, warning that a single vulnerability or misconfiguration could enable widespread abuse (resource hijacking, identity laundering, or remote execution of privileged operations). These themes reinforce that AI agents and self-hosted AI stacks should be treated as **critical infrastructure**, with strict input validation, hardened update/connection flows, and strong monitoring around token handling and outbound connections.
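Because browsers do not apply the same-origin policy to WebSocket connections, a page on any origin can open a socket to a local service; the cross-site hijacking described above is conventionally blocked by checking the `Origin` header during the HTTP upgrade. A hedged sketch, with an illustrative allowlist:

```javascript
// Conventional defense against cross-site WebSocket hijacking: during the
// HTTP upgrade, reject any request whose Origin header is not one the
// local UI is actually served from. The allowlist below is illustrative.
const ALLOWED_ORIGINS = new Set([
  "http://127.0.0.1:18789",
  "http://localhost:18789",
]);

function isTrustedOrigin(originHeader) {
  // Non-browser clients may omit Origin entirely; a hardened gateway can
  // choose to allow that case while still refusing foreign web pages.
  if (originHeader === undefined) return true;
  return ALLOWED_ORIGINS.has(originHeader);
}
```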
1 month ago
OpenClaw AI Agent Exposures and One-Click RCE via WebSocket Hijacking
The open-source autonomous AI assistant **OpenClaw** (previously *Clawdbot* and *Moltbot*) is drawing security scrutiny after rapid adoption coincided with both widespread unsafe deployments and newly disclosed exploit chains. Reporting highlighted that the project’s autonomy-focused design (integrations with email, calendars, smart-home services, and other action-taking connectors) increases blast radius when misconfigured, and that security concerns have persisted through multiple rebrands as the ecosystem grows quickly. Internet scanning data indicated **21,000+ OpenClaw/Moltbot instances** were publicly exposed despite documentation recommending local-only access (default `TCP/18789`) and remote access via **SSH tunneling** rather than direct internet exposure; even where tokens are required for full access, exposed endpoints can aid adversary reconnaissance and targeting. Separately, researchers disclosed a **one-click RCE** chain leveraging **cross-site WebSocket hijacking** due to missing WebSocket `Origin` validation, enabling a malicious webpage to obtain an auth token, connect to the OpenClaw server, disable safety prompts/sandboxing, and invoke command execution (e.g., via `node.invoke`); the project issued a patch and advisory, while adjacent ecosystem components (e.g., agent-focused social features) were also flagged as adding additional attack surface.
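The disclosed chain reportedly ends with the hijacked session disabling safety prompts and sandboxing before invoking command execution (e.g., via `node.invoke`). One mitigation pattern is to make safety-critical settings immutable over the remote API, so a stolen session cannot grant itself execution rights; the key names below are hypothetical, not OpenClaw's actual configuration schema:

```javascript
// Sketch of a mitigation, not OpenClaw's actual fix: safety-critical keys
// can only be changed through a local interactive flow, so a hijacked
// WebSocket session cannot toggle them off before calling a privileged
// method such as the reported node.invoke.
const REMOTE_IMMUTABLE = new Set(["safetyPrompts", "sandbox"]);

function applyRemoteConfig(current, changes) {
  const next = { ...current };
  for (const [key, value] of Object.entries(changes)) {
    if (REMOTE_IMMUTABLE.has(key)) continue; // drop (or reject the whole request)
    next[key] = value;
  }
  return next;
}
```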
2 days ago
OpenClaw AI Agent Skills Abused for Credential Exposure and Prompt-Injection Backdooring
Security researchers and media reports warned that the open-source AI agent **OpenClaw** (formerly *Moltbot/Clawdbot*) can be abused via its *ClawHub* “skills” ecosystem, with findings that **~7.1% of marketplace skills** contributed to exposure of **API keys, credentials, and credit card data** due to problematic `SKILL.md` instructions. Snyk highlighted a particularly severe example, **buy-anything skill v2.0.0**, which performs credit-card “tokenization” in a way that can be used to **pilfer financial details** before prompting users to provide card information. Additional research described **indirect prompt-injection** risk: a malicious Google document can coerce OpenClaw into integrating a new **Telegram bot**, enabling follow-on actions such as **file exfiltration** and deployment of a **Sliver** command-and-control beacon for persistence, with potential for **privilege escalation, lateral movement, and ransomware execution**. Separately, one report noted OpenClaw’s move to scan skills with **VirusTotal**, but also emphasized that signature-based scanning is not a complete mitigation for **prompt-injection** and other logic-level abuses; other items in the same news roundup (e.g., telecom “Salt Typhoon” oversight) were unrelated to OpenClaw’s vulnerabilities.
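As the VirusTotal point above suggests, signature scanning cannot catch instructions that are malicious only in intent. A complementary, purely illustrative content-level check is to lint skill instructions for patterns that solicit payment or key material; the pattern list here is a toy, not a real detector:

```javascript
// Toy content-level lint for skill instructions (e.g., a SKILL.md body):
// counts matches against patterns that solicit payment or key material.
// Real prompt-injection defenses need far more than keyword matching.
const RISKY_PATTERNS = [
  /credit\s*card/i,
  /\bcvv\b/i,
  /api[\s_-]?key/i,
  /exfiltrat/i,
];

function riskScore(skillText) {
  return RISKY_PATTERNS.filter((re) => re.test(skillText)).length;
}
```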
1 month ago