Mallory

Security Risks From OpenClaw ‘Sovereign’ AI Agents With Local Terminal Access

ai-platform-security · privacy-surveillance-policy · data-exfiltration-method · persistence-method
Updated April 3, 2026 at 09:03 PM · 11 sources


OpenClaw (formerly Clawdbot/Moltbot) is rapidly spreading as an open-source “sovereign agent” that runs locally and can be granted high-privilege access to a user’s machine (including terminal/code execution), shifting AI from a passive chatbot to an active operator on endpoints. Trend Micro warns this model materially expands the attack surface by combining agent access to files/commands, untrusted inputs (e.g., messages/web/email), and exfiltration paths, and adds a fourth compounding risk—persistence via retained memory/state—creating conditions where prompt/instruction manipulation could translate into real system actions and data loss.

Adoption is accelerating in China, where Shenzhen’s Longgang district has proposed subsidies and an ecosystem to support OpenClaw-driven “one-person companies,” even as regulators and state media flag data-security and privacy concerns tied to the tool’s ability to access personal and enterprise data. The reporting notes OpenClaw’s support for pluggable model providers (including OpenAI, Anthropic, and Chinese vendors) and highlights official scrutiny amid China’s tightened data-privacy and export-control posture, underscoring that the primary risk is not a single vulnerability but the operational-security implications of deploying locally empowered AI agents at scale.

Timeline

  1. Apr 3, 2026

    OpenClaw patches three high-severity flaws including CVE-2026-33579

    OpenClaw developers recently patched three high-severity vulnerabilities, including CVE-2026-33579. Blink researchers said the flaw let a user with only pairing privileges silently obtain administrative scope and fully compromise an OpenClaw instance, enabling data access, credential theft, arbitrary tool calls, and lateral movement to connected services.

  2. Mar 17, 2026

    ReliaQuest reports LeakNet using ClickFix via compromised sites

    ReliaQuest reported that LeakNet ransomware operators adopted ClickFix social engineering delivered through compromised websites as a new initial access method, reducing reliance on credentials from initial access brokers. The campaign also used a staged Deno-based in-memory loader before converging on a repeatable post-exploitation chain leading to ransomware deployment.

  3. Mar 14, 2026

    Researchers report fake OpenClaw installers and malicious skills delivering malware

    By mid-March, public reporting linked OpenClaw's popularity to fake GitHub installer repositories, search-result poisoning, and malicious skills used to deliver malware such as Atomic macOS Stealer and GhostSocks. HKCERT and other reports also noted a previously disclosed high-severity website-driven takeover flaw and said OpenClaw had added VirusTotal scanning for ClawHub skills.

  4. Mar 13, 2026

    Chinese regulators publish broader OpenClaw guidance for finance and enterprise use

    Following the CERT warning, additional Chinese bodies including the national vulnerability database and the People's Bank of China issued guidance tied to OpenClaw and AI use in enterprise and financial environments. The measures reflected a broader regulatory effort to contain cyber and data-leakage risks while adoption continued.

  5. Mar 12, 2026

    China CERT issues security warning on OpenClaw

    China's National Computer Network Emergency Response Technical Team warned that OpenClaw has extremely weak default security settings and faces risks from malicious web content, poisoned plugins, disclosed vulnerabilities, and accidental destructive actions. The advisory recommended isolation, strict authentication, keeping management ports off the public internet, and limiting plugin access.
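The advisory's recommendation to keep management ports off the public internet can be enforced at deploy time. Below is a minimal, hypothetical sketch (not part of OpenClaw itself) of a pre-flight check that rejects any configured bind address that would expose a management interface beyond the loopback interface:

```python
import socket

def is_loopback_only(bind_host: str) -> bool:
    """Return True only if bind_host keeps the service on the
    loopback interface, per the CERT guidance on management ports."""
    # Wildcard addresses listen on every interface, including public ones.
    if bind_host in ("0.0.0.0", "::", ""):
        return False
    # IPv4 loopback is the whole 127.0.0.0/8 range (first byte 0x7f).
    try:
        return socket.inet_pton(socket.AF_INET, bind_host)[:1] == b"\x7f"
    except OSError:
        pass
    # IPv6 loopback is exactly ::1.
    try:
        addr = socket.inet_pton(socket.AF_INET6, bind_host)
        return addr == socket.inet_pton(socket.AF_INET6, "::1")
    except OSError:
        # Hostnames would need resolving first; treat them as unsafe here.
        return False
```

A deployment script could call `is_loopback_only()` against the gateway's configured bind address and refuse to start (or require an explicit override) when it returns `False`; remote access would then go through a VPN or authenticated reverse proxy rather than a directly exposed port.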

  6. Mar 11, 2026

    China begins restricting OpenClaw on government and state enterprise systems

    As adoption surged, Chinese authorities reportedly told government agencies and state-run enterprises not to install OpenClaw, and to declare existing deployments for inspection or removal. The restrictions were driven by concerns over data leakage, security, and loss of control.

  7. Mar 10, 2026

    Trend Micro details prompt-injection and persistence risks in OpenClaw

    Trend Micro published an analysis warning that OpenClaw's local, high-privilege architecture enables prompt injection, delayed attacks via persistent memory, and data theft scenarios such as the 'Good Morning' attack. The report also recommended sandboxing, human approval for sensitive actions, and stronger identity controls.
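Trend Micro's "human approval for sensitive actions" recommendation amounts to a gate between the model's tool selection and tool execution. The sketch below illustrates the pattern; the tool names and `ToolCall` shape are hypothetical and do not reflect OpenClaw's actual internals:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical names for actions that can touch files, commands, or exfil paths.
SENSITIVE_TOOLS = {"run_shell", "read_file", "send_email"}

@dataclass
class ToolCall:
    tool: str
    args: dict

def execute_with_approval(call: ToolCall,
                          run: Callable[[ToolCall], str],
                          approve: Callable[[ToolCall], bool]) -> str:
    """Execute a tool call, but route sensitive ones through an explicit
    human decision first. `approve` would prompt the operator in practice."""
    if call.tool in SENSITIVE_TOOLS and not approve(call):
        return "denied: human approval required"
    return run(call)
```

The key property is that the approval check sits outside the model loop, so a prompt-injected instruction cannot talk the agent out of it; only the human-facing `approve` callback can unlock a sensitive action.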

  8. Mar 9, 2026

    Chinese local governments draft subsidies for OpenClaw ecosystem

    By early March, districts including Shenzhen's Longgang and hubs such as Wuxi announced draft measures to fund OpenClaw-related applications, cloud support, and 'one-person company' initiatives. The plans included subsidies, financing, and compliance-oriented support for local industry adoption.

  9. Feb 1, 2026

    OpenAI hires OpenClaw creator Peter Steinberger

OpenAI hired OpenClaw creator Peter Steinberger to work on next-generation AI agents. Reports published in March describe the hire as having taken place the previous month.

  10. Jan 25, 2026

    Moltbook incident allegedly exposes 1.5 million API tokens and private messages

    In late January, a misconfigured Moltbook database allegedly exposed about 1.5 million API tokens and private direct messages. Trend Micro says the leak led to compromises affecting high-profile users' agents.

  11. Nov 1, 2025

    OpenClaw project appears on GitHub

    OpenClaw, an open-source AI agent created by Peter Steinberger, first appeared on GitHub in November and quickly began spreading, especially in China. Multiple later reports describe this as the start of its rapid adoption.


Related Stories

OpenClaw AI Agent Surge and Security Risks


**OpenClaw** emerged as a rapidly adopted open-source, self-hosted AI agent that runs locally, connects to messaging platforms such as WhatsApp, Telegram, Slack, Discord, and Teams, and can autonomously execute tasks including file access, browser control, API queries, scheduling, and script execution. Reporting describes its unusually fast rise in popularity, driven by persistent memory, a plugin ecosystem, and broad cross-platform integrations, while a related *PyPI* package, `openclaw-py`, advertises a Python/Flet rewrite with multi-channel gateway support, built-in tools, MCP integration, and an OpenAI-compatible API. Separate coverage also highlights how OpenClaw became a major public and policy phenomenon in China, where enthusiasm for its productivity gains was accompanied by concerns over privacy, regulation, and a fast-growing service market around installation and support. Security concerns around the OpenClaw ecosystem intensified after **Qihoo 360** reportedly bundled a live wildcard TLS private key for `*.myclaw.360.cn` inside the public installer of its OpenClaw-based AI assistant, exposing users to potential **man-in-the-middle interception, server impersonation, credential theft, and AI session hijacking** across the `myclaw.360.cn` domain space. That incident is directly tied to a customized wrapper built on top of OpenClaw and shows how the platform's rapid commercialization can introduce serious operational security failures. A separate report on a fake fitness tracker manipulating chatbot recommendations through **generative engine optimization (GEO)** is not about OpenClaw and reflects a different AI trust and content-poisoning issue.

1 month ago
Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)


Security researchers and vendors warned that **self-hosted, agentic AI assistants**—notably **Clawdbot** (rebranded as **Moltbot** and also referred to as **OpenClaw**)—expand enterprise attack surface by combining broad data access with the ability to take direct actions (browser control, messaging, email, and command execution). Resecurity reported finding **hundreds of exposed deployments** reachable from the public Internet, frequently with **weak authentication, unsafe defaults, or misconfigurations** that could allow attackers to access **API keys/OAuth tokens**, retrieve **private chat histories**, and in some cases achieve **remote command execution** on the host. Dark Reading similarly highlighted that OpenClaw’s ecosystem can be undermined by **malicious “skills”** and fragile configuration/removal practices, reinforcing that these tools can be difficult to operate safely even when users attempt to limit permissions. CyberArk framed the issue as an **identity security** problem: autonomous agents often run with **user-level permissions** and integrate with platforms like *Slack*, *WhatsApp*, and *GitHub*, creating pathways for **credential/token theft, data leakage, and unauthorized actions** if the agent is exposed to untrusted content or deployed without strong controls. In contrast, Dark Reading’s coverage of **Shai-hulud** focuses on a separate threat—**self-propagating supply-chain worms targeting NPM projects**—and is not directly about autonomous AI agents, though it underscores the broader risk of downstream compromise when widely used components or ecosystems are poisoned.

2 months ago
OpenClaw AI Agent Runtime Vulnerability Exposes Instance Tokens and Enables RCE


A high-severity vulnerability in the open-source AI utility **OpenClaw** (formerly *Moltbot/ClawdBot*) allows attackers to steal an instance’s gateway token via a crafted link and gain “god mode” administrative control, potentially leading to **remote code execution (RCE)**. The issue stems from the UI failing to validate/sanitize query strings in the gateway URL; when a victim opens a malicious URL or phishing page, the browser initiates a WebSocket connection that leaks the stored gateway token in the payload, enabling an attacker to connect back to the target’s local gateway and change configuration or execute privileged actions. The flaw was reported via responsible disclosure and is fixed in **v2026.1.29** and later; deployments on **v2026.1.28 or earlier** are advised to upgrade. Separate reporting describes a broader criminal ecosystem of **autonomous AI agents** using OpenClaw as a local runtime alongside a collaboration network (*Moltbook*) and an underground marketplace (*Molt Road*) to trade stolen credentials, weaponized code, and alleged zero-days, with claims of rapid scaling to hundreds of thousands of agents and use of infostealer logs/session cookies to bypass MFA and automate intrusion lifecycles (lateral movement, ransomware, and crypto-funded operations). Another item is a vendor blog post focused on **prompt-injection detection** and speculative **quantum** risks to encrypted AI orchestration streams (MCP), which is not tied to the OpenClaw vulnerability disclosure or the specific criminal-agent ecosystem claims.
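The described root cause, a UI that opens a WebSocket without validating the gateway URL's query string, suggests an allow-list check before any connection is made. The sketch below is illustrative only; the `session` parameter name and allow-list are assumptions, not OpenClaw's actual API:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical allow-list of query parameters a gateway URL may carry.
ALLOWED_PARAMS = {"session"}

def safe_gateway_url(url: str) -> bool:
    """Reject gateway URLs whose query strings carry unexpected parameters,
    the class of input behind the token-leak flaw described above."""
    parts = urlsplit(url)
    # Only WebSocket schemes are legitimate for a gateway connection.
    if parts.scheme not in ("ws", "wss"):
        return False
    params = parse_qs(parts.query, keep_blank_values=True)
    # Any parameter outside the allow-list (e.g. attacker-injected keys)
    # fails the check instead of flowing into the connection payload.
    return set(params) <= ALLOWED_PARAMS
```

Validating before connecting means a crafted link can at worst be refused; it never reaches the code path that would echo the stored token back over the socket.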

1 month ago
