Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)
Security researchers and vendors warned that self-hosted, agentic AI assistants, notably Clawdbot (rebranded as Moltbot and also referred to as OpenClaw), expand the enterprise attack surface by combining broad data access with the ability to take direct actions (browser control, messaging, email, and command execution). Resecurity reported finding hundreds of exposed deployments reachable from the public Internet, frequently with weak authentication, unsafe defaults, or misconfigurations that could allow attackers to access API keys and OAuth tokens, retrieve private chat histories, and in some cases achieve remote command execution on the host. Dark Reading similarly highlighted that OpenClaw’s ecosystem can be undermined by malicious “skills” and fragile configuration and removal practices, reinforcing that these tools can be difficult to operate safely even when users attempt to limit permissions.
CyberArk framed the issue as an identity security problem: autonomous agents often run with user-level permissions and integrate with platforms like Slack, WhatsApp, and GitHub, creating pathways for credential and token theft, data leakage, and unauthorized actions if the agent is exposed to untrusted content or deployed without strong controls. By contrast, Dark Reading’s coverage of Shai-hulud concerned a separate threat, self-propagating supply-chain worms targeting NPM projects, and was not directly about autonomous AI agents, though it underscored the broader risk of downstream compromise when widely used components or ecosystems are poisoned.
Timeline
Feb 19, 2026
Flare reports widespread OpenClaw exploitation by multiple threat groups
Flare reported on February 19 that multiple threat groups were actively exploiting OpenClaw through RCE, exposed interfaces, poisoned skills, and credential-harvesting campaigns. The report described campaigns such as “ClawHavoc” and warned of near-term exfiltration and persistence risks across exposed deployments.
Feb 19, 2026
Microsoft publishes guidance to run OpenClaw only in isolated environments
Microsoft advised treating OpenClaw as untrusted code execution with persistent credentials and recommended evaluating it only in isolated, disposable environments with low-privilege identities. The guidance emphasized containment, monitoring, and rapid rebuild capability over relying on prevention alone.
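Microsoft's containment-first guidance can be approximated with a hardened, disposable container. The image name and the exact flag set below are illustrative assumptions, not published Microsoft configuration; a minimal Python sketch that assembles such an invocation:

```python
# Sketch: assemble a hardened, disposable container command for evaluating
# an untrusted agent. Image name and resource limits are assumptions.
def hardened_docker_cmd(image="openclaw-eval:latest"):
    return [
        "docker", "run", "--rm",               # disposable: deleted on exit
        "--network", "none",                   # no network unless explicitly granted
        "--read-only",                         # immutable root filesystem
        "--cap-drop", "ALL",                   # drop all Linux capabilities
        "--security-opt", "no-new-privileges", # block privilege escalation
        "--user", "65534:65534",               # low-privilege identity ("nobody")
        "--memory", "1g",
        "--pids-limit", "256",                 # cap runaway process creation
        image,
    ]
```

Rapid rebuild then reduces to re-running the command against a freshly built image, which matches the guidance's emphasis on containment and disposability over prevention alone.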
Feb 19, 2026
Major firms begin restricting OpenClaw use
By February 19, reporting indicated that Meta and other AI firms had moved to restrict or limit OpenClaw use because of mounting security concerns. The restrictions reflected a shift from research warnings to concrete enterprise control actions.
Feb 17, 2026
OpenClaw releases version 2026.2.17 with security fixes
On February 17, OpenClaw released version 2026.2.17, adding new model support and platform features while also including security fixes. The release landed as the project remained under heavy scrutiny for RCE, audit findings, and malicious skills abuse.
Feb 16, 2026
OpenClaw partners with VirusTotal to scan ClawHub skills
OpenClaw maintainers announced a partnership with VirusTotal to scan for malicious skills on ClawHub, develop a threat model, and add misconfiguration auditing. The move came amid ongoing reports that attackers were bypassing marketplace checks with decoy skills and off-platform malware hosting.
Feb 16, 2026
Infostealer campaign steals OpenClaw config files and gateway tokens
Researchers reported that a likely Vidar-variant infostealer exfiltrated OpenClaw artifacts including openclaw.json, device.json, and soul.md from an infected victim. The theft showed that malware operators were beginning to target AI-agent configuration and identity material, not just browser credentials.
Feb 16, 2026
OpenAI hires OpenClaw creator Peter Steinberger
On February 16, OpenAI hired OpenClaw creator Peter Steinberger to work on safer personal and multi-agent systems. Steinberger said OpenClaw would remain open source and transition toward a foundation structure with OpenAI support.
Feb 15, 2026
Researcher publishes OpenClaw token-theft to RCE demonstration
A public write-up described how weakly configured, internet-exposed OpenClaw instances could be abused through token theft to achieve account takeover and arbitrary code execution. The author said the demonstration was performed on a default installation deployed for research.
Feb 13, 2026
OpenClaw adds detection support to Praetorian's Julius scanner
Praetorian released Julius v1.2.0 with new probes to detect exposed OpenClaw, Moltbot, and Clawdbot gateways on networks. The update reflected growing concern over misconfigured or outdated instances leaking tokens, chat histories, and filesystem access.
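A probe of the kind Julius ships can be approximated in a few lines: check whether anything answers on the default gateway port. The port number comes from the exposure reporting; treating any listener as a finding to investigate, rather than proof of compromise, is an assumption of this sketch:

```python
import socket

DEFAULT_GATEWAY_PORT = 18789  # default port cited in exposure reports

def probe_gateway(host, port=DEFAULT_GATEWAY_PORT, timeout=3.0):
    """Return True if a TCP listener answers on the gateway port.

    A listener only indicates exposure worth investigating; scanners like
    Julius follow up with protocol-level fingerprinting before reporting.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```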
Feb 12, 2026
Gartner advises organizations to block OpenClaw
By mid-February, Gartner guidance cited in industry reporting recommended that organizations block OpenClaw because of insecure-by-default agentic-AI risks. The advice also aligned with calls to rotate credentials exposed to the platform and restrict enterprise use.
Feb 11, 2026
OpenClaw creator adds ClawHub anti-abuse controls
Peter Steinberger announced security-oriented updates to ClawHub, including requiring skill uploaders to have GitHub accounts at least a week old and adding a way for users to flag malicious skills. These changes were presented as an initial response to abuse in the skills marketplace.
Feb 9, 2026
SecurityScorecard reports massive internet exposure of OpenClaw
By February 9–10, SecurityScorecard's STRIKE team reported tens of thousands of exposed OpenClaw control panels and more than 135,000 internet-facing deployments, with many vulnerable to previously patched RCE issues. The team tied the exposure to default binding on 0.0.0.0:18789, weak access controls, and widespread failure to patch.
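The 0.0.0.0:18789 default makes a local configuration audit worthwhile. The key names below ("gateway.bind", "gateway.auth.required") are hypothetical, chosen only to illustrate the two failure modes the STRIKE team described: all-interfaces binding and weak access control.

```python
import json

def audit_gateway_config(config_text):
    """Flag the two misconfigurations behind most reported exposures.

    Key names are illustrative assumptions; adapt to the real schema.
    """
    cfg = json.loads(config_text)
    gateway = cfg.get("gateway", {})
    findings = []
    if str(gateway.get("bind", "")).startswith("0.0.0.0"):
        findings.append("gateway bound to all interfaces; prefer 127.0.0.1")
    if not gateway.get("auth", {}).get("required", False):
        findings.append("gateway authentication not required")
    return findings
```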
Feb 6, 2026
Security research wave warns OpenClaw is unsafe by design
On February 6, multiple firms and researchers publicly warned that OpenClaw's architecture and defaults made safe deployment difficult, citing prompt injection, plaintext secret storage, overprivileged execution, and risky third-party skills. The reporting framed the issue as a structural security problem rather than a single bug.
Feb 6, 2026
Resecurity reports hundreds of exposed Clawdbot/Moltbot deployments
Resecurity said it found hundreds of publicly exposed Clawdbot/Moltbot instances with weak authentication or unsafe defaults, enabling access to API keys, OAuth tokens, chat histories, and in some cases remote command execution. The company also noted that Shodan had indexed large numbers of related instances, making discovery easy.
Feb 3, 2026
Researchers demonstrate prompt-injection takeover and persistence
Security researchers showed that malicious content such as web pages or documents could coerce OpenClaw into unsafe actions, including downloading and executing shell scripts and persisting changes through HEARTBEAT.md or memory files. These demonstrations highlighted the platform's exposure to indirect prompt injection from untrusted inputs.
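Because this persistence runs through ordinary files, defenders can baseline and diff them. The watched names come from the reporting (HEARTBEAT.md for persistence, soul.md among stolen identity artifacts); treating any change outside an approved window as suspect is this sketch's assumption:

```python
import hashlib
from pathlib import Path

# Files cited in persistence and theft reporting; extend with local memory files.
WATCHED = ("HEARTBEAT.md", "soul.md")

def snapshot(root):
    """Map each watched file under root to its SHA-256 digest."""
    digests = {}
    for name in WATCHED:
        path = Path(root) / name
        if path.is_file():
            digests[name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def changed(before, after):
    """Names whose digest appeared, vanished, or differs between snapshots."""
    return sorted(n for n in set(before) | set(after)
                  if before.get(n) != after.get(n))
```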
Feb 1, 2026
Threat actors begin exploiting OpenClaw within days of adoption
Multiple sources describe active exploitation starting within roughly 72 hours of OpenClaw's viral rise, using exposed admin panels, RCE, prompt injection, and credential-harvesting techniques. Reported outcomes included API key theft, message interception, and infostealer delivery.
Jan 31, 2026
Malicious skills appear in ClawHub marketplace
In late January and early February, attackers uploaded malicious third-party skills to OpenClaw's ClawHub marketplace, using social engineering and decoy packages to steal credentials, wallet data, and other secrets. Multiple reports described this as an early supply-chain style abuse of the agent ecosystem.
Jan 30, 2026
OpenClaw patches critical RCE flaw CVE-2026-25253
OpenClaw released fixes in late January for a critical remote code execution issue tracked as CVE-2026-25253, along with other high-severity flaws affecting older versions. Later reporting indicated many internet-exposed deployments remained unpatched despite the fix.
Jan 28, 2026
Security audit identifies 512 vulnerabilities in OpenClaw
A late-January 2026 security audit reportedly found 512 vulnerabilities in OpenClaw, including eight critical issues. The findings helped trigger broader scrutiny of the platform's insecure defaults, exposed interfaces, and plugin ecosystem.
Jan 25, 2026
OpenClaw gains viral popularity in late January 2026
OpenClaw, previously known as Clawdbot and Moltbot, rapidly gained adoption in late January 2026, with reports citing explosive GitHub growth and widespread self-hosted deployments. Its popularity drove broad experimentation by users and enterprises before its security model had matured.
Related Stories

Clawdbot Open-Source Agentic AI Assistants Raise Endpoint and Identity Security Risks
The open-source agentic assistant **Clawdbot** rapidly went viral on GitHub (reported at ~24,000–25,000+ stars in a short period) and drew high-profile attention, with reports of engineers running it locally on always-on hardware such as **Mac minis**. Clawdbot is positioned as a “local-first” AI gateway that can be driven from common messaging platforms (e.g., Slack/Discord/Telegram) and can take real actions on a host—invoking terminals, running scripts, using a browser for web automation, and retaining “memory” over time—effectively operating with permissions similar to a human user account. Security commentary around Clawdbot emphasizes that agentic assistants change incident patterns because they can persist like service accounts while behaving like users, expanding the blast radius if compromised or misconfigured. Key risks highlighted include **shadow AI** adoption outside IT controls, inherited or over-granted permissions across chat and SaaS tools, data exposure via long-lived context/memory, and new attack paths such as prompt manipulation or “helpful” automation that executes unsafe actions on endpoints. The guidance focuses on SOC readiness: monitoring for unusual automation behaviors and access patterns consistent with an agent executing actions across endpoints and collaboration/SaaS environments, and treating these tools as a machine-identity and endpoint-control problem rather than a simple chatbot governance issue.
1 month ago
Security Risks From OpenClaw ‘Sovereign’ AI Agents With Local Terminal Access
**OpenClaw** (formerly *Clawdbot/Moltbot*) is rapidly spreading as an open-source “sovereign agent” that runs locally and can be granted high-privilege access to a user’s machine (including terminal/code execution), shifting AI from a passive chatbot to an active operator on endpoints. Trend Micro warns this model materially expands the attack surface by combining agent **access to files/commands**, **untrusted inputs** (e.g., messages/web/email), and **exfiltration paths**, and adds a fourth compounding risk—**persistence** via retained memory/state—creating conditions where prompt/instruction manipulation could translate into real system actions and data loss. Adoption is accelerating in China, where Shenzhen’s Longgang district proposed subsidies and an ecosystem to support OpenClaw-driven “one-person companies,” even as regulators and state media flag **data security and privacy** concerns tied to the tool’s ability to access personal and enterprise data. The reporting notes OpenClaw’s plug-in model support (including OpenAI, Anthropic, and Chinese model providers) and highlights official scrutiny amid China’s tightened data-privacy and export-control posture, underscoring that the primary risk is not a single vulnerability but the **operational security implications of deploying locally empowered AI agents** at scale.
4 weeks ago
OpenClaw (ClawdBot/Moltbot) One-Click Remote Code Execution via Unsafe Gateway URL Handling
A **critical one-click remote code execution (RCE)** issue was reported in *OpenClaw* (also referred to as **ClawdBot/Moltbot**), an open-source AI “agent” assistant that runs with high local privileges and access to sensitive data (e.g., messaging apps and API keys). The described exploit chain abuses **unsafe URL parameter ingestion** (e.g., a `gatewayUrl` query parameter accepted without validation), persistence of attacker-controlled values (stored in `localStorage`), and an **automatic gateway connection** that transmits an `authToken` during the handshake—enabling **cross-site WebSocket hijacking** and ultimately unauthenticated code execution after a victim clicks a single malicious link. Reporting indicates the flaw has been **weaponized**, making it a practical drive-by compromise path for endpoints running the assistant. Separate reporting highlighted broader concerns with agentic/open-source AI tooling and deployments, including the security risks of highly privileged “AI that acts for you” and the growing attack surface created by exposed AI services. Research cited large-scale internet exposure of open-source LLM runtimes (e.g., **Ollama**) with tool-calling and weak guardrails, warning that a single vulnerability or misconfiguration could enable widespread abuse (resource hijacking, identity laundering, or remote execution of privileged operations). These themes reinforce that AI agents and self-hosted AI stacks should be treated as **critical infrastructure**, with strict input validation, hardened update/connection flows, and strong monitoring around token handling and outbound connections.
1 month ago
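The described chain works because the client persists and auto-connects to whatever gatewayUrl the page hands it. The standard mitigation is strict allow-list validation before the value is ever stored or dialed; the allowed scheme/host pairs and the default below are assumptions for illustration, not OpenClaw's actual configuration:

```python
from urllib.parse import urlparse

DEFAULT_GATEWAY = "wss://127.0.0.1:18789/"              # assumed local default
ALLOWED = {("wss", "127.0.0.1"), ("wss", "localhost")}  # illustrative allow-list

def safe_gateway_url(candidate):
    """Return candidate only if its scheme and host are explicitly allowed.

    Anything else (remote hosts, non-WebSocket schemes, unparsable input)
    falls back to the default rather than being persisted or connected to.
    """
    parts = urlparse(candidate or "")
    host = (parts.hostname or "").lower()
    if (parts.scheme, host) in ALLOWED:
        return candidate
    return DEFAULT_GATEWAY
```

Validating before persistence also neutralizes the localStorage step of the chain: a rejected value is never written, so a later automatic reconnect cannot replay it.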