Skip to main content
Mallory

Moltbot AI Assistant Adoption Drives Security Risks and Malware Impersonation

extension-plugin-hijackai-platform-securityremote-access-implantinternet-exposed-serviceleaked-secret-api-key
Updated March 21, 2026 at 02:43 PM7 sources
Share:
Moltbot AI Assistant Adoption Drives Security Risks and Malware Impersonation

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

The open-source agentic AI assistant Moltbot (formerly Clawdbot) rapidly gained developer adoption, but security researchers and media reporting warned that its “always-on” design and deep integrations can require broad access to sensitive accounts and credentials across messaging platforms and services. Reported risks include insecure deployments and misconfigurations that leave instances exposed to the internet, weak secret-handling practices (including plaintext storage on local filesystems), and the broader challenge that agentic tools can bypass traditional security boundaries unless operators implement strong controls such as least-privilege access, monitoring, encryption-at-rest, and sandboxing/containerization.

Attackers also capitalized on Moltbot’s popularity by publishing a fake Moltbot/Clawdbot VS Code extension on Microsoft’s official Marketplace, despite Moltbot not having an official extension. The malicious extension (clawdbot.clawdbot-agent) was designed to run on IDE launch, fetch config.json from clawdbot.getintwopc[.]site, execute a dropped binary (Code.exe), and install a legitimate remote access tool (ConnectWise ScreenConnect) that connected to meeting.bulletmailer[.]net:8041 for persistent attacker access; Microsoft removed the extension after it was reported.

Timeline

  1. Jan 28, 2026

    Security firms warn enterprises about insecure Moltbot deployments and prompt injection

    By late January 2026, multiple security vendors and researchers publicly warned that Moltbot's high-privilege design, plaintext secret storage, lack of sandboxing, and prompt-injection exposure created serious risks for enterprise users. Recommended mitigations included isolating deployments, restricting network exposure, encrypting secrets at rest, and treating third-party skills and extensions as untrusted code.

  2. Jan 28, 2026

    Microsoft removes the fake Moltbot VS Code extension after discovery

    After researchers identified the trojanized extension, Microsoft removed it from the Visual Studio Code Marketplace. Analysis found it could install a legitimate ConnectWise ScreenConnect client and used multiple fallback delivery methods, including Rust-based DLL sideloading and hard-coded URLs.

  3. Jan 28, 2026

    Researchers identify exposed Moltbot admin interfaces leaking sensitive data

    Researchers reported that hundreds of Moltbot/Clawdbot Control instances were exposed online due to misconfigurations such as publicly accessible reverse proxies. Some instances reportedly allowed unauthenticated access and exposed API keys, OAuth tokens, chat histories, credentials, and in some cases command execution or root-level access.

  4. Jan 28, 2026

    Researchers demonstrate supply-chain abuse via Moltbot skills registry

    Security researchers showed that Moltbot's official skills ecosystem could be abused by publishing a malicious or backdoored skill and artificially boosting its popularity to drive downloads. The demonstration highlighted the potential for command execution and exfiltration of secrets such as SSH keys and cloud credentials.

  5. Jan 28, 2026

    Clawdbot rebrands to Moltbot amid rapid adoption

    The open-source autonomous AI assistant formerly known as Clawdbot was rebranded as Moltbot as its popularity surged. Multiple reports describe the rebrand as occurring before the late-January 2026 wave of security scrutiny.

  6. Jan 27, 2026

    Malicious 'ClawdBot Agent' extension is published to VS Code Marketplace

    A fake Visual Studio Code extension impersonating the Moltbot/Clawdbot coding assistant, 'ClawdBot Agent - AI Coding Assistant' (clawdbot.clawdbot-agent), was published to Microsoft's official marketplace. The extension was designed to fetch attacker-controlled configuration and install a remote-access payload for persistent access.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)

Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)

Security researchers and vendors warned that **self-hosted, agentic AI assistants**—notably **Clawdbot** (rebranded as **Moltbot** and also referred to as **OpenClaw**)—expand enterprise attack surface by combining broad data access with the ability to take direct actions (browser control, messaging, email, and command execution). Resecurity reported finding **hundreds of exposed deployments** reachable from the public Internet, frequently with **weak authentication, unsafe defaults, or misconfigurations** that could allow attackers to access **API keys/OAuth tokens**, retrieve **private chat histories**, and in some cases achieve **remote command execution** on the host. Dark Reading similarly highlighted that OpenClaw’s ecosystem can be undermined by **malicious “skills”** and fragile configuration/removal practices, reinforcing that these tools can be difficult to operate safely even when users attempt to limit permissions. CyberArk framed the issue as an **identity security** problem: autonomous agents often run with **user-level permissions** and integrate with platforms like *Slack*, *WhatsApp*, and *GitHub*, creating pathways for **credential/token theft, data leakage, and unauthorized actions** if the agent is exposed to untrusted content or deployed without strong controls. In contrast, Dark Reading’s coverage of **Shai-hulud** focuses on a separate threat—**self-propagating supply-chain worms targeting NPM projects**—and is not directly about autonomous AI agents, though it underscores the broader risk of downstream compromise when widely used components or ecosystems are poisoned.

2 months ago
Viral Moltbot AI Assistant Raises Security Concerns as Moltbook Agent Social Network Emerges

Viral Moltbot AI Assistant Raises Security Concerns as Moltbook Agent Social Network Emerges

The open-source AI assistant **Moltbot** (also referred to as *OpenClaw*) has gone viral due to its ability to autonomously perform real-world tasks on a user’s computer—interacting through common messaging platforms (e.g., iMessage, WhatsApp, Telegram, Discord, Slack, Signal) and integrating with personal accounts such as calendars and email. Coverage highlights that this broad access and autonomy materially increases risk, with recommendations to run the tool in an isolated environment (e.g., a dedicated machine) to reduce blast radius if the agent is compromised or behaves unexpectedly. A companion project, **Moltbook**, has rapidly scaled into a Reddit-style social network where AI agents can post and interact without human intervention, reportedly reaching tens of thousands of registered agent users and generating large volumes of automated content across many subcommunities. Moltbook operates via a downloadable “skill” configuration (a prompt/config file) that enables agents to post via API, creating additional exposure to prompt/config supply-chain risks and automated abuse; reporting frames the ecosystem’s growth as occurring alongside “deep security issues” inherent in highly-permissioned, plugin/skill-driven agent architectures.

1 months ago
Clawdbot Open-Source Agentic AI Assistants Raise Endpoint and Identity Security Risks

Clawdbot Open-Source Agentic AI Assistants Raise Endpoint and Identity Security Risks

The open-source agentic assistant **Clawdbot** rapidly went viral on GitHub (reported at ~24,000–25,000+ stars in a short period) and drew high-profile attention, with reports of engineers running it locally on always-on hardware such as **Mac minis**. Clawdbot is positioned as a “local-first” AI gateway that can be driven from common messaging platforms (e.g., Slack/Discord/Telegram) and can take real actions on a host—invoking terminals, running scripts, using a browser for web automation, and retaining “memory” over time—effectively operating with permissions similar to a human user account. Security commentary around Clawdbot emphasizes that agentic assistants change incident patterns because they can persist like service accounts while behaving like users, expanding the blast radius if compromised or misconfigured. Key risks highlighted include **shadow AI** adoption outside IT controls, inherited or over-granted permissions across chat and SaaS tools, data exposure via long-lived context/memory, and new attack paths such as prompt manipulation or “helpful” automation that executes unsafe actions on endpoints. The guidance focuses on SOC readiness: monitoring for unusual automation behaviors and access patterns consistent with an agent executing actions across endpoints and collaboration/SaaS environments, and treating these tools as a machine-identity and endpoint-control problem rather than a simple chatbot governance issue.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.