Mallory

Clawdbot Open-Source Agentic AI Assistants Raise Endpoint and Identity Security Risks

ai-platform-security · identity-authentication-vulnerability · endpoint-software-vulnerability · unmanaged-asset-discovery
Updated March 21, 2026 at 02:44 PM · 3 sources


The open-source agentic assistant Clawdbot rapidly went viral on GitHub (reported at ~24,000–25,000+ stars in a short period) and drew high-profile attention, with reports of engineers running it locally on always-on hardware such as Mac minis. Clawdbot is positioned as a “local-first” AI gateway that can be driven from common messaging platforms (e.g., Slack/Discord/Telegram) and can take real actions on a host—invoking terminals, running scripts, using a browser for web automation, and retaining “memory” over time—effectively operating with permissions similar to a human user account.

Security commentary around Clawdbot emphasizes that agentic assistants change incident patterns because they can persist like service accounts while behaving like users, expanding the blast radius if compromised or misconfigured. Key risks highlighted include shadow AI adoption outside IT controls, inherited or over-granted permissions across chat and SaaS tools, data exposure via long-lived context/memory, and new attack paths such as prompt manipulation or “helpful” automation that executes unsafe actions on endpoints. The guidance focuses on SOC readiness: monitoring for unusual automation behaviors and access patterns consistent with an agent executing actions across endpoints and collaboration/SaaS environments, and treating these tools as a machine-identity and endpoint-control problem rather than a simple chatbot governance issue.

Timeline

  1. Jan 28, 2026

    Reports link Clawdbot demand to increased Mac mini purchases

    A January 28, 2026 report claimed that demand for always-on local Clawdbot deployments was driving increased purchases of Mac minis, which were favored for stability, energy efficiency, and consumer-like browser fingerprinting. The same reporting advised running the agent on dedicated hardware or in a virtual machine because of the high privileges it requires.

  2. Jan 28, 2026

    Google AI Studio's Logan Kilpatrick publicly endorses Clawdbot

    A public endorsement from Logan Kilpatrick of Google AI Studio was cited as helping elevate Clawdbot's profile during its rise in popularity. The endorsement was reported alongside broader attention from the engineering community.

  3. Jan 28, 2026

    Clawdbot gains rapid popularity as an open-source AI agent

    By late January 2026, Clawdbot was described as rapidly gaining traction on GitHub as a local-first open-source AI agent capable of executing actions such as coding, email handling, scripting, and browser automation. Coverage characterized it as an "Open-Source Jarvis" and noted growing interest from engineers.

  4. Jan 27, 2026

    Security analysts warn Clawdbot-style agents create new IAM and SOC risks

    On January 27, 2026, security analysis pieces warned that Clawdbot-style agentic assistants introduce machine-identity, access-control, and monitoring challenges because they can retain context, use long-lived tokens, and execute actions across local systems and SaaS platforms. The reports highlighted risks including shadow AI deployments, prompt manipulation, excessive privileges, and insufficient isolation, while recommending controls such as sandboxing, Docker isolation, least-privilege scopes, log preservation, and rapid revocation of integrations.
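
The controls recommended above (sandboxing, Docker isolation, least-privilege scopes, log preservation) can be sketched as a hardened container launch. This is a minimal illustration under stated assumptions, not Clawdbot's actual packaging: the image name, port, and state path are hypothetical.

```shell
# Hypothetical hardened launch for a self-hosted agent gateway.
# Image name, port, and state path are illustrative assumptions.
docker run -d --name agent-sandbox \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --memory=2g --pids-limit=256 \
  -p 127.0.0.1:8080:8080 \
  -v /srv/agent/state:/state \
  example/agent-gateway:latest

# --read-only / --cap-drop=ALL : immutable root filesystem, no Linux capabilities
# -p 127.0.0.1:8080:8080       : bind the gateway to loopback only, never 0.0.0.0
# -v /srv/agent/state:/state   : single writable directory for the agent's
#                                "memory", which can be snapshotted for logs
```

Pairing this with short-lived, narrowly scoped API tokens keeps the recommended "rapid revocation of integrations" practical if the container misbehaves.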


Related Stories

Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)

Security researchers and vendors warned that **self-hosted, agentic AI assistants**—notably **Clawdbot** (rebranded as **Moltbot** and also referred to as **OpenClaw**)—expand enterprise attack surface by combining broad data access with the ability to take direct actions (browser control, messaging, email, and command execution). Resecurity reported finding **hundreds of exposed deployments** reachable from the public Internet, frequently with **weak authentication, unsafe defaults, or misconfigurations** that could allow attackers to access **API keys/OAuth tokens**, retrieve **private chat histories**, and in some cases achieve **remote command execution** on the host. Dark Reading similarly highlighted that OpenClaw’s ecosystem can be undermined by **malicious “skills”** and fragile configuration/removal practices, reinforcing that these tools can be difficult to operate safely even when users attempt to limit permissions. CyberArk framed the issue as an **identity security** problem: autonomous agents often run with **user-level permissions** and integrate with platforms like *Slack*, *WhatsApp*, and *GitHub*, creating pathways for **credential/token theft, data leakage, and unauthorized actions** if the agent is exposed to untrusted content or deployed without strong controls. In contrast, Dark Reading’s coverage of **Shai-hulud** focuses on a separate threat—**self-propagating supply-chain worms targeting NPM projects**—and is not directly about autonomous AI agents, though it underscores the broader risk of downstream compromise when widely used components or ecosystems are poisoned.

2 months ago
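
Resecurity's finding of hundreds of Internet-reachable deployments suggests a quick self-audit: verify that a locally run gateway is bound to loopback rather than all interfaces. A minimal sketch in Python; the listener entries are illustrative examples of what you might collect from `ss -ltn` output, not real deployment data.

```python
import ipaddress

def is_publicly_bound(bind_addr: str) -> bool:
    """Return True if a service bound to bind_addr is reachable from other hosts."""
    if bind_addr in ("0.0.0.0", "::"):  # wildcard binds listen on every interface
        return True
    # Any non-loopback address (LAN or public) may be routable from elsewhere.
    return not ipaddress.ip_address(bind_addr).is_loopback

def audit(listeners: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Filter a (bind_addr, port) list down to potentially exposed listeners."""
    return [(addr, port) for addr, port in listeners if is_publicly_bound(addr)]

# Example entries as you might collect them from `ss -ltn`; port is hypothetical.
listeners = [("127.0.0.1", 8080), ("0.0.0.0", 8080), ("::1", 9090)]
print(audit(listeners))  # only the wildcard bind is flagged
```

A loopback-only bind does not replace authentication, but it removes the "reachable from the public Internet" failure mode the research describes.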
Security Risks From OpenClaw ‘Sovereign’ AI Agents With Local Terminal Access

**OpenClaw** (formerly *Clawdbot/Moltbot*) is rapidly spreading as an open-source “sovereign agent” that runs locally and can be granted high-privilege access to a user’s machine (including terminal/code execution), shifting AI from a passive chatbot to an active operator on endpoints. Trend Micro warns this model materially expands the attack surface by combining agent **access to files/commands**, **untrusted inputs** (e.g., messages/web/email), and **exfiltration paths**, and adds a fourth compounding risk—**persistence** via retained memory/state—creating conditions where prompt/instruction manipulation could translate into real system actions and data loss. Adoption is accelerating in China, where Shenzhen’s Longgang district proposed subsidies and an ecosystem to support OpenClaw-driven “one-person companies,” even as regulators and state media flag **data security and privacy** concerns tied to the tool’s ability to access personal and enterprise data. The reporting notes OpenClaw’s plug-in model support (including OpenAI, Anthropic, and Chinese model providers) and highlights official scrutiny amid China’s tightened data-privacy and export-control posture, underscoring that the primary risk is not a single vulnerability but the **operational security implications of deploying locally empowered AI agents** at scale.

4 weeks ago
Moltbot AI Assistant Adoption Drives Security Risks and Malware Impersonation

The open-source agentic AI assistant **Moltbot** (formerly *Clawdbot*) rapidly gained developer adoption, but security researchers and media reporting warned that its “always-on” design and deep integrations can require broad access to sensitive accounts and credentials across messaging platforms and services. Reported risks include insecure deployments and misconfigurations that leave instances exposed to the internet, weak secret-handling practices (including plaintext storage on local filesystems), and the broader challenge that agentic tools can bypass traditional security boundaries unless operators implement strong controls such as least-privilege access, monitoring, encryption-at-rest, and sandboxing/containerization. Attackers also capitalized on Moltbot’s popularity by publishing a **fake Moltbot/Clawdbot VS Code extension** on Microsoft’s official Marketplace, despite Moltbot not having an official extension. The malicious extension (`clawdbot.clawdbot-agent`) was designed to run on IDE launch, fetch `config.json` from `clawdbot.getintwopc[.]site`, execute a dropped binary (`Code.exe`), and install a legitimate remote access tool (**ConnectWise ScreenConnect**) that connected to `meeting.bulletmailer[.]net:8041` for persistent attacker access; Microsoft removed the extension after it was reported.

1 month ago
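
The fake-extension incident above lends itself to a simple host check: look for the reported known-bad extension ID among installed VS Code extensions. A minimal sketch in Python; the default extensions directory and the version-suffix handling are assumptions about a typical VS Code install.

```python
from pathlib import Path

# Known-bad extension ID from the reporting. VS Code installs extensions as
# <publisher>.<name>-<version> directories under the (assumed) default path.
MALICIOUS_IDS = {"clawdbot.clawdbot-agent"}

def find_suspect_extensions(ext_dir: Path) -> list[str]:
    """Return installed extension folders whose ID matches a known-bad entry."""
    hits = []
    for entry in ext_dir.iterdir():
        if not entry.is_dir():
            continue
        # Strip the trailing -<version> suffix to recover publisher.name
        ext_id = entry.name.rsplit("-", 1)[0].lower()
        if ext_id in MALICIOUS_IDS:
            hits.append(entry.name)
    return hits

if __name__ == "__main__":
    default_dir = Path.home() / ".vscode" / "extensions"  # assumed default location
    if default_dir.is_dir():
        for name in find_suspect_extensions(default_dir):
            print(f"ALERT: known-malicious extension installed: {name}")
```

A presence check like this complements, rather than replaces, network-level hunting for the reported C2 domains.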

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.