Prompt-injection RCE risks in agentic AI tools with OS and browser automation
Security researchers and CERT/CC reporting highlighted critical prompt-injection-to-execution paths in agentic AI systems, where untrusted content can be interpreted as instructions and then executed via connected tools. In ModelScope MS-Agent, CVE-2026-2256 (CVSS 9.8) was reported as a command-injection/RCE issue in the framework’s “Shell tool”: external input is passed to OS command execution without proper sanitization, and a check_safe() denylist-based filter was described as bypassable via obfuscation and alternate shell syntax, enabling arbitrary command execution and potential full host compromise.
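Why denylist filters like the described check_safe() fail is easy to illustrate. The sketch below is hypothetical (the patterns and payloads are not MS-Agent’s actual code); it shows how trivial quoting or encoding tricks slip past pattern matching on known-bad strings:

```python
import re

# Hypothetical denylist in the style described for MS-Agent's filter.
DENYLIST = [r"\brm\b", r"\bcurl\b", r"\bwget\b", r";"]

def check_safe(command: str) -> bool:
    """Denylist check: reject commands matching known-bad patterns,
    allow everything else. This is the bypassable design."""
    return not any(re.search(p, command) for p in DENYLIST)

# The obvious payload is blocked...
assert not check_safe("rm -rf /")
# ...but shell quoting splits the token the regex looks for:
assert check_safe("r''m -rf /")
# ...and command substitution hides the binary name entirely
# ("cm0=" is base64 for "rm"):
assert check_safe("$(echo cm0= | base64 -d) -rf /")
```

Because the shell reassembles these payloads at execution time, enumerating bad strings can never be complete; allowlisting permitted programs avoids the entire bypass class.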
Separate research from Zenity Labs described a broader class of agentic AI browser weaknesses (including in Perplexity’s Comet) in which attackers hijack autonomous workflows via indirect prompt injection delivered through normal channels such as a calendar invite. Prior to patches, this could drive the browser to access local files, read directories, and exfiltrate data, and in some cases abuse the agent’s existing authenticated context to interact with sensitive services, including password managers. A similar execution-model risk was reported in Langflow’s CSV Agent as CVE-2026-27966 (CVSS 10.0): allow_dangerous_code=True was hardcoded, enabling LangChain’s python_repl_ast tool and allowing remote attackers with chat access to coerce server-side code execution and full system compromise via prompt injection.
Timeline
Mar 4, 2026
No patch available for MS-Agent at disclosure
At the time CVE-2026-2256 was disclosed, no vendor patch or official statement was available for ModelScope MS-Agent. Recommended mitigations included sandboxing the Shell tool, running agents with least privilege, treating external content as untrusted input, and replacing denylist filtering with allowlists.
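The allowlist recommendation can be sketched as follows. This is a minimal illustration under assumed names (ALLOWED_COMMANDS and run_tool_command are hypothetical, not MS-Agent code): parse the command without shell interpretation, then require the program to be explicitly permitted before it ever reaches the OS.

```python
import shlex

# Hypothetical allowlist: the only programs the agent tool may invoke.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def run_tool_command(command: str) -> list[str]:
    """Split with shlex (no shell, so $(...), ;, and | are inert text),
    then refuse anything whose program is not on the allowlist."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowlisted: {argv[:1]}")
    # In a real tool, pass argv to subprocess.run(argv, shell=False).
    return argv

assert run_tool_command("ls -la /tmp") == ["ls", "-la", "/tmp"]
try:
    run_tool_command("rm -rf /")
    raise AssertionError("should have been refused")
except PermissionError:
    pass
```

Invoking the parsed argv with shell=False is what closes the obfuscation bypasses: substitution and chaining syntax arrive as literal arguments, not executable shell constructs.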
Mar 4, 2026
CERT/CC discloses critical MS-Agent command execution flaw
CERT/CC disclosed CVE-2026-2256, a critical vulnerability in ModelScope MS-Agent that allows prompt-injection-style inputs to trigger malicious OS commands through the built-in Shell tool. The flaw could lead to remote code execution, data theft, file tampering, persistence, and lateral movement.
Mar 3, 2026
Zenity publicly discloses agentic AI browser hijacking research
Zenity Labs publicly disclosed a suite of vulnerabilities in agentic AI browsers, including Perplexity Comet, showing that prompt injection via legitimate calendar invites could lead to file access, data exfiltration, and password manager abuse. The research highlighted weak trust boundaries between user intent and agent execution.
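The weak trust boundary Zenity described can be made concrete with a hypothetical policy gate (the names here are illustrative, not Comet’s actual architecture): tag every instruction with its origin, and refuse sensitive tool calls driven by content the user did not author.

```python
from dataclasses import dataclass

@dataclass
class AgentInstruction:
    text: str
    origin: str  # "user" or "external" (e.g. a calendar invite body)

# Hypothetical set of tools that touch local files or credentials.
SENSITIVE_TOOLS = {"read_local_file", "password_manager"}

def authorize(instruction: AgentInstruction, tool: str) -> bool:
    """Deny sensitive tool use when the driving instruction came from
    external content rather than the user -- the boundary the research
    showed was missing before the patches."""
    if tool in SENSITIVE_TOOLS and instruction.origin != "user":
        return False
    return True

assert authorize(AgentInstruction("summarize my files", "user"), "read_local_file")
assert not authorize(AgentInstruction("open file:///etc/passwd", "external"), "read_local_file")
```

The point of the sketch is that origin must be tracked as data flows through the agent; a model that merges a calendar invite into its context without such a tag has no basis for this check.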
Mar 3, 2026
Langflow recommends upgrade to version 1.8.0
Langflow's official security advisory recommended updating to version 1.8.0 to remediate CVE-2026-27966. The update changed the default behavior to prevent automatic execution of dangerous code.
Mar 3, 2026
Langflow discloses critical CSV Agent RCE flaw
A critical Langflow AI CSV Agent vulnerability, tracked as CVE-2026-27966 and rated 10.0, was publicly disclosed. The flaw stemmed from the CSV Agent node being hardcoded with allow_dangerous_code=True, enabling prompt injection to trigger Python and OS command execution on the server.
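Why a hardcoded allow_dangerous_code=True is equivalent to remote code execution can be shown with a minimal sketch. This is not Langflow’s or LangChain’s actual implementation; it models the shape of a python_repl_ast-style tool to show what the flag gates:

```python
def python_repl_tool(code: str, allow_dangerous_code: bool = False) -> str:
    """Sketch of a REPL-style agent tool: executes model-generated Python
    on the server. If the flag is hardcoded to True, any prompt the model
    obeys becomes server-side code execution."""
    if not allow_dangerous_code:
        raise PermissionError("dangerous code execution disabled by default")
    local_vars: dict = {}
    exec(code, {}, local_vars)  # full Python: os.system, file I/O, sockets...
    return str(local_vars.get("result"))

# Safe default: execution refused unless explicitly opted in.
try:
    python_repl_tool("result = 1 + 1")
    raise AssertionError("should have been refused")
except PermissionError:
    pass

# With the flag forced on, attacker-steered code runs server-side.
assert python_repl_tool("result = 1 + 1", allow_dangerous_code=True) == "2"
```

The 1.8.0 remediation described above corresponds to flipping that default: execution must be an explicit operator decision rather than a property of the node.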
Feb 1, 2026
Perplexity fixes Comet AI browser vulnerabilities
Perplexity issued a fix for the reported Comet vulnerabilities in February 2026. The patch addressed issues that previously allowed attacker-controlled content to be treated as user intent and trigger sensitive autonomous actions.
Jan 1, 2025
Perplexity notified of agentic AI browser vulnerabilities
Zenity Labs reported multiple prompt-injection vulnerabilities affecting agentic AI browsers, including Perplexity Comet, to Perplexity in 2025. The issues showed that malicious calendar invites and indirect prompts could hijack browser agents, access local files, and abuse connected tools.
Related Stories

Prompt Injection Risks in Agentic AI and AI-Powered Browsers
Security researchers reported that **prompt injection** is enabling practical attacks against *agentic AI* systems that have access to tools and user data, and argued the industry is underestimating the threat. A proposed framing, **“promptware,”** describes malicious prompts as a malware-like execution mechanism that can drive an LLM to take actions via its connected tools—potentially leading to **data exfiltration**, cross-system propagation, IoT manipulation, or even **arbitrary code execution**, depending on the permissions and integrations available. Trail of Bits disclosed results from an adversarial security assessment of Perplexity’s *Comet* browser, showing how prompt injection techniques could be used to **extract private information from authenticated sessions (e.g., Gmail)** by abusing the browser’s AI assistant and its tool access (such as reading page content, using browsing history, and interacting with the browser). Their threat-model-driven testing emphasized that agentic assistants can treat external web content as instructions unless it is explicitly handled as **untrusted input**, and they published recommendations intended to reduce prompt-injection-driven data paths between the user’s local trust zone (profiles/cookies/history) and vendor-hosted agent/chat services.
1 month ago
Indirect Prompt Injection and Data Exfiltration Risks in Enterprise AI Agents
Security researchers warned that **AI agents and retrieval-augmented generation (RAG) systems** can be turned into data-exfiltration channels when attackers poison inputs or embed malicious instructions in content the model is expected to process. One report described a **0-click indirect prompt injection** against *OpenClaw* agents in which hidden instructions cause the agent to generate an attacker-controlled URL containing sensitive data such as API keys or private conversations in query parameters; messaging platforms like *Telegram* or *Discord* can then automatically request that URL for link previews, silently delivering the data to the attacker. The same reporting noted concerns about insecure defaults that allow agents to browse, execute tasks, and access local files, expanding the blast radius of prompt-injection abuse. Related analysis highlighted that the same core weakness extends beyond standalone agents to **enterprise RAG deployments**, where the integrity of the knowledge base becomes part of the security boundary. If attackers can poison indexed documents in systems such as SharePoint or Confluence, they can manipulate retrieval results and influence model outputs, including security workflows and analyst guidance. Broader commentary on **agentic AI threat convergence** reinforced that prompt engineering is no longer just a productivity technique but an emerging exploit class, with adversaries using prompt injection and context manipulation against AI-enabled security operations. Together, the reporting shows that enterprise AI risk increasingly depends on controlling untrusted content, hardening agent permissions, and treating prompts, retrieved documents, and downstream integrations as attack surfaces.
1 week ago
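The 0-click exfiltration path described above suggests an egress check on agent output before it reaches link-previewing channels. The sketch below is hypothetical (the secret patterns and function name are illustrative): scan outbound URLs for credential-shaped query-parameter values before the message is posted.

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical patterns for secrets that should never leave in a URL
# (API-key-shaped and AWS-access-key-shaped strings).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def url_leaks_secrets(url: str) -> bool:
    """Flag URLs whose query parameters carry secret-shaped values --
    the channel the 0-click report describes, where a link preview
    fetch silently delivers the data to the attacker's server."""
    params = parse_qs(urlparse(url).query)
    return any(
        pattern.search(value)
        for values in params.values()
        for value in values
        for pattern in SECRET_PATTERNS
    )

assert url_leaks_secrets("https://evil.example/c?key=sk-abcdefghijklmnopqrstuv")
assert not url_leaks_secrets("https://docs.example/page?id=42")
```

Pattern matching is necessarily incomplete (attackers can encode payloads), so such a check complements, rather than replaces, restricting which hosts an agent may emit links to.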
Indirect Prompt Injection and AI Agent Abuse Expands Real-World Attack Surface
Security researchers and industry reporting describe **prompt injection—especially web-based indirect prompt injection (IDPI)**—as an increasingly practical technique for compromising or manipulating **LLM-powered agents** embedded in browsers and automated content pipelines. Palo Alto Networks Unit 42 reported in-the-wild IDPI activity where malicious instructions are hidden in web content that an agent later ingests, with observed objectives including **AI-based ad review evasion** and **SEO manipulation** that promotes phishing infrastructure. Separately, Zenity Labs detailed a now-patched issue in Perplexity’s *Comet* AI browser where attackers could embed instructions in a **calendar invite** to coerce the agent into accessing `file://` resources and potentially pivoting into sensitive data such as an unlocked **1Password** extension vault, illustrating how agentic tooling can bypass traditional browser-origin assumptions. Threat reporting also shows adversaries operationalizing AI to scale exploitation. Team Cymru linked an AI-assisted Fortinet FortiGate targeting campaign (previously reported by Amazon Threat Intelligence as compromising **600+ devices across 55 countries** using services like **Claude** and **DeepSeek**) to use of **CyberStrikeAI**, an open-source Go-based platform that integrates 100+ security tools and was observed from multiple IPs (primarily hosted in China/Singapore/Hong Kong, with additional infrastructure elsewhere). Multiple commentaries and briefings emphasize that conventional “filter the prompt” defenses are insufficient because LLMs lack a native separation between instructions and data; they call for **defense-in-depth** around AI pipelines, including least-privilege agent permissions, auditable tool use, and stronger identity/workload controls as agent deployments multiply. 
1 month ago