Security Risk Management for Agentic AI in Browsers and Applications
Security teams are increasingly treating agentic AI—systems that can interpret untrusted content and take actions—as a new class of enterprise risk that breaks assumptions in traditional security and threat modeling. As AI moves into user-facing workflows, especially where models can reason over web content and instructions, untrusted inputs can influence behavior in ways that resemble inferred intent rather than explicit user actions, expanding the attack surface beyond conventional boundaries designed for deterministic software.
Socradar highlighted that AI-based browsers vary materially in risk: “assistant” modes (e.g., summarization on request) can often be governed with existing controls, while agentic browsers that autonomously navigate and act within a user session introduce risks that classic browser security models were not designed to contain—particularly when page text/metadata becomes model input. Microsoft emphasized that threat modeling for AI applications must adapt because generative/agentic systems are probabilistic, have uneven performance across languages and contexts, and treat conversation/instructions as part of a single input stream; this requires planning for rare but high-impact failure modes and adversarial manipulation rather than relying on predictable code paths and stable input/output behavior.
Timeline
Apr 13, 2026
Varonis details architectural attack surfaces in agentic LLM browsers
Varonis Threat Labs published research arguing that agentic LLM browsers introduce a privileged control layer that can bypass traditional browser security boundaries. The report analyzed products including Perplexity Comet, OpenAI Atlas, Microsoft Edge Copilot, and Brave Leo, highlighting trusted communication bridges and noting that some title-based prompt injection was fixed during the research.
Feb 26, 2026
SOCRadar analyzes security risks in AI-based browsers
SOCRadar published an analysis warning that agentic AI browsers introduce structural security risks because untrusted web, email, and file content can influence model behavior. The article highlighted attack classes including indirect and multimodal prompt injection, privacy profiling risks, and examples such as EchoLeak, cross-tab data exposure in Perplexity Comet, HashJack, and omnibox intent ambiguity in OpenAI Atlas.
Feb 26, 2026
Microsoft publishes guidance on threat modeling AI applications
Microsoft Security Blog published guidance arguing that traditional threat modeling must be adapted for generative and agentic AI systems. The post outlined AI-specific risks such as prompt injection, tool misuse, privilege escalation, data exfiltration, and harmful outputs, and recommended mitigations including least privilege, separation of instructions from untrusted content, and stronger observability.
Jan 16, 2026
Wiz reviews 2025 security failures and defenses in agentic browsers
A Wiz Blog year-end review described how mainstream adoption of agentic browsers in 2025 drove extensive offensive research into prompt injection, phishing, data exfiltration, session poisoning, and task hijacking across products from OpenAI, Perplexity, Opera, and others. It also summarized vendor responses including human-in-the-loop confirmations, architectural isolation, reinforcement learning-based hardening, secondary model critics, and access restrictions, while noting prompt injection remained unresolved.
See the full picture in Mallory
Mallory subscribers get deeper analysis on every story, including:
Who’s affected and how
Deep-dive technical analysis
Actionable next steps for your team
IPs, domains, hashes, and more
Ask questions and take action on every story
Filter by topic, classification, timeframe
Get matching stories delivered automatically
Related Entities
Vulnerabilities
Organizations
Affected Products
Sources
Related Stories

AI Agentic Browsers and Automation Pose New Security Risks
The rapid adoption of agentic AI technologies in enterprise environments is leading to significant security incidents and operational disasters. High-profile cases, such as the accidental deletion of an entire codebase by an AI agent at Replit, highlight the risks of deploying autonomous AI systems without robust governance and oversight. Experts warn that as organizations integrate more AI agents capable of taking independent actions, the likelihood of unintended and potentially catastrophic outcomes will increase. Security professionals emphasize the need for comprehensive planning, internal governance committees, and new technical safeguards to prevent such incidents from proliferating as AI becomes more deeply embedded in business processes. The emergence of 'agentic' AI browsers marks a fundamental shift in the threat landscape, transforming browsers from passive tools into active, autonomous agents capable of executing tasks on behalf of users. This evolution introduces new attack surfaces and challenges for security teams, as these browsers can now interact with web content, initiate transactions, and potentially make decisions without direct human oversight. Security experts urge organizations to reassess their risk models, implement stricter controls, and prepare for a future where AI-driven automation can both enhance productivity and introduce novel vulnerabilities that traditional security measures may not address.
1 months ago
AI Agents and Agentic Browsers Introduce New Enterprise Security Risks
Commentary and research coverage warned that **AI agents**—including “agentic browsers” that act on a user’s behalf—are creating new enterprise attack surfaces faster than governance and detection are maturing. SC Media argued organizations should not wait for NIST guidance to treat AI agents as a security priority, citing a lack of visibility into what agents can access and do (tools, data, actions, and auditability) and emphasizing that agents can cause harm through authorized-but-dangerous actions even without traditional “compromise.” Dark Reading, citing **Trail of Bits** research, described how agentic browsers can undermine decades of browser security hardening by treating the agent as a trusted proxy that can traverse tabs and local resources, weakening isolation assumptions that underpin controls like the **same-origin policy**. Trail of Bits’ findings highlighted practical abuse paths where attackers can manipulate an agent’s context (for example via **reflected XSS**) and then induce **data exfiltration** by persuading the agent to send local or cross-tab information to attacker-controlled infrastructure—classes of attacks that modern browsers have made significantly harder for direct user sessions. Other items in the set were general risk-management or unrelated vulnerability/news pieces (e.g., Oracle patch volume, GitLab 2FA bypass, cloud demo misconfigurations, and enterprise browser selection guidance) and did not materially add to the specific story about agentic/AI-agent security regression and emerging exploitation techniques.
1 months ago
Enterprise Security Risks of AI-Enabled Web Browsers
Gartner has issued a warning to businesses about the adoption of AI-powered or agentic web browsers, citing significant cybersecurity risks associated with these emerging technologies. These browsers, developed by both major vendors and new entrants such as OpenAI and Perplexity, offer advanced automation, content summarization, and workflow management features. However, Gartner's advisory urges CISOs to block all AI browsers for the foreseeable future, emphasizing that the convenience and efficiency gains do not outweigh the current security concerns, which include potential data leakage, unauthorized access, and the immaturity of security controls in these products. Industry experts echo the need for caution, highlighting that while AI browsers can streamline research and personalization, they also introduce new attack surfaces and risks related to credential theft, session hijacking, and exposure of sensitive information. The rapid integration of AI into browsers has outpaced the development of robust governance, observability, and lifecycle management practices, making it critical for organizations to prioritize security and oversight before deploying these tools in business environments.
1 months ago