Skip to main content
Mallory

Privacy and Security Risks in AI-Powered Browser Agents

ai-platform-securityendpoint-software-vulnerabilitywidely-deployed-product-advisory
Updated March 21, 2026 at 03:03 PM2 sources
Share:
Privacy and Security Risks in AI-Powered Browser Agents

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

A recent academic study has revealed significant privacy and security vulnerabilities in eight popular AI-powered browser agents, including ChatGPT Agent, Google Project Mariner, and Amazon Nova Act. The research identified 30 vulnerabilities across areas such as agent architecture, handling of unsafe sites, cross-site tracking, and the disclosure of personal data. Notably, most agents rely on off-device language models, resulting in sensitive user data being transmitted to third-party servers, and some agents were found to use outdated browsers with known security flaws, increasing the risk of exploitation.

In response to these emerging threats, OpenAI has implemented continuous security hardening for its ChatGPT Atlas browser agent, focusing particularly on defending against prompt injection attacks. Leveraging automated red teaming and reinforcement learning, OpenAI has proactively identified and mitigated new classes of prompt-injection exploits, recently shipping a security update with adversarially trained models and enhanced safeguards. These efforts underscore the ongoing challenge of securing AI-driven browser agents as they become increasingly integrated into user workflows and targeted by adversaries.

Timeline

  1. Dec 22, 2025

    OpenAI publishes work on hardening ChatGPT Atlas against prompt injection

    OpenAI published a blog post describing ongoing efforts to strengthen ChatGPT Atlas against prompt injection attacks. The reference indicates a public disclosure of defensive work, but provides no further event details in the supplied content.

  2. Dec 22, 2025

    Researchers recommend privacy-focused improvements for browser agents

    Following the study, the researchers urged browser-agent developers to work with privacy experts and adopt automated test suites to improve privacy protections. They also said they plan to release additional tools and datasets to support ongoing privacy testing.

  3. Dec 22, 2025

    Academic study evaluates eight browser agents for privacy and security risks

    A 2025 academic study assessed eight popular browser agents, including ChatGPT Agent, Google Project Mariner, and Amazon Nova Act, and identified 30 vulnerabilities across five privacy and security risk areas. The findings included issues such as off-device language model use, outdated browser versions, weak phishing and TLS warning handling, cross-site tracking weaknesses, automatic acceptance of privacy prompts, and unnecessary disclosure of personal data.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Sources

December 22, 2025 at 12:00 AM

Related Stories

Security Risks of AI-Powered Web Browsers and Tools

Security Risks of AI-Powered Web Browsers and Tools

Security experts have raised concerns about the rapid adoption of AI-powered browsers and tools, highlighting significant risks such as prompt injection, data theft, and privacy violations. The launch of new AI browsers like OpenAI's Atlas, which integrates ChatGPT directly into the browsing experience, has brought these issues to the forefront. These browsers can access and process user data from web sessions, increasing the potential for sensitive information to be inadvertently exposed or misused. Industry leaders and researchers warn that the security and privacy safeguards for these AI-driven applications are lagging behind their rapid development and deployment. The proliferation of AI assistants, chatbots, and smart browsers has outpaced the implementation of robust security controls, leaving users vulnerable to both accidental and malicious data leaks. Experts emphasize that while AI tools promise increased productivity and convenience, they also introduce new attack surfaces and amplify existing threats, such as phishing and unauthorized data access. The lack of clear boundaries and insufficient training for AI systems to recognize confidential information further exacerbates these risks, underscoring the urgent need for organizations and developers to prioritize security and privacy in the design and deployment of AI-powered web technologies.

1 months ago
Prompt Injection and Browser-Based AI Security Risks

Prompt Injection and Browser-Based AI Security Risks

The launch of ChatGPT Atlas, an AI-powered web browser with agentic capabilities, has raised significant concerns about prompt injection attacks. As browsers become more integrated with large language models (LLMs), attackers can exploit both direct and indirect prompt injection techniques to manipulate AI agents, potentially causing them to divulge sensitive information or perform unintended actions. The accessibility of such agentic browsers, combined with their ability to automate complex tasks, amplifies the risk landscape for organizations adopting these technologies. Security experts warn that the browser now represents a critical control point for AI security, as it serves as the main interface between users and generative AI systems. The rapid increase in GenAI browser traffic has led to a surge in data security incidents, including inadvertent exposure of confidential information through LLM prompts. Traditional network security measures are often insufficient to address these browser-borne threats, making it imperative for organizations to reassess their security strategies and implement controls specifically designed to mitigate risks associated with AI-powered browsers and prompt injection attacks.

1 months ago
Prompt Poaching and Injection Threats in AI Browser Extensions and Agents

Prompt Poaching and Injection Threats in AI Browser Extensions and Agents

Browser extensions, particularly those from web analytics companies like Similarweb, have been found to engage in 'prompt poaching' by capturing and exfiltrating user conversations with AI chat platforms. The Similarweb extension, installed by over a million users, was discovered to collect not only clickstream data but also sensitive AI prompts and responses, significantly escalating privacy risks. This data collection is often enabled through remote configuration updates that allow the extension to scrape targeted web pages and monitor user interactions with AI tools, raising concerns about the exploitation of browser extensions as a vector for harvesting private information. In parallel, OpenAI has responded to the growing threat of prompt injection attacks against its ChatGPT Atlas browser agent by deploying new model-level and system-level defenses. Prompt injection attacks involve embedding malicious instructions in web content to manipulate AI agents into performing unintended actions, such as exfiltrating sensitive data. OpenAI's update includes automated red-teaming using reinforcement learning to proactively identify and mitigate sophisticated prompt injection techniques, highlighting the evolving security landscape for AI-powered browser tools and the need for robust defenses against both extension-based data harvesting and adversarial prompt manipulation.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.

Privacy and Security Risks in AI-Powered Browser Agents | Mallory