Skip to main content
Mallory

CometJacking Prompt Injection Attack in Perplexity's Comet AI Browser

ai-platform-securitydata-exfiltration-methodidentity-authentication-vulnerabilityai-enabled-threat-activity
Updated March 21, 2026 at 03:26 PM2 sources
Share:
CometJacking Prompt Injection Attack in Perplexity's Comet AI Browser

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

A new attack technique called CometJacking has been identified, targeting Perplexity's Comet AI browser through prompt injection via URL parameters. By embedding malicious instructions in the collection parameter of a URL, attackers can direct the AI agent to access and exfiltrate sensitive data from connected services such as Gmail and Google Calendar, without requiring user credentials or interaction. LayerX researchers demonstrated that the AI browser could be manipulated to encode and send confidential information to an external endpoint, bypassing existing security checks and highlighting a fundamental vulnerability in current LLM-based systems.

The rise of AI-driven browsers and generative AI tools in the enterprise environment has significantly increased the risk of data exfiltration, with copy-paste actions into AI prompts now surpassing traditional file transfers as the primary vector for corporate data leaks. According to LayerX's Browser Security Report 2025, 77% of employees paste data into AI prompts, and a substantial portion of this activity occurs through personal accounts, making governance and monitoring more challenging. The report underscores the urgent need for organizations to implement stricter controls over AI tool usage, monitor clipboard and prompt activity for sensitive data, and adapt data loss prevention strategies to address the evolving threat landscape posed by AI-enabled browsers and prompt injection attacks like CometJacking.

Timeline

  1. Nov 11, 2025

    SC Media reports copy-paste surpasses file transfer for data exfiltration

    SC Media reported that copy-paste activity had overtaken file transfer as the leading corporate data exfiltration vector, reflecting a shift in how sensitive data is leaving organizations. The reference provides no earlier underlying event date, so the publication date is used.

  2. Nov 11, 2025

    Reports highlight prompt injection risks in AI browsers

    A Schneier on Security post discussed prompt injection in AI browsers, indicating growing attention to this attack class and its implications for browser-based AI assistants. No earlier event date is provided in the reference, so the publication date is used as the event date.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Entities

Organizations

Sources

November 11, 2025 at 07:08 AM

Related Stories

CometJacking Prompt Injection Vulnerability in Perplexity's Comet AI Browser

CometJacking Prompt Injection Vulnerability in Perplexity's Comet AI Browser

Security researchers at LayerX have identified a critical security weakness in the Comet AI browser developed by Perplexity, which is susceptible to a novel prompt injection attack dubbed 'CometJacking.' The vulnerability allows attackers to craft malicious URLs that, when processed by the Comet browser, inject hidden instructions capable of accessing sensitive data from connected services such as email and calendar applications. The attack does not require user credentials or direct interaction, making it particularly dangerous and easy to exploit. By embedding malicious prompts in web pages, comment sections, or even code accessed by the browser, cybercriminals can instruct Comet to exfiltrate data residing in memory or accessible through its integrations. For example, if a user asks Comet to rewrite an email or schedule a meeting, the browser could be manipulated to extract and transmit the content and metadata of those communications to an external server controlled by the attacker. LayerX demonstrated a proof of concept where the browser was instructed to encode sensitive data in base64 and send it to a remote endpoint, successfully bypassing Perplexity's existing safeguards. The browser's agentic AI capabilities, which allow it to autonomously perform tasks like managing emails, shopping, and booking tickets, increase the potential impact of this vulnerability. Despite being notified of the issue in late August, Perplexity responded that the reported weakness was 'not applicable' and considered it beyond their control to remediate. Security experts warn that the rapid adoption of the Comet browser, combined with its integration with various personal and enterprise services, amplifies the risk of widespread data exfiltration if the vulnerability is exploited in the wild. The attack leverages the 'collection' parameter in the URL query string to deliver the malicious prompt, instructing the AI agent to consult its memory and connected services rather than simply searching the web. This method allows attackers to bypass direct data transmission restrictions implemented by Perplexity, as the AI agent itself is manipulated to perform the exfiltration. The vulnerability highlights the broader risks associated with agentic AI browsers that have deep integrations with user data and services. Security researchers emphasize the need for more robust safeguards and prompt injection defenses in AI-powered browsers to prevent similar attacks. The incident also raises questions about vendor responsibility and the challenges of securing AI-driven automation tools. Organizations using the Comet browser are advised to review their security posture and consider the risks of integrating sensitive services with agentic AI tools. The case underscores the importance of continuous security assessment and responsible disclosure in the rapidly evolving landscape of AI-powered applications. As the CometJacking technique requires only a crafted URL, it could be weaponized in phishing campaigns or embedded in seemingly innocuous web content, increasing the attack surface for potential victims. The ongoing debate between researchers and the vendor over the severity and remediability of the issue further complicates the response and mitigation efforts.

1 months ago
Prompt Injection Risks in Agentic AI and AI-Powered Browsers

Prompt Injection Risks in Agentic AI and AI-Powered Browsers

Security researchers reported that **prompt injection** is enabling practical attacks against *agentic AI* systems that have access to tools and user data, and argued the industry is underestimating the threat. A proposed framing, **“promptware,”** describes malicious prompts as a malware-like execution mechanism that can drive an LLM to take actions via its connected tools—potentially leading to **data exfiltration**, cross-system propagation, IoT manipulation, or even **arbitrary code execution**, depending on the permissions and integrations available. Trail of Bits disclosed results from an adversarial security assessment of Perplexity’s *Comet* browser, showing how prompt injection techniques could be used to **extract private information from authenticated sessions (e.g., Gmail)** by abusing the browser’s AI assistant and its tool access (such as reading page content, using browsing history, and interacting with the browser). Their threat-model-driven testing emphasized that agentic assistants can treat external web content as instructions unless it is explicitly handled as **untrusted input**, and they published recommendations intended to reduce prompt-injection-driven data paths between the user’s local trust zone (profiles/cookies/history) and vendor-hosted agent/chat services.

1 months ago
Prompt Injection and Persistent Memory Exploits in AI-Powered Browsers

Prompt Injection and Persistent Memory Exploits in AI-Powered Browsers

Researchers have identified critical security vulnerabilities in several AI-powered browsers, including OpenAI's Atlas and other emerging platforms such as Comet and Fellou. These browsers, which allow AI agents to perform actions on behalf of users, are susceptible to prompt injection attacks—where hidden or malicious instructions embedded in web content are executed by the AI. In documented cases, attackers were able to hide commands in web pages or images, leading the browser to perform unauthorized actions such as extracting email subject lines and exfiltrating data to attacker-controlled sites, all without user confirmation. A particularly severe exploit targets the persistent memory feature of the ChatGPT Atlas browser, introduced by OpenAI to personalize user experiences. By chaining a cross-site request forgery (CSRF) vulnerability with a memory write, attackers can inject malicious instructions that persist across sessions, devices, and even different browsers. This allows for ongoing compromise, including privilege escalation, malware deployment, and account takeover, unless users manually clear the tainted memory. The persistence and stealth of these attacks significantly elevate the risk profile for users of AI-enabled browsers, highlighting the urgent need for robust security controls and user awareness around prompt injection threats.

3 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.