Skip to main content
Mallory

Security Risks and Vulnerabilities in AI-Powered Developer Tools and Extensions

ai-platform-securityextension-plugin-hijackidentity-authentication-vulnerabilitylateral-movement-method
Updated March 21, 2026 at 02:57 PM2 sources
Share:
Security Risks and Vulnerabilities in AI-Powered Developer Tools and Extensions

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Security researchers have identified significant risks in AI-powered developer tools and browser extensions, highlighting how new AI capabilities can introduce novel attack vectors. In the case of Anthropic's Claude Chrome extension, researchers at Zenity Labs demonstrated that the extension, which allows the AI to browse and interact with websites on behalf of users, can expose sensitive data and perform actions using the user's credentials. This creates opportunities for indirect prompt injection attacks, where malicious instructions embedded in web content can manipulate the AI to perform harmful actions such as deleting files or sending unauthorized messages. The extension's persistent login state and ability to access private services like Google Drive and Slack further amplify the risk, as attackers could leverage the AI's access for lateral movement within organizations.

Similarly, security concerns have been raised about AI-powered integrated development environments (IDEs) forked from Microsoft VSCode, such as Cursor and Windsurf. These IDEs recommend extensions that do not exist in the OpenVSX registry, leaving unclaimed namespaces that threat actors could exploit to distribute malicious code. Researchers from Koi Security reported that some vendors responded by removing vulnerable recommendations, but others have yet to act. These findings underscore the urgent need for both vendors and users to reassess the security implications of integrating AI into development and productivity tools, as traditional security models may not adequately address the unique risks posed by AI-driven automation and extension ecosystems.

Timeline

  1. Jan 5, 2026

    Koi Security discloses VS Code fork extension supply-chain risk

    Koi Security publicly warned that AI-powered IDEs forked from VS Code, including Cursor, Windsurf, Google Antigravity, and Trae, could expose users to malicious recommended extensions because of unclaimed OpenVSX namespaces. Koi also said it had preemptively registered some affected namespaces and uploaded placeholder extensions with Eclipse Foundation coordination to prevent abuse.

  2. Jan 5, 2026

    Researchers disclose Claude Chrome extension security risks

    Zenity Labs reported that Anthropic's Claude Chrome extension could inherit users' authenticated web sessions and be abused through indirect prompt injection, unsafe actions, and JavaScript execution, potentially exposing sensitive data.

  3. Jan 1, 2026

    Google marks VS Code fork extension issue as fixed

    Google marked the recommended-extension namespace exposure issue as resolved in its IDE after removing the affected recommendations.

  4. Dec 26, 2025

    Google removes 13 risky extension recommendations

    Google removed 13 extension recommendations from its Antigravity IDE that could have mapped to unclaimed OpenVSX publisher namespaces.

  5. Dec 18, 2025

    Anthropic releases beta Claude Chrome extension

    Anthropic released the beta version of its Claude Chrome extension, enabling the AI assistant to browse and interact with websites on behalf of users.

  6. Dec 1, 2025

    Cursor fixes unsafe recommended extension mappings

    Cursor remediated the issue in its VS Code fork by fixing the problematic recommended extension behavior after Koi Security's disclosure.

  7. Nov 30, 2025

    Koi Security reports VS Code fork extension issue to vendors

    In late November 2025, Koi Security notified Google, Windsurf, and Cursor that several VS Code-based IDEs recommended extensions that did not exist in the OpenVSX registry, creating a supply-chain takeover risk.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

Malicious Extension Supply Chain Risk in AI-Powered VS Code Forks

Malicious Extension Supply Chain Risk in AI-Powered VS Code Forks

A critical security flaw has been identified in several popular AI-powered integrated development environments (IDEs) forked from Visual Studio Code, including Cursor, Windsurf, and Google Antigravity. These IDEs, which collectively serve millions of developers, were found to recommend extensions that do not exist in their supported OpenVSX marketplace. Because these extensions' namespaces were unclaimed, attackers could register them and upload malicious packages, which would then be presented as official recommendations to users. Security researchers demonstrated the risk by claiming these namespaces and uploading harmless placeholder extensions, which were still installed by over 1,000 developers, highlighting the high level of trust placed in automated extension suggestions. The vulnerability arises from inherited configuration files that point to Microsoft's extension marketplace, which these forks cannot legally use, leading to reliance on OpenVSX. Both file-based and software-based recommendations can trigger the installation prompt for these non-existent extensions, such as when opening an `azure-pipelines.yaml` file or detecting PostgreSQL on a system. The incident underscores a significant supply chain risk, as malicious actors could exploit this gap to distribute harmful code, potentially resulting in the theft of credentials, secrets, or source code. Vendor responses varied, with some IDEs addressing the issue promptly after disclosure, while others were slower to react.

1 months ago
Critical Vulnerabilities in AI-Powered Coding Tools Enable Data Exfiltration and Remote Code Execution

Critical Vulnerabilities in AI-Powered Coding Tools Enable Data Exfiltration and Remote Code Execution

Security researchers have disclosed over 30 vulnerabilities in a range of AI-powered Integrated Development Environments (IDEs) and coding assistants, collectively named 'IDEsaster.' These flaws, affecting popular tools such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, allow attackers to chain prompt injection techniques with legitimate IDE features to achieve data exfiltration and remote code execution (RCE). The vulnerabilities exploit the fact that AI agents integrated into these environments can autonomously perform actions, bypassing traditional security boundaries and enabling attackers to hijack context, trigger unauthorized tool calls, and execute arbitrary commands. At least 24 of these vulnerabilities have been assigned CVE identifiers, highlighting the widespread and systemic nature of the risk. The research emphasizes that the integration of AI agents into development workflows introduces new attack surfaces, as these agents often operate with elevated privileges and insufficient threat modeling. Notably, the issues differ from previous prompt injection attacks by leveraging the AI agent's ability to activate legitimate IDE features for malicious purposes. Additional reporting confirms that critical CVEs have been issued for these tools, and broader industry analysis warns that nearly half of all AI-generated code contains exploitable flaws, with a particularly high vulnerability rate in Java. The findings underscore the urgent need for organizations using AI-driven development tools to reassess their security postures and apply available patches to mitigate the risk of data theft and RCE attacks.

1 months ago
Security Risks in AI Coding Assistants: Prompt Injection and Dependency Hijacking

Security Risks in AI Coding Assistants: Prompt Injection and Dependency Hijacking

Security researchers have identified significant risks in AI-powered coding assistants, including Microsoft's Copilot and Claude Code, stemming from both prompt injection vulnerabilities and the potential for dependency hijacking via third-party plugins. In the case of Copilot, a security engineer disclosed several issues such as prompt injection leading to system prompt leaks, file upload policy bypasses using base64 encoding, and command execution within Copilot's isolated environment. Microsoft, however, has dismissed these findings as limitations of AI rather than true security vulnerabilities, sparking debate within the security community about the definition and handling of such risks. Separately, analysis of Claude Code highlights the dangers of plugin marketplaces, where third-party 'skills' can be enabled to automate tasks like dependency management. A technical review demonstrated how a seemingly benign plugin could redirect dependency installations to attacker-controlled sources, resulting in the silent introduction of trojanized libraries into development environments. These risks are compounded by the persistent nature of enabled plugins, which can continue to influence agent behavior and potentially compromise projects over time, underscoring the need for greater scrutiny and security controls in AI development tools.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.

Security Risks and Vulnerabilities in AI-Powered Developer Tools and Extensions | Mallory