Mallory

Security Risks and Controls for AI-Powered Coding Assistants and Agents

Tags: ai-platform-security, extension-plugin-hijack
Updated March 21, 2026 at 03:16 PM · 2 sources


The rapid adoption of AI-powered agents and coding assistants has introduced new security challenges, particularly as these systems gain deeper access to sensitive enterprise environments and proprietary codebases. Recent research and technical reviews highlight the need for robust information flow control mechanisms to prevent unauthorized data exposure and ensure that AI agents act within defined security boundaries. As AI agents evolve from passive tools to autonomous actors capable of executing workflows, approving access, and interacting with APIs, understanding and modeling their execution and decision-making processes becomes critical for effective risk management.
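The information flow control described above is often implemented as a default-deny policy gate that the agent runtime consults before every tool call. The sketch below is purely illustrative; the tool names, targets, and policy rules are assumptions for the example, not any vendor's API:

```python
# Minimal sketch of a default-deny policy gate for agent tool calls.
# Tool names, targets, and the POLICY table are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    tool: str    # e.g. "read_file", "http_request"
    target: str  # path, URL, or API endpoint the tool would touch

# Allowlist-style policy: each tool maps to the target prefixes it may use.
POLICY = {
    "read_file": ("/workspace/",),                      # project files only
    "http_request": ("https://api.internal.example/",), # approved endpoint
}

def is_permitted(action: Action) -> bool:
    """Allow an action only if its target falls inside an explicitly
    allowed prefix for that tool; unknown tools are denied by default."""
    prefixes = POLICY.get(action.tool, ())
    return any(action.target.startswith(p) for p in prefixes)
```

A gate like this gives auditors a single choke point to log and review, which is what makes the agent's decision-making process modelable in the first place.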

A focused security assessment of the Cursor AI coding assistant revealed three key vulnerabilities related to its deep integration with development workflows and privileged access to code repositories. The review emphasized the importance of ethical hacking and red teaming to uncover risks in third-party AI tools, especially those embedded in widely used platforms like Visual Studio Code. Security practitioners are encouraged to adopt formal models and reusable frameworks for auditing AI agents, ensuring that both the underlying technology and its operational context are thoroughly evaluated for potential threats.

Timeline

  1. Dec 1, 2025

    StackAware publicly discloses Cursor security findings

    StackAware published its findings on three security risks in Cursor and advised organizations to review repository permissions, raise employee awareness, and independently verify vendor security claims. The disclosure highlighted the risks of deeply integrated AI development tools handling sensitive code and credentials.

  2. Dec 1, 2025

    Cursor says reported behaviors are working as intended

    According to StackAware, Cursor responded to the disclosures by treating the reported issues as intended product behavior rather than vulnerabilities. StackAware said no substantive product changes were made following the disclosure process.

  3. Dec 1, 2025

    StackAware reports three security issues to Cursor

    StackAware responsibly disclosed three findings to Cursor: cross-user access to custom documentation definitions, default sharing of cloud agents via GitHub repository permissions, and a chained authentication abuse that could enable token replay and account takeover. The report noted the potential for sensitive data exposure and social engineering attacks.

  4. Dec 1, 2025

    StackAware begins ethical hacking assessment of Cursor

    StackAware conducted a security review of Cursor, an AI-powered code editor and assistant, to evaluate risks created by its access to codebases and remote AI services. The assessment identified multiple issues involving data exposure, shared cloud agent visibility, and authentication flow abuse.
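The token-replay risk reported in the disclosure has a standard defensive counterpart: treating token identifiers as single-use. Below is a minimal stdlib sketch with a hypothetical in-memory store; a production system would back this with a shared, expiring cache rather than a dictionary:

```python
# Illustrative replay guard: each token ID ("jti") is accepted at most
# once, and only before its expiry. The in-memory store is a stand-in
# for a real shared cache.
import time

class ReplayGuard:
    def __init__(self):
        self._seen = {}  # jti -> expiry timestamp

    def accept(self, jti, exp, now=None):
        """Accept a token exactly once, and only before it expires."""
        now = time.time() if now is None else now
        # Drop bookkeeping for tokens that are already expired; the
        # expiry check below rejects them regardless.
        self._seen = {j: e for j, e in self._seen.items() if e > now}
        if exp <= now or jti in self._seen:
            return False  # expired, or already presented once
        self._seen[jti] = exp
        return True
```

The point of the sketch is the invariant, not the storage: a captured token that is presented a second time fails the `jti in self._seen` check even though its signature and expiry are still valid.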


Sources

December 1, 2025 at 09:14 PM
December 1, 2025 at 12:00 AM

Related Stories

Security Risks and Best Practices in the Adoption of AI Coding Assistants


The rapid adoption of AI coding assistants is fundamentally transforming software development practices across the technology industry. Major companies such as Coinbase, Accenture, Box, Duolingo, Meta, and Shopify have begun mandating the use of AI coding assistants for their engineering teams, with some executives even taking drastic measures such as terminating employees who resist upskilling in AI. This widespread shift is driven by the significant productivity gains that AI coding assistants offer, enabling developers to accelerate deployment and experiment with new approaches.

However, the integration of these tools introduces substantial new security challenges, particularly for software supply chain security. Security researchers warn that AI-generated code often relies on existing libraries and codebases, which may contain old, vulnerable, or low-quality software. As a result, previously fixed vulnerabilities can be reintroduced into new projects, and new issues may arise because AI-generated code lacks context-specific considerations. The phenomenon known as "vibe coding," where developers quickly adopt AI-generated code without fully understanding its implications, further exacerbates these risks.

AI models trained on insecure or outdated data can perpetuate flaws, making it difficult for human reviewers to catch every potential vulnerability. The attack surface expands significantly as AI coding assistants become integral to the development lifecycle, potentially increasing risk by an order of magnitude. Security practitioners emphasize the need for new secure coding strategies tailored to the era of AI-assisted development, and effective communication between security teams and developers is critical to ensure that AI tools are adopted safely and that their benefits do not come at the expense of security.
Organizations must rethink their development lifecycles, incorporating rigorous review processes and updated security protocols to address the unique challenges posed by AI-generated code. The transition to AI-driven development is inevitable, but it requires a proactive approach to risk management. Security teams must lead the way in establishing best practices, fostering collaboration, and ensuring that the adoption of AI coding assistants enhances rather than undermines organizational security. The industry is at a pivotal moment where the balance between productivity and security must be carefully managed. As AI coding assistants become non-negotiable tools for developers, the responsibility falls on both security professionals and engineers to adapt and safeguard the software supply chain. The future of secure software development will depend on how effectively organizations can integrate AI tools while mitigating the associated risks.
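The reintroduction risk described above can be mechanized as a pre-merge gate that checks dependency pins against an advisory list. The advisory data below is fabricated for illustration; a real gate would query an advisory database, for example via `pip-audit`:

```python
# Illustrative pre-merge check for known-vulnerable dependency pins.
# KNOWN_BAD is fabricated example data, not a real advisory feed.
KNOWN_BAD = {"leftpadlib": {"1.0.0", "1.0.1"}}  # package -> bad versions

def vulnerable_pins(requirements_lines):
    """Return (package, version) pairs pinned to a known-bad release."""
    hits = []
    for line in requirements_lines:
        line = line.split("#")[0].strip()  # ignore comments and blanks
        if "==" not in line:
            continue
        name, version = line.split("==", 1)
        if version.strip() in KNOWN_BAD.get(name.strip().lower(), set()):
            hits.append((name.strip(), version.strip()))
    return hits
```

Run in CI against the diff of a pull request, a check like this catches the specific failure mode the researchers describe: an assistant re-pinning a dependency to a release that was already known to be vulnerable.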

2 days ago
Security Risks in AI Coding Assistants: Prompt Injection and Dependency Hijacking


Security researchers have identified significant risks in AI-powered coding assistants, including Microsoft's Copilot and Claude Code, stemming from both prompt injection vulnerabilities and the potential for dependency hijacking via third-party plugins. In the case of Copilot, a security engineer disclosed several issues such as prompt injection leading to system prompt leaks, file upload policy bypasses using base64 encoding, and command execution within Copilot's isolated environment. Microsoft, however, has dismissed these findings as limitations of AI rather than true security vulnerabilities, sparking debate within the security community about the definition and handling of such risks.

Separately, analysis of Claude Code highlights the dangers of plugin marketplaces, where third-party 'skills' can be enabled to automate tasks like dependency management. A technical review demonstrated how a seemingly benign plugin could redirect dependency installations to attacker-controlled sources, resulting in the silent introduction of trojanized libraries into development environments. These risks are compounded by the persistent nature of enabled plugins, which can continue to influence agent behavior and potentially compromise projects over time, underscoring the need for greater scrutiny and security controls in AI development tools.
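The index-redirection pattern described in the Claude Code analysis can be caught by allowlisting installer indexes before any install runs. A minimal sketch, assuming a single approved index (the allowlist value is a placeholder):

```python
# Illustrative guard against redirected dependency installs: reject any
# requirements-file index options outside an approved allowlist.
ALLOWED_INDEXES = {"https://pypi.org/simple"}  # placeholder policy value

def redirected_indexes(requirements_lines):
    """Return --index-url / --extra-index-url values not on the allowlist."""
    bad = []
    for line in requirements_lines:
        line = line.strip()
        for flag in ("--index-url", "--extra-index-url"):
            if line.startswith(flag):
                # Accept both "--flag url" and "--flag=url" forms.
                url = line[len(flag):].lstrip("= ").rstrip("/")
                if url not in ALLOWED_INDEXES:
                    bad.append(url)
    return bad
```

Hash-pinning every dependency is a complementary control: even if a plugin swaps the index, a package whose hash does not match the lockfile fails to install.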

1 month ago
Security and Risk Implications of AI Tools in the Enterprise


Organizations are rapidly adopting artificial intelligence (AI) tools to enhance cybersecurity operations, streamline workflows, and improve productivity, but this trend introduces significant new risks and challenges. Reports indicate that cybersecurity professionals with AI security skills are in high demand, as companies seek to leverage AI for vulnerability management, threat detection, and automation of security tasks. The integration of AI into security teams’ arsenals is accelerating, with agentic AI tools becoming increasingly common for both defensive and operational purposes. However, the proliferation of AI-powered applications, such as AI notetakers in virtual meetings, raises concerns about data privacy, compliance, and the potential for sensitive information exposure. Many AI notetaking tools operate outside official enterprise systems, often lacking robust security controls such as SOC 2 certification, GDPR compliance, or strong encryption, making them vulnerable to data breaches and mishandling. The risk is compounded by the rapid spread of these tools within organizations, sometimes without proper vetting by legal, security, or procurement teams. Transcripts generated by these applications can be stored in third-party systems, increasing the risk of unauthorized access or legal discoverability.

Security leaders are advised to develop clear policies and governance frameworks to manage the use of AI tools, ensuring that only approved applications with adequate security measures are deployed. The evolving landscape of AI in cybersecurity also includes increased merger and acquisition activity, as companies seek to acquire innovative AI security capabilities. Industry analysis highlights the need for continuous evaluation of AI models, such as DeepSeek, and the security implications of open-source agent frameworks like OpenAI’s AgentKit.
The impact of AI-generated code on application security is another emerging concern, as automated code generation can introduce vulnerabilities if not properly reviewed. As AI becomes more embedded in business processes, organizations must balance the benefits of automation and efficiency with the imperative to safeguard sensitive data and maintain regulatory compliance. Security teams are encouraged to stay informed about the latest trends in AI security, invest in upskilling staff, and implement layered defenses to mitigate the unique risks posed by AI-driven tools. The convergence of AI and cybersecurity is reshaping the threat landscape, requiring proactive risk management and strategic investment in secure AI adoption.

1 month ago
