Skip to main content
Mallory

Agentic AI Adoption and Emerging Security Risks in AI Agents

ai-platform-securitycybersecurity-regulationstandards-framework-update
Updated March 21, 2026 at 04:02 PM4 sources
Share:
Agentic AI Adoption and Emerging Security Risks in AI Agents

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Enterprises and public-sector organizations are accelerating adoption of AI agents and generative AI to automate knowledge work and software delivery, with guidance increasingly framed as a management and governance problem rather than a purely technical one. Commentary on agentic AI in software development describes agents as autonomous decision loops operating within guardrails (goal decomposition, tool selection, execution, observation, and iteration), enabled by mature CI/CD automation and API-driven infrastructure. Separate reporting highlights empirical findings that AI-generated code has grown to nearly 30% of code by late 2024 and is associated with an estimated ~4% productivity lift, with gains concentrated among more experienced developers despite higher usage among less-experienced staff.

Security and procurement implications are emerging alongside this adoption. Research on agentic tool chain attacks warns that AI agents’ “reasoning layer” and natural-language tool metadata become an attack surface, enabling techniques such as tool poisoning, tool shadowing, and “rugpull” behavior that can lead to covert data leakage or unauthorized actions; the risk is amplified when tools are centralized via architectures like the Model Context Protocol (MCP), where compromise of a shared tool server can propagate malicious behavior across many agents. In the US federal context, agencies are signaling demand for AI tools that deliver operational value while meeting requirements for security, transparency, and responsible use, and the General Services Administration is also tightening contractor cybersecurity expectations for work involving CUI by requiring alignment with NIST SP 800-171 (and select 800-172 controls), including MFA, encryption, vulnerability remediation, and removal of end-of-life components, with independent assessments as part of authorization and ongoing monitoring.

Timeline

  1. Feb 1, 2026

    SecuritySenses recommends guarded rollout of agentic AI in development

    SecuritySenses described growing real-world use of agentic AI across software development and advised organizations to keep humans in the loop, apply strict guardrails and logging, and start with low-risk use cases before expanding autonomy.

  2. Jan 31, 2026

    ZDNET outlines criteria for delegating work to AI agents

    ZDNET reported guidance from Ethan Mollick that organizations should decide whether to delegate tasks to AI agents using three measures: human baseline time, probability of success, and total AI process time including review.

  3. Jan 31, 2026

    ZDNET reports senior developers gain most from generative AI

    ZDNET summarized the Complexity Science Hub findings that less-experienced developers use generative AI more often, but measurable productivity and exploration gains accrue mainly to senior developers who are better able to evaluate AI output.

  4. Jan 30, 2026

    CrowdStrike describes 'agentic tool chain attacks' against AI agents

    CrowdStrike published research defining 'agentic tool chain attacks' as threats targeting the reasoning layer of AI agents through tool descriptions, metadata, and parameter construction rather than traditional code boundaries. The report detailed tool poisoning, tool shadowing, and rugpull attacks, especially in Model Context Protocol environments.

  5. Dec 31, 2024

    AI-generated code reaches nearly 30% by end of 2024

    The Complexity Science Hub study reported that AI-generated code rose sharply to nearly 30% by the end of 2024, alongside an estimated productivity increase of close to 4% for programmers overall.

  6. Jan 1, 2022

    CSH study begins tracking rise in AI-generated code

    A Complexity Science Hub study found AI-generated code accounted for about 5% of code in 2022, establishing an early baseline for generative AI use in software development.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise

Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise

The rapid integration of agentic AI systems and AI-generated code into enterprise environments is fundamentally transforming business operations, productivity, and the cybersecurity landscape. AI agents are now embedded in daily workflows, automating tasks and augmenting human capabilities, but their lack of human intuition and ethical judgment introduces new attack surfaces and vulnerabilities. Security experts warn that the rush to deploy agentic AI—autonomous systems capable of executing complex, multistep tasks—without adequate governance or oversight is creating significant risks, including the "confused deputy" problem, where AI agents can be manipulated to misuse their privileges. The proliferation of AI-generated code further compounds these risks, as studies show a high prevalence of design flaws and security vulnerabilities in code produced by large language models, leading to increased technical debt and instability in software delivery. Organizations face mounting challenges in managing accountability and liability as AI systems act with greater autonomy. The lack of robust AI governance policies leaves enterprises exposed to breaches and regulatory risks, with a majority of organizations unprepared to manage the proliferation of "shadow AI." The surge in AI-driven web traffic is disrupting traditional business models in publishing and ecommerce, while adversaries exploit the gap between human and machine decision-making. Security leaders emphasize the need for human oversight, strong identity governance, and comprehensive risk management strategies to address the dual-front of human-AI business risk and to ensure that AI adoption does not outpace the organization’s ability to secure and govern these powerful new tools.

1 months ago
Enterprise Security Risks From Agentic and Generative AI Deployments

Enterprise Security Risks From Agentic and Generative AI Deployments

Enterprises are rapidly integrating **agentic AI** assistants with high-privilege connections to ticketing systems, source code repositories, chat platforms, and cloud dashboards, enabling actions such as opening pull requests, querying internal databases, and triggering automated workflows with limited human oversight. Reporting citing Cisco’s *State of AI Security 2026* indicates many organizations are moving forward with these deployments despite low security readiness, expanding exposure across model interfaces, tool integrations, and the broader supply chain. Multiple sources highlight that attacker techniques against AI systems are maturing, particularly **prompt injection/jailbreaks** and multi-turn attacks that exploit session state, memory, and tool-calling to drive unsafe actions or data leakage. Separately, adversaries are using generative AI for **deepfake-enabled social engineering** (including video/voice impersonation to bypass identity verification and authorize sensitive actions) and for scalable brand impersonation via malicious ad campaigns; one widely cited example involved Arup, where a deepfake video call led to authorization of a fraudulent HK$200 million transfer. Overall, the material is primarily risk and threat reporting (not a single incident), emphasizing that AI systems’ contextual behavior and privileged integrations create new control gaps that traditional security testing and defenses may not detect.

1 months ago
Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security leaders are warning that **AI agents are increasingly operating as “digital employees”** inside enterprise workflows—triaging alerts, coordinating investigations, and moving work across security tools—often with **broad permissions and limited governance**. The core risk highlighted is that organizations are deploying high-authority agents like plug-ins (reused service accounts, overbroad roles, weak oversight), creating fast-acting operators that can be manipulated and that lack the contextual judgment and policy awareness expected of human staff. Related commentary also raises concerns about **AI-to-AI communication** and “non-human-readable” behaviors that could reduce auditability and complicate investigations and control enforcement. In parallel, public examples show how quickly AI can accelerate **vulnerability discovery**: Microsoft Azure CTO Mark Russinovich reported using *Claude Opus 4.6* to decompile decades-old Apple II 6502 machine code and identify multiple issues, underscoring that similar techniques could be applied to **embedded/legacy firmware at scale**. Anthropic has also cautioned that advanced models can find high-severity flaws even in heavily tested codebases, reinforcing the likelihood that both defenders and attackers will leverage AI for faster bug-finding. Separate enterprise IT coverage notes that organizations are **reallocating budgets toward AI** by consolidating tools and renegotiating contracts, which can indirectly increase security exposure if cost-cutting reduces overlapping controls or if AI adoption outpaces governance and identity/access management maturity.

2 days ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.