Skip to main content
Mallory

Enterprise Security Risks From Agentic and Generative AI Deployments

ai-platform-securityai-enabled-threat-activityidentity-impersonation-fraudvoice-social-engineering
Updated March 21, 2026 at 02:20 PM3 sources
Share:
Enterprise Security Risks From Agentic and Generative AI Deployments

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Enterprises are rapidly integrating agentic AI assistants with high-privilege connections to ticketing systems, source code repositories, chat platforms, and cloud dashboards, enabling actions such as opening pull requests, querying internal databases, and triggering automated workflows with limited human oversight. Reporting citing Cisco’s State of AI Security 2026 indicates many organizations are moving forward with these deployments despite low security readiness, expanding exposure across model interfaces, tool integrations, and the broader supply chain.

Multiple sources highlight that attacker techniques against AI systems are maturing, particularly prompt injection/jailbreaks and multi-turn attacks that exploit session state, memory, and tool-calling to drive unsafe actions or data leakage. Separately, adversaries are using generative AI for deepfake-enabled social engineering (including video/voice impersonation to bypass identity verification and authorize sensitive actions) and for scalable brand impersonation via malicious ad campaigns; one widely cited example involved Arup, where a deepfake video call led to authorization of a fraudulent HK$200 million transfer. Overall, the material is primarily risk and threat reporting (not a single incident), emphasizing that AI systems’ contextual behavior and privileged integrations create new control gaps that traditional security testing and defenses may not detect.

Timeline

  1. Feb 23, 2026

    Industry reporting highlights broader criminal use of generative AI

    By February 2026, industry reporting described attackers using generative AI to scale social engineering, brand impersonation, CAPTCHA evasion, voice-biometrics attacks, and emerging attacks on AI agents and MCP-connected infrastructure. Experts noted that criminals were primarily using AI to automate language- and workflow-heavy tasks rather than consistently discovering novel vulnerabilities end-to-end.

  2. Feb 23, 2026

    Security guidance shifts toward MCP-aware and PQC-aware AI testing

    By February 2026, security guidance emphasized that traditional scanning and fuzzing were insufficient for stateful, tool-using Model Context Protocol environments and recommended testing full conversation flows, tool-calling logic, and data-exfiltration paths. The same guidance also called for validating post-quantum cryptography deployments against downgrade, fallback, and performance-failure scenarios.

  3. Feb 23, 2026

    Cisco reports low enterprise readiness for securing agentic AI

    Cisco's State of AI Security 2026 found that most organizations planned to deploy agentic AI into business functions, but only 29% said they were prepared to secure those deployments. The finding underscored a widening gap between adoption and security readiness.

  4. Feb 23, 2026

    Malicious GitHub issue via MCP server hijacks agent and exfiltrates data

    A documented attack showed that a malicious GitHub issue could embed hidden instructions delivered through a Model Context Protocol server, causing an AI agent to be hijacked and private repository data to be exfiltrated. The case highlighted indirect prompt injection risks in tool-connected agent environments.

  5. Jan 1, 2025

    Deepfake video call scam tricks Arup employee into large transfer

    In the Arup fraud case, a finance worker approved a fraudulent HK$200 million transfer after joining a videoconference that used deepfake impersonation of the company's UK-based CFO. The incident became a prominent real-world example of generative AI-enabled business fraud.

  6. Jan 1, 2025

    Prompt-injection and jailbreak attacks mature across AI models in 2025

    By 2025, prompt-injection and jailbreak techniques had advanced significantly, with multi-turn attacks reportedly achieving up to 92% success across eight open-weight models. This marked a broader escalation in practical offensive techniques against enterprise AI systems.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security leaders are warning that **AI agents are increasingly operating as “digital employees”** inside enterprise workflows—triaging alerts, coordinating investigations, and moving work across security tools—often with **broad permissions and limited governance**. The core risk highlighted is that organizations are deploying high-authority agents like plug-ins (reused service accounts, overbroad roles, weak oversight), creating fast-acting operators that can be manipulated and that lack the contextual judgment and policy awareness expected of human staff. Related commentary also raises concerns about **AI-to-AI communication** and “non-human-readable” behaviors that could reduce auditability and complicate investigations and control enforcement. In parallel, public examples show how quickly AI can accelerate **vulnerability discovery**: Microsoft Azure CTO Mark Russinovich reported using *Claude Opus 4.6* to decompile decades-old Apple II 6502 machine code and identify multiple issues, underscoring that similar techniques could be applied to **embedded/legacy firmware at scale**. Anthropic has also cautioned that advanced models can find high-severity flaws even in heavily tested codebases, reinforcing the likelihood that both defenders and attackers will leverage AI for faster bug-finding. Separate enterprise IT coverage notes that organizations are **reallocating budgets toward AI** by consolidating tools and renegotiating contracts, which can indirectly increase security exposure if cost-cutting reduces overlapping controls or if AI adoption outpaces governance and identity/access management maturity.

Yesterday
Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise

Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise

The rapid integration of agentic AI systems and AI-generated code into enterprise environments is fundamentally transforming business operations, productivity, and the cybersecurity landscape. AI agents are now embedded in daily workflows, automating tasks and augmenting human capabilities, but their lack of human intuition and ethical judgment introduces new attack surfaces and vulnerabilities. Security experts warn that the rush to deploy agentic AI—autonomous systems capable of executing complex, multistep tasks—without adequate governance or oversight is creating significant risks, including the "confused deputy" problem, where AI agents can be manipulated to misuse their privileges. The proliferation of AI-generated code further compounds these risks, as studies show a high prevalence of design flaws and security vulnerabilities in code produced by large language models, leading to increased technical debt and instability in software delivery. Organizations face mounting challenges in managing accountability and liability as AI systems act with greater autonomy. The lack of robust AI governance policies leaves enterprises exposed to breaches and regulatory risks, with a majority of organizations unprepared to manage the proliferation of "shadow AI." The surge in AI-driven web traffic is disrupting traditional business models in publishing and ecommerce, while adversaries exploit the gap between human and machine decision-making. Security leaders emphasize the need for human oversight, strong identity governance, and comprehensive risk management strategies to address the dual-front of human-AI business risk and to ensure that AI adoption does not outpace the organization’s ability to secure and govern these powerful new tools.

1 months ago
AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure

AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure

Enterprises’ rapid deployment of **AI and agentic AI** is increasingly creating measurable security and business risk, including direct exposure of sensitive personal data and downstream impacts on risk transfer. A widely cited example involved McDonald’s *McHire* applicant-screening platform (built by *Paradox.ai*), where researchers reported a trivial backend credential weakness (`123456` as both username and password) and no MFA, potentially exposing data tied to roughly **64 million** applicants; the incident is being used by insurers and risk teams as evidence that AI adoption is moving faster than security and governance, contributing to tighter cyber-insurance language, higher premiums, and **AI-related exclusions**. Separate reporting also highlighted that “plug-and-play” AI is unrealistic at enterprise scale, with organizations increasingly needing custom integration and operational ownership rather than relying on off-the-shelf tools. Threat reporting during the same period reinforced that AI is expanding both attacker capability and the attack surface: researchers described **Pakistan-linked APT36** using AI coding tools to generate high volumes of low-quality malware variants (including in less common languages) and to leverage legitimate cloud services for command-and-control, complicating detection. Additional research flagged **AI-themed browser extensions** (Chrome/Edge) that impersonate legitimate tools and can harvest LLM chat histories and browsing activity, underscoring the risk of “shadow AI” and unvetted add-ons. In parallel, routine threat-intelligence summaries continued to track major incidents (e.g., ransomware and data breaches) alongside AI-enabled tactics, indicating that AI risk is becoming intertwined with broader enterprise security exposure rather than remaining a standalone technology concern.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.