Mallory

AI and Non-Human Identity Sprawl Expands IAM Attack Surface

Tags: ai-platform-security · identity-authentication-vulnerability · leaked-secret-api-key · build-pipeline-compromise
Updated April 30, 2026 at 12:02 PM · 5 sources

Reporting and commentary warn that AI-driven non-human identities (NHIs) are rapidly increasing the number and turnover of credentials inside enterprise IAM programs, amplifying long-standing weaknesses such as credential sprawl, unclear ownership, and inconsistent lifecycle controls. The Cloud Security Alliance’s findings highlight that many organizations treat AI identities like traditional service accounts or API keys, causing them to inherit existing governance gaps while adding new scale and speed pressures as identities are created programmatically, distributed across environments, and used continuously.

CSO Online describes the operational drivers behind the surge—microservices, Kubernetes auto-scaling, CI/CD pipelines (e.g., GitHub Actions), and infrastructure-as-code (e.g., Terraform) generating large volumes of short-lived tokens and service principals—then argues that agentic AI further accelerates risk because these identities may be authorized to execute commands, move data, and change configurations autonomously. The net risk emphasized is that over-privileged AI agents and other NHIs can create breach conditions that may not resemble traditional intrusion, instead appearing as “normal” automated activity due to excessive permissions and weak visibility into post-authentication behavior.
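The turnover pressure described above can be made concrete with a credential-age audit. The sketch below is a minimal illustration over a hypothetical NHI inventory; the field names, identity names, and lifetime thresholds are invented for the example and are not drawn from any cited report or vendor API:

```python
from datetime import datetime, timedelta, timezone

# Illustrative lifetime ceilings: short-lived tokens should rotate within
# hours, while even "long-lived" keys deserve a hard upper bound.
MAX_TOKEN_AGE = timedelta(hours=1)
MAX_KEY_AGE = timedelta(days=90)

def flag_stale_credentials(inventory, now=None):
    """Return identities whose credentials exceed their allowed lifetime.

    `inventory` is a list of dicts with hypothetical fields:
    identity (str), kind ("token" or "api_key"), issued_at (aware datetime).
    """
    now = now or datetime.now(timezone.utc)
    limits = {"token": MAX_TOKEN_AGE, "api_key": MAX_KEY_AGE}
    stale = []
    for cred in inventory:
        age = now - cred["issued_at"]
        # Unknown credential kinds get a zero allowance and are always flagged,
        # forcing someone to classify them.
        if age > limits.get(cred["kind"], timedelta(0)):
            stale.append(cred["identity"])
    return stale
```

Running a sweep like this on a schedule turns "inconsistent lifecycle controls" into a measurable backlog of credentials awaiting rotation or revocation.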

Timeline

  1. Apr 30, 2026

    Anthropic withholds Mythos model after it finds thousands of vulnerabilities

    Anthropic reportedly decided not to publicly release its Mythos model after the system discovered thousands of previously unknown vulnerabilities in major operating systems and web browsers. The decision was cited as an example of the dual-use security risks posed by advanced AI agents.

  2. Feb 3, 2026

    Report says rapid AI agent adoption is creating an identity security crisis

    Reporting on the CSA findings, outlets said organizations are deploying autonomous AI agents without sufficient governance, creating many agentic identities with access to sensitive data and little oversight. The coverage emphasized a widening preparedness gap around AI identity threats and the risks posed by these poorly governed non-human identities.

  3. Feb 2, 2026

    Cloud Security Alliance report highlights AI identity governance weaknesses

    The Cloud Security Alliance published findings in "The State of Non-Human Identity and AI Security" showing that organizations often manage AI identities like other non-human identities, causing them to inherit weaknesses such as credential sprawl, unclear ownership, and inconsistent lifecycle controls. The report said AI systems continuously create and use identities across environments, outpacing legacy IAM tools and leaving security teams with poor visibility and slow revocation processes.

  4. Feb 2, 2026

    One Identity predicts a major breach tied to an over-privileged AI agent by 2026

    CSO Online cited a One Identity prediction that by 2026 a major breach would be traced to an over-privileged AI agent. The warning framed agentic AI as a growing identity risk because its actions may appear to be normal authorized system behavior.

  5. Feb 1, 2026

    Obsidian reports breaches tied to compromised machine identities

    Obsidian Security reported in February 2026 that many organizations had already suffered breaches linked to compromised machine identities such as service accounts, API keys, certificates, bots, and AI agents. The research also found that only a small minority had fully automated lifecycle management for these identities, underscoring operational security gaps.
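A first step toward the automated lifecycle management Obsidian found largely missing is a periodic sweep for orphaned or idle machine identities. The following is a hedged sketch, not a real product's API: the inventory format, field names, and the 30-day idle cutoff are all assumptions made for illustration:

```python
from datetime import datetime, timedelta, timezone

def find_orphaned_or_idle(identities, max_idle_days=30, now=None):
    """Flag machine identities with no assigned owner or no recent authentication.

    `identities` is a list of dicts with hypothetical fields:
    name (str), owner (str or None), last_auth (aware datetime).
    Returns (name, [reasons]) pairs for identities needing review.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    findings = []
    for ident in identities:
        reasons = []
        if not ident.get("owner"):
            reasons.append("no-owner")      # unclear ownership: nobody to approve revocation
        if ident["last_auth"] < cutoff:
            reasons.append("idle")          # unused identity: candidate for revocation
        if reasons:
            findings.append((ident["name"], reasons))
    return findings
```

Feeding the findings into a ticketing or revocation workflow is what moves an organization from the "small minority" with automated lifecycle management toward closing the gap the research describes.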


Related Stories

AI Agents and Non-Human Identities as Emerging Cybersecurity Risks


The rapid adoption of AI agents, bots, and other non-human identities (NHIs) is fundamentally reshaping the cybersecurity landscape, introducing new attack surfaces and operational challenges for enterprises. As organizations increasingly rely on automation and AI-driven processes, NHIs are being granted broad access to critical systems, often without the same oversight or security controls applied to human users. This shift has led to heightened risks such as over-permissioned accounts, static credentials, and insufficient monitoring, making NHIs attractive targets for cybercriminals seeking to exploit gaps in identity and access management (IAM).

Security leaders are urged to implement zero-trust principles, least-privilege access, automated credential rotation, and robust secrets management to mitigate these risks and prevent privileged account compromise. The complexity of managing AI agents is further compounded by the need for effective governance and the challenge of balancing control with operational simplicity in security operations centers (SOCs).

Experts emphasize that adversaries are increasingly "logging in, not breaking in," leveraging weaknesses in identity controls—especially those related to AI agents—to gain unauthorized access. The cybersecurity workforce must adapt, as AI-driven automation is expected to take over high-volume, repetitive tasks, requiring new skills in AI security and orchestration. Organizations are advised to treat every human, workload, and agent as a managed identity, enforce phishing-resistant multi-factor authentication, and continuously monitor for anomalous permission changes or session hijacking to stay ahead of evolving threats.
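One of the recommendations above, continuously monitoring for anomalous permission changes, can be sketched as a simple drift check between an approved baseline and currently observed grants. This is an illustration only; the identity and permission names are invented, and a real deployment would pull both sides from an IAM provider's API:

```python
def detect_permission_drift(baseline, current):
    """Return, per identity, permissions present now but absent from the baseline.

    `baseline` and `current` map identity name -> iterable of permission strings.
    Identities appearing only in `current` are treated as entirely new grants.
    """
    drift = {}
    for ident, perms in current.items():
        added = set(perms) - set(baseline.get(ident, ()))
        if added:
            drift[ident] = sorted(added)
    return drift
```

Alerting on the returned delta is one concrete way to catch an over-privileged AI agent whose activity would otherwise look like normal authorized behavior.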

1 month ago
AI-Driven Risks and Identity Abuse in Modern Enterprise Security


Recent analyses highlight that the most significant cybersecurity losses in 2025 stemmed from identity and OAuth token abuse, rather than high-profile zero-day vulnerabilities. Attackers leveraged AI to scale social engineering, phishing, and OAuth consent abuse, leading to widespread incidents across logistics, manufacturing, and other sectors. The rapid adoption of AI in enterprise environments has expanded the attack surface, with 99% of surveyed organizations experiencing at least one attack on their AI systems in the past year. The proliferation of GenAI-assisted coding has further outpaced security teams’ ability to secure production environments, compounding risk.

Security leaders are increasingly concerned about the misalignment between teams, tools, and workflows, which exacerbates the impact of these AI-driven threats. Effective management of non-human identities (NHIs), such as machine credentials and tokens, is now critical, especially in cloud and SaaS environments. The need for robust governance, visibility, and context-aware controls is underscored by the growing sophistication of attacks targeting both human and machine identities. Organizations are urged to prioritize identity and secrets management, as well as to adapt their security strategies to address the evolving risks introduced by AI and automation.

1 month ago
Enterprise Concerns Over Securing Non-Human Identities


Organizations are increasingly challenged by the rapid proliferation of non-human identities (NHIs), such as service accounts, API keys, digital certificates, access tokens, automated bots, IoT devices, and AI agents. More than half of enterprises surveyed express uncertainty about their ability to secure these NHIs, highlighting a significant gap between the adoption of automated digital identities and the maturity of tools and processes to protect them. The complexity and diversity of NHIs, which now form the backbone of modern digital infrastructure, have outpaced traditional identity and access management strategies, leaving organizations exposed to new risks.

The exponential growth of NHIs, especially in cloud-native and automated environments, has led to a situation where non-human accounts vastly outnumber human users. This expansion, combined with issues like "secrets sprawl"—where credentials are scattered across codebases and pipelines—creates opportunities for account hijacking, privilege escalation, and lateral movement by threat actors. Security experts emphasize the need for unified visibility, consistent identity policies, and automated responses to address these risks, particularly as NHIs and AI agents become more integral to business operations and the attack surface continues to expand.
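The "secrets sprawl" problem described above lends itself to basic pattern scanning over codebases and pipeline configs. The sketch below is deliberately minimal: the rule set is illustrative only, and production scanners such as gitleaks or trufflehog use far larger pattern libraries plus entropy analysis:

```python
import re

# Illustrative patterns only. The AWS and GitHub prefixes are well-known public
# formats; the "generic" rule is a loose heuristic and will miss many keys.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Wiring a scan like this into a pre-commit hook or CI stage catches credentials before they land in the codebases and pipelines where sprawl takes hold.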

3 weeks ago
