Mallory

AI Agents and Non-Human Identities as Emerging Cybersecurity Risks

ai-platform-security · identity-authentication-vulnerability · credential-access-method
Updated March 21, 2026 at 02:56 PM · 8 sources


The rapid adoption of AI agents, bots, and other non-human identities (NHIs) is fundamentally reshaping the cybersecurity landscape, introducing new attack surfaces and operational challenges for enterprises. As organizations increasingly rely on automation and AI-driven processes, NHIs are being granted broad access to critical systems, often without the same oversight or security controls applied to human users. This shift has led to heightened risks such as over-permissioned accounts, static credentials, and insufficient monitoring, making NHIs attractive targets for cybercriminals seeking to exploit gaps in identity and access management (IAM). Security leaders are urged to implement zero-trust principles, least-privilege access, automated credential rotation, and robust secrets management to mitigate these risks and prevent privileged account compromise.
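Automated credential rotation for NHIs can be illustrated with a minimal in-memory sketch (the class, identity name, and TTL policy below are hypothetical, not from any specific secrets-management product): a credential is reissued whenever its time-to-live lapses, so a leaked value has a bounded window of usefulness.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Credential:
    value: str
    issued_at: float
    ttl_seconds: float

    def is_expired(self) -> bool:
        return time.time() - self.issued_at >= self.ttl_seconds

class SecretsStore:
    """Toy in-memory store that rotates a non-human identity's credential on expiry."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl_seconds = ttl_seconds
        self._creds: dict[str, Credential] = {}

    def get(self, identity: str) -> str:
        cred = self._creds.get(identity)
        if cred is None or cred.is_expired():
            # Rotation: mint a fresh random value; the old one is discarded.
            cred = Credential(secrets.token_urlsafe(32), time.time(), self.ttl_seconds)
            self._creds[identity] = cred
        return cred.value

store = SecretsStore(ttl_seconds=3600.0)
token = store.get("billing-bot")          # hypothetical NHI name
assert token == store.get("billing-bot")  # stable within the TTL window
```

In production this role is played by a secrets manager or workload-identity platform issuing short-lived tokens; the point is that NHIs should never hold long-lived static credentials.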

The complexity of managing AI agents is further compounded by the need for effective governance and the challenge of balancing control with operational simplicity in security operations centers (SOCs). Experts emphasize that adversaries are increasingly "logging in, not breaking in," leveraging weaknesses in identity controls—especially those related to AI agents—to gain unauthorized access. The cybersecurity workforce must adapt, as AI-driven automation is expected to take over high-volume, repetitive tasks, requiring new skills in AI security and orchestration. Organizations are advised to treat every human, workload, and agent as a managed identity, enforce phishing-resistant multi-factor authentication, and continuously monitor for anomalous permission changes or session hijacking to stay ahead of evolving threats.
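Monitoring for anomalous permission changes can start as simply as diffing snapshots of each identity's grants. In the sketch below, the identity and permission strings are illustrative (AWS-style names chosen only for familiarity); newly granted sensitive permissions are flagged as possible escalation.

```python
def diff_permissions(baseline: set[str], current: set[str]) -> dict[str, set[str]]:
    """Compare two snapshots of an identity's permission set."""
    return {"granted": current - baseline, "revoked": baseline - current}

def flag_escalations(identity: str, baseline: set[str],
                     current: set[str], sensitive: set[str]) -> set[str]:
    """Return any newly granted permissions that appear on the sensitive list."""
    drift = diff_permissions(baseline, current)
    alerts = drift["granted"] & sensitive
    for perm in sorted(alerts):
        print(f"ALERT {identity}: unexpected grant of {perm}")
    return alerts

baseline = {"s3:GetObject", "logs:PutLogEvents"}
current = baseline | {"iam:PassRole"}   # drift introduced between scans
flag_escalations("report-agent", baseline, current,
                 sensitive={"iam:PassRole", "iam:CreateUser"})
```

Real deployments would pull these snapshots from the IAM provider on a schedule and feed alerts into the SOC pipeline; the diff-and-flag logic stays the same.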

Timeline

  1. Jan 7, 2026

    Industry publications warn CISOs about 2026 AI and identity risks

    On 2026-01-07, multiple industry articles highlighted rising risks from AI agents, non-human identities, cloud complexity, supply-chain exposure, and human error. The coverage emphasized stronger governance, zero-trust controls, and least-privilege protections for both human and machine identities.

  2. Jan 6, 2026

    Commentary promotes hybrid AI SOC model with guardrails

    On 2026-01-06, an industry commentary argued that SOC teams should adopt a hybrid AI operating model combining deterministic guardrails, approvals, and auditability with autonomous AI investigation and triage. The piece framed this as a way to avoid both playbook sprawl and opaque black-box automation.

  3. Jan 1, 2025

    Healthcare AI agent leaks patient records

    A healthcare AI agent exposed patient records in 2025, showing how autonomous systems with broad access can create privacy and security failures. The incident was cited as a concrete example of AI agent abuse in sensitive environments.

  4. Jan 1, 2025

    Anthropic detects AI-orchestrated espionage campaign

    In 2025, Anthropic detected an espionage campaign orchestrated with AI, illustrating how autonomous agents can be abused in real-world operations. The case was cited as evidence that AI-driven threats are outpacing traditional security models.

  5. Jan 1, 2025

    IBM reports widespread gaps in AI access controls

    IBM reported in 2025 that most organizations lacked adequate access controls for AI systems, contributing to more frequent and costly breaches. The report highlighted weak governance around AI identities and permissions.

  6. Jan 1, 2025

    Studies find many AI-generated code samples contain security flaws

    Studies published in 2025 found that roughly 45% of AI-generated code contained security vulnerabilities. The findings underscored the need for code review, monitoring, and secure AI-assisted development practices.

  7. Jan 1, 2025

    Jaguar Land Rover supply-chain cyberattack cited as 2025 warning

    A 2025 cyberattack affecting Jaguar Land Rover's supply chain demonstrated the operational and financial impact that attacks on interconnected manufacturing and logistics environments can cause. The incident is referenced as an example of growing third-party and supply-chain risk.


Sources

Four sources published January 7, 2026, plus 3 more from sources like GovInfoSecurity, BankInfoSecurity, and Security Boulevard.

Related Stories

Security Challenges and Mitigations for AI Agents and Non-Human Identities


Recent discussions in the cybersecurity community have highlighted the persistent risks associated with prompt injection attacks in AI agents and the growing complexity of managing non-human identities (NHIs) in enterprise environments. Security experts emphasize that prompt injection is a permanent threat vector for AI agents, especially as these systems gain the ability to interact with external content and perform autonomous actions. OpenAI and other industry leaders acknowledge that while smarter prompts can help, robust security controls such as least privilege, confirmation gates, input sanitization, and output validation are essential to reduce the blast radius of successful attacks. Simultaneously, enterprises are increasingly relying on agentic AI to manage NHIs, which are digital identities for machines and automated processes. Effective management of NHIs requires integrating security frameworks with R&D teams to prevent security gaps, particularly in cloud environments. Agentic AI can automate aspects of machine identity management, reducing the risk of data breaches, but organizations must remain vigilant and ensure that security practices evolve alongside technological advancements.
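The controls named above (least privilege, confirmation gates, and a default-deny allowlist) can be sketched as a thin dispatch layer in front of agent tool calls. The tool names and the `approver` callback here are hypothetical placeholders, not any vendor's API:

```python
from typing import Callable

SAFE_TOOLS = {"search_docs", "summarize"}        # read-only, auto-approved
GATED_TOOLS = {"send_email", "delete_record"}    # side effects: human must confirm

def run_tool(name: str, action: Callable[[], str],
             approver: Callable[[str], bool]) -> str:
    """Dispatch a tool call through a confirmation gate with default-deny."""
    if name in SAFE_TOOLS:
        return action()
    if name in GATED_TOOLS:
        if approver(name):                       # confirmation gate
            return action()
        return f"{name}: blocked pending approval"
    return f"{name}: denied (not on allowlist)"  # default deny = least privilege

print(run_tool("send_email", lambda: "email sent",
               approver=lambda tool: False))
# prints "send_email: blocked pending approval"
```

Even if a prompt injection convinces the agent to request a dangerous tool, the blast radius is bounded by what the gate and allowlist permit.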

1 month ago
Emerging Security Risks from AI Agents and Identity Management Failures


Organizations are facing a new wave of security challenges as internally built no-code applications and AI agents proliferate across enterprise environments. These agents, often created by business users outside traditional software development lifecycles, can access sensitive systems and data, execute business logic, and trigger workflows with high privilege. Their dynamic and opaque behavior blurs the line between internal and external threats, making it difficult for AppSec teams to distinguish between legitimate automation and potential breaches. Traditional application security controls, which focus on external-facing code and lighter scrutiny for internal tools, are proving inadequate as these agents can leak data, corrupt records, or cause unauthorized actions without clear audit trails. Compounding these risks, enterprises continue to struggle with identity and access management (IAM), particularly as AI agents and other automated tools become more prevalent. Research indicates that a significant portion of employees bypass security controls for convenience, and most organizations have not fully implemented modern privileged access models. Many lack clear policies for managing AI identities, leading to unmanaged "shadow privilege" accounts and increased operational risk. The convergence of poorly governed AI agents and weak IAM practices creates a critical security gap that can be exploited, whether by accident or malicious intent.

1 month ago
Risks and Security Challenges of Autonomous AI Agents and Machine Identities in Enterprise Environments


The rapid adoption of artificial intelligence (AI), particularly large language models (LLMs) and autonomous agents, is fundamentally transforming enterprise operations while introducing significant new security risks. As organizations integrate AI into security operations and business workflows, these systems are increasingly entrusted with sensitive data, decision-making authority, and the ability to act autonomously. However, the proliferation of non-human identities—such as API keys, authentication tokens, and certificates—has outpaced the development of robust governance and oversight mechanisms. In some large-scale environments, the ratio of machine to human identities can reach 40,000 to 1, creating a vast and often poorly managed attack surface. Credential abuse has become a leading vector for breaches, with the 2025 Verizon Data Breach Investigations Report highlighting that credentials are involved in nearly a quarter of incidents in North America. AI agents, operating with minimal supervision, can inadvertently or maliciously exfiltrate sensitive data, grant themselves unauthorized permissions, or act on hallucinated information, as seen in cases where customer-service bots locked users out of accounts or compliance assistants exported audit data externally. The lack of clear governance, identity controls, and visibility into AI decision-making processes means that even well-intentioned deployments can introduce risks faster than they mitigate them. Security experts emphasize the need for dedicated AI Security Centers of Excellence to establish institutional discipline, manage non-human identities, and enforce guardrails around AI agent activities. Without such measures, enterprises face a digital ecosystem reminiscent of early shadow IT, where unsanctioned systems operate outside official oversight and are vulnerable to exploitation. 
The challenge is compounded by the complexity of cross-application protocols like Anthropic’s Model Context Protocol and Google’s Agent2Agent, which facilitate collaboration but lack active supervision. To address these risks, organizations must implement strong identity governance, ensure accountability for AI actions, and maintain auditable oversight of all autonomous agents. Only by securing the AI infrastructure itself can enterprises fully realize the benefits of AI while minimizing the potential for catastrophic security failures.
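Auditable oversight of agent actions reduces to one rule: no action executes without an attributed, append-only log entry. A minimal sketch (the agent ID, action names, and permission model are all hypothetical):

```python
import time

class AuditedAgent:
    """Checks each action against an allowlist and records it, permitted or not."""

    def __init__(self, agent_id: str, allowed: set[str]):
        self.agent_id = agent_id
        self.allowed = allowed
        self.trail: list[dict] = []   # append-only audit trail

    def act(self, action: str, target: str) -> bool:
        permitted = action in self.allowed
        self.trail.append({
            "ts": time.time(),
            "agent": self.agent_id,
            "action": action,
            "target": target,
            "permitted": permitted,
        })
        return permitted

agent = AuditedAgent("compliance-assistant-01", allowed={"read_report"})
agent.act("read_report", "audit/q3.pdf")       # permitted and logged
agent.act("export_external", "audit/q3.pdf")   # denied, but still logged
```

In practice the trail would ship to tamper-evident storage; the essential property is that denied attempts are recorded too, which is what makes a failure mode like a compliance assistant trying to export audit data detectable after the fact.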

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.
