Mallory

Security Risks from Unmanaged AI and Citizen Developer Automation in Enterprises

Tags: ai-platform-security, unmanaged-asset-discovery, cloud-misconfiguration, leaked-secret-api-key, standards-framework-update
Updated March 21, 2026 at 03:24 PM · 6 sources


The rapid adoption of AI tools and no-code/low-code platforms by business users, often called "citizen developers," is creating significant blind spots in enterprise security. Organizations are seeing a surge in applications and automations built outside traditional IT oversight, introducing vulnerabilities such as hardcoded credentials, injection flaws, and unauthorized data access. Security teams struggle to maintain visibility and control, as these shadow applications can far outnumber those developed by IT professionals. The trend is compounded by widespread use of unapproved AI tools, so-called "Shadow AI": studies show that more than 80% of employees have used such tools, with regular use highest among executives. The absence of clear corporate AI policies, together with employees' overconfidence in their own ability to judge risk, further increases the likelihood of data exposure and compliance failures.
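One way security teams can start reclaiming visibility over hardcoded credentials in citizen-built automations is to scan exported app and flow definitions for secret-shaped strings. The sketch below is a minimal, illustrative example; the pattern names and rules are assumptions for demonstration, and real deployments would use a dedicated scanner (such as gitleaks or trufflehog) with far more comprehensive rule sets.

```python
import re
from pathlib import Path

# Illustrative detection rules only; production scanners ship many more.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(source_name, text):
    """Return (source, rule, match) tuples for every suspected secret."""
    findings = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((source_name, rule, match.group(0)))
    return findings

def scan_tree(root):
    """Scan exported app/flow definitions (JSON, YAML, text) under a directory."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".json", ".yaml", ".yml", ".txt"}:
            findings.extend(scan_text(str(path), path.read_text(errors="ignore")))
    return findings
```

Run against a directory of exported low-code app definitions, `scan_tree` yields a triage list that can be fed into an existing ticketing or secrets-rotation workflow.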

Industry reports and expert commentary highlight that as AI and automation become standard in business operations, the attack surface available to cybercriminals expands. Small and medium enterprises are particularly exposed, with a notable share reporting financial or operational losses from cyber incidents. Security experts recommend that organizations respond by automating security oversight, updating policies, and giving users actionable guidance. Proactive measures, such as adopting frameworks like the ACSC's Essential Eight and deploying monitoring solutions, are essential to mitigate the risks of democratized development and the proliferation of unsanctioned AI tools in the workplace.
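The "automated security oversight" recommended above can begin with something as simple as mining web-proxy or DNS logs for traffic to known AI services that are not on the sanctioned list. The sketch below assumes hypothetical domain lists; in practice these would come from a CASB or secure web gateway's categorization feed rather than being hardcoded.

```python
from collections import Counter

# Hypothetical lists for illustration; real deployments would source
# these from a CASB / secure web gateway categorization feed.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com",
}
SANCTIONED = {"copilot.microsoft.com"}

def shadow_ai_report(proxy_records):
    """proxy_records: iterable of (user, domain) pairs from proxy/DNS logs.
    Returns per-(user, domain) visit counts for unsanctioned AI tools."""
    hits = Counter()
    for user, domain in proxy_records:
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
            hits[(user, domain)] += 1
    return hits
```

The resulting counts give security teams a discovery baseline: who is using which unsanctioned tool and how often, which supports the guidance-over-bans approach described elsewhere in this report.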

Timeline

  1. Nov 14, 2025

    Coverage highlights shadow IT and shadow AI risks across industries

    Reporting on shadow IT and shadow AI described their spread across sectors including healthcare, insurance, banking, airlines, and utilities, emphasizing compliance, visibility, and attack-surface risks. The coverage also cited IDC and IEEE research showing large portions of enterprise technology use occur outside IT oversight.

  2. Nov 13, 2025

    UpGuard report finds widespread employee shadow AI use

    An UpGuard report found that more than 80% of employees had used unauthorized AI tools and about half used them regularly. The study also found especially high regular use among executives, marketing, and sales staff, along with weak awareness of corporate AI policies.

  3. Jan 1, 2025

    IBM report links Shadow AI breaches to higher costs

    IBM's 2025 Cost of a Data Breach Report found that 20% of breaches were linked to unauthorized AI use and that incidents involving Shadow AI cost an average of $670,000 more than other breaches. This established a measurable business impact for unsanctioned AI use in organizations.


Sources

November 14, 2025 at 12:00 AM (2 sources)

1 more from sources like scworld

Related Stories

Shadow AI and the Risks of Unapproved AI Tool Adoption in Enterprises


Organizations are facing a growing challenge as employees increasingly adopt AI tools and agents without formal IT approval, a phenomenon known as shadow AI. This unsanctioned use of AI—ranging from chatbots and large language models to low-code agents—enables employees to automate workflows and make decisions outside traditional governance structures. The lack of oversight and visibility into these autonomous systems exposes enterprises to significant risks, as sensitive data may be processed or shared through unvetted platforms, and decisions may be influenced by tools that operate beyond established security controls. Recent research highlights that 73% of employees use AI for work, yet over a third do not consistently follow company policies, and many are unaware of existing guidelines. About 27% admit to using unapproved AI tools, often browser-based and free, making them difficult for IT to monitor. This shadow AI trend compounds the broader issue of shadow IT and SaaS sprawl, where employees bypass official channels to access tools that better meet their needs. Security teams are advised to shift from outright bans to strategies focused on discovery, communication, and oversight to manage these risks effectively.

1 month ago
Emerging Security Risks from AI Agents and Identity Management Failures


Organizations are facing a new wave of security challenges as internally built no-code applications and AI agents proliferate across enterprise environments. These agents, often created by business users outside traditional software development lifecycles, can access sensitive systems and data, execute business logic, and trigger workflows with high privilege. Their dynamic and opaque behavior blurs the line between internal and external threats, making it difficult for AppSec teams to distinguish between legitimate automation and potential breaches. Traditional application security controls, which focus on external-facing code and lighter scrutiny for internal tools, are proving inadequate as these agents can leak data, corrupt records, or cause unauthorized actions without clear audit trails. Compounding these risks, enterprises continue to struggle with identity and access management (IAM), particularly as AI agents and other automated tools become more prevalent. Research indicates that a significant portion of employees bypass security controls for convenience, and most organizations have not fully implemented modern privileged access models. Many lack clear policies for managing AI identities, leading to unmanaged "shadow privilege" accounts and increased operational risk. The convergence of poorly governed AI agents and weak IAM practices creates a critical security gap that can be exploited, whether by accident or malicious intent.

1 month ago
AI-Driven Software Development and Security Risks in the Enterprise


Organizations are rapidly integrating AI into software development pipelines, with AI-generated code now present in every surveyed environment and a significant portion of codebases produced by AI tools. Security leaders report increased risk due to limited visibility into where and how AI is used, the proliferation of shadow AI, and the introduction of logic flaws or insecure patterns by autonomous agents. The lack of oversight and formal controls over AI-generated code and tools has expanded the attack surface, making product security and supply chain integrity top priorities for 2026. Industry experts emphasize the need for responsible adoption of AI-driven security tools, highlighting the importance of evaluation, deployment, and governance to maintain control and transparency. New frameworks, such as the AI Vulnerability Scoring System (AIVSS), are being developed to address the unique, non-deterministic risks posed by agentic and autonomous AI systems, which traditional models like CVSS cannot adequately capture. The shift to runtime application security and the management of non-human identities further underscore the evolving landscape, as organizations seek to balance innovation with robust security practices.

1 month ago
