Skip to main content
Mallory

Enterprise Risk From Unsanctioned and Over-Permissive AI Tooling

ai-platform-securityinsider-threat-incidentcloud-misconfigurationoperational-disruption
Updated March 21, 2026 at 02:50 PM2 sources
Share:
Enterprise Risk From Unsanctioned and Over-Permissive AI Tooling

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Security leaders are warning that rapid adoption of AI tools—often outside formal governance—creates expanding blind spots and increases the likelihood of data leakage and operational incidents. A webcast discussion framed “Shadow AIT” as the AI-era evolution of shadow IT, highlighting that AI capabilities are frequently embedded in everyday SaaS features and browser extensions, making it difficult for organizations to accurately inventory where AI is in use and what data is being shared. The panel cited a cautionary example involving Replit where insufficient controls around an AI agent reportedly contributed to a production database deletion, underscoring that agentic workflows can translate governance gaps into real outages.

Separately, reporting on Google Vertex AI raised concerns that permissions and access control design in AI platforms can amplify insider-risk scenarios if roles, entitlements, and auditability are not tightly managed—particularly where AI services can access or act on sensitive datasets. Commentary-style content also broadly discusses “cognitive AI” and future-facing architectures, but without tying to a specific incident or disclosure; the actionable takeaway across the relevant items is to treat AI enablement as an identity, data-governance, and monitoring problem (inventory AI usage, constrain permissions, and instrument logging) rather than a purely productivity tooling decision.

Timeline

  1. Jan 16, 2026

    Replit example cited where AI agent deleted a production database

    During the webcast, Chas Clawson referenced an incident at Replit in which a production database was deleted after control was effectively handed to an AI agent without sufficient safeguards. The example was used to illustrate the real-world operational risk of agentic AI systems lacking proper controls and accountability.

  2. Jan 16, 2026

    Webcast discusses rise of 'Shadow AIT' and related security risks

    Enterprise Security Weekly host Adrian Sanabria and Sumo Logic speakers Chas Clawson and David Girvin presented a webcast on 'Shadow AIT,' describing how unapproved or embedded AI tool use can create data leakage, operational, and security risks. The session recommended pragmatic governance, approved tool lists, threat modeling, and stronger logging and audit trails rather than blanket AI bans.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Entities

Affected Products

Related Stories

Enterprise AI Security Risks Driven by Shadow AI Adoption and Rapid Exploitability

Enterprise AI Security Risks Driven by Shadow AI Adoption and Rapid Exploitability

Multiple reports highlighted escalating **enterprise AI security risk** driven by rapid adoption, weak governance, and widespread *shadow AI* use. Zscaler research reported that **90% of tested enterprise AI systems** had critical vulnerabilities discoverable in under 90 minutes, with a **median 16 minutes** to first critical failure, enabling fast data loss and defense bypass; the same reporting noted sharp growth in AI/ML activity across thousands of apps and rising corporate data transfers into AI tools such as *ChatGPT* and *Grammarly*. Separately, CSO Online reported that **roughly half of employees** use unsanctioned AI tools and that enterprise leaders are significant contributors, reinforcing the risk that sensitive data and workflows are being exposed outside approved controls. Governance and control gaps were further underscored by coverage of **NIST AI guidance** pushing organizations to expand cybersecurity risk management to AI systems, and by reporting on **AI infrastructure abuse** (criminals hijacking/reselling AI infrastructure) and **Hugging Face infrastructure** being abused to distribute an **Android RAT** at scale. Several other items in the set were not about enterprise AI risk specifically, including a **ShinyHunters vishing campaign**, **critical RCE flaws in the n8n automation platform**, an article on the **EU’s alternative to CVE** and potential fragmentation, a piece on a startup’s Linux security overhaul, and an opinion column on human risk management; these are separate topics and should not be treated as part of the same AI-risk story.

1 months ago
Shadow AI and the Risks of Unapproved AI Tool Adoption in Enterprises

Shadow AI and the Risks of Unapproved AI Tool Adoption in Enterprises

Organizations are facing a growing challenge as employees increasingly adopt AI tools and agents without formal IT approval, a phenomenon known as shadow AI. This unsanctioned use of AI—ranging from chatbots and large language models to low-code agents—enables employees to automate workflows and make decisions outside traditional governance structures. The lack of oversight and visibility into these autonomous systems exposes enterprises to significant risks, as sensitive data may be processed or shared through unvetted platforms, and decisions may be influenced by tools that operate beyond established security controls. Recent research highlights that 73% of employees use AI for work, yet over a third do not consistently follow company policies, and many are unaware of existing guidelines. About 27% admit to using unapproved AI tools, often browser-based and free, making them difficult for IT to monitor. This shadow AI trend compounds the broader issue of shadow IT and SaaS sprawl, where employees bypass official channels to access tools that better meet their needs. Security teams are advised to shift from outright bans to strategies focused on discovery, communication, and oversight to manage these risks effectively.

1 months ago
Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security leaders are warning that **AI agents are increasingly operating as “digital employees”** inside enterprise workflows—triaging alerts, coordinating investigations, and moving work across security tools—often with **broad permissions and limited governance**. The core risk highlighted is that organizations are deploying high-authority agents like plug-ins (reused service accounts, overbroad roles, weak oversight), creating fast-acting operators that can be manipulated and that lack the contextual judgment and policy awareness expected of human staff. Related commentary also raises concerns about **AI-to-AI communication** and “non-human-readable” behaviors that could reduce auditability and complicate investigations and control enforcement. In parallel, public examples show how quickly AI can accelerate **vulnerability discovery**: Microsoft Azure CTO Mark Russinovich reported using *Claude Opus 4.6* to decompile decades-old Apple II 6502 machine code and identify multiple issues, underscoring that similar techniques could be applied to **embedded/legacy firmware at scale**. Anthropic has also cautioned that advanced models can find high-severity flaws even in heavily tested codebases, reinforcing the likelihood that both defenders and attackers will leverage AI for faster bug-finding. Separate enterprise IT coverage notes that organizations are **reallocating budgets toward AI** by consolidating tools and renegotiating contracts, which can indirectly increase security exposure if cost-cutting reduces overlapping controls or if AI adoption outpaces governance and identity/access management maturity.

2 days ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.

Enterprise Risk From Unsanctioned and Over-Permissive AI Tooling | Mallory