AI Agent Adoption Outpacing Safety and Governance Controls
Organizations are rapidly expanding their use of AI agents—systems that can execute multi-step tasks with limited human supervision—while governance, safety, and oversight controls lag behind. Deloitte’s State of AI in the Enterprise survey of more than 3,200 business leaders across 24 countries reported that 23% of companies already use AI agents “at least moderately,” a figure projected to rise to 74% within two years, while only about 21% said they have robust safety and oversight mechanisms in place. Separately, commentary warning about AI-enabled intrusion acceleration cited a purported “GTG-1002” campaign in which AI agents allegedly automated most of the intrusion lifecycle and compressed response windows, arguing that traditional SOC processes struggle against autonomous, high-velocity adversary tradecraft.
Multiple other items in the set focus on broader responsible AI and policy concerns rather than a single security incident: an interview-style piece describes how “responsible AI” functions inside a large vendor’s product process, and another report highlights expert concerns about deploying LLM tools in law enforcement workflows (e.g., summarizing body camera transcripts or generating crime scene photo descriptions) given risks like hallucinations and bias. A separate business-leadership article frames cybersecurity and AI as strategic imperatives amid geopolitical instability but does not provide incident-specific or vulnerability-specific details. Overall, the material is best characterized as governance and risk posture coverage around agentic AI rather than a unified, verifiable breach or vulnerability disclosure.
Timeline
Jan 22, 2026
Davos 2026 spotlights prompt injection and AI agent security risks
At the World Economic Forum's 56th Annual Meeting in Davos in January 2026, cybersecurity leaders discussed how AI agents introduce new risks as they are integrated into business operations. Speakers emphasized prompt injection as a major threat and recommended controls such as zero trust, least privilege, and guard agents to monitor AI behavior.
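The controls recommended at Davos can be illustrated with a minimal sketch: a deny-by-default tool allowlist (least privilege) plus a simple pattern-based screen on untrusted text before it reaches an agent. All names here are hypothetical, and real guard agents use far more robust detection than keyword matching.

```python
import re

# Hypothetical least-privilege allowlist: tools not listed are denied.
ALLOWED_TOOLS = {"search_docs", "summarize"}

# Crude prompt-injection indicators; illustrative only, easily bypassed.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
]

def guard_tool_call(tool_name: str) -> bool:
    """Permit only explicitly allowlisted tools (deny by default)."""
    return tool_name in ALLOWED_TOOLS

def screen_untrusted_text(text: str) -> bool:
    """Return True if the text shows none of the known injection phrasings."""
    return not any(p.search(text) for p in INJECTION_PATTERNS)
```

In practice such checks would sit alongside, not replace, architectural controls like isolating agent credentials and scoping each tool's permissions.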
Jan 21, 2026
Deloitte report finds AI agent deployment outpacing safety controls
In January 2026, Deloitte published its latest State of AI in the Enterprise findings, reporting that 23% of companies already use AI agents at least moderately and that the figure is expected to rise to 74% within two years. Only about 21% of respondents said they had robust safety and oversight mechanisms, prompting recommendations for autonomy limits, real-time monitoring, and audit trails.
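Deloitte's recommended controls can be sketched in a few lines: log every agent action to an audit trail and halt for human review once an autonomy limit is reached. The names and the limit value are illustrative assumptions, not from the report.

```python
import time

AUTONOMY_LIMIT = 5  # hypothetical: max autonomous actions before human review

def run_with_audit(actions, execute, log):
    """Execute agent actions, appending each to an audit log;
    stop and escalate once the autonomy limit is hit."""
    for i, action in enumerate(actions):
        if i >= AUTONOMY_LIMIT:
            log.append({"event": "halted_for_review",
                        "pending": len(actions) - i})
            return False  # escalate remaining actions to a human
        log.append({"event": "action", "name": action, "ts": time.time()})
        execute(action)
    return True
```

A production version would write to append-only storage and record tool inputs/outputs, but the shape (log, then gate, then act) is the same.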
Nov 1, 2025
Deloitte surveys global enterprises on AI adoption and governance
In late 2025, Deloitte conducted a global survey of more than 3,200 business and IT leaders across 24 countries on enterprise AI use, including agentic AI adoption, governance, and operational challenges. The survey found access to AI tools expanding quickly while production deployment, safety controls, and governance maturity lagged.
Sep 15, 2025
Chinese group allegedly launches AI-driven GTG-1002 intrusion campaign
In mid-September 2025, an alleged Chinese state-sponsored operation dubbed GTG-1002 reportedly used AI agents to autonomously perform 80–90% of the intrusion lifecycle against about 30 technology, finance, and government entities. The campaign was described as using a commercially available jailbroken model, highlighting the potential for similar capabilities to spread beyond state actors.
Related Stories

AI Adoption and Governance Concerns Amid Emerging Agentic-AI Security Risks
Organizations are accelerating adoption of **generative and agentic AI**, but reporting indicates governance, data readiness, and workforce skills are lagging. A survey of chief data officers cited widespread use of genAI in large enterprises and growing plans to increase **data management** investment, while also flagging that visibility and governance have not kept pace with expanding AI usage and that many employees need upskilling in **data** and **AI literacy** to use AI outputs responsibly. Separately, commentary and reporting highlighted a widening set of AI-related security and societal risks, including concerns about **deepfakes**, privacy, and opaque model behavior, alongside claims of real-world exploitation activity targeting AI-adjacent developer workflows (for example, token theft via compromised automation such as GitHub Actions) and discussion of vulnerabilities affecting AI tooling and agent communication patterns. Other items in the set were primarily newsletter/personal updates or vendor-style announcements and did not provide a single, verifiable incident narrative beyond general AI-and-security trend coverage.
1 month ago
AI Adoption and Misuse Expands Enterprise and Cybercrime Risk
No single incident ties the reporting together; the dominant theme is **AI’s expanding role in both enterprise operations and criminal tradecraft**, alongside broader, non-AI security trend commentary. A Docker-sponsored survey reported by *Help Net Security* says **60% of organizations run AI agents in production**, but **security/compliance is the top scaling barrier (40%)**, with recurring concerns including *prompt injection*, *tool poisoning*, runtime isolation/sandboxing, auditability, and credential/access control in distributed agent systems. Separately, forum-traffic research summarized by *Help Net Security* found cybercriminals increasingly using mainstream and local AI models to support phishing, code generation, and social engineering, with frequent discussion of jailbreaking and the use of stolen/resold premium AI accounts. Several other items are adjacent but not about the same specific story: an ESET article provides **generic guidance** on detecting **AI voice deepfakes** used for fraud; an Ars Technica piece covers **copyright/data memorization** risks in LLMs; and multiple outlets publish broader security trend or opinion content (quantum preparedness, ransomware targeting manufacturing, Romanian warnings about ransomware aligning with Russian hybrid aims, ATM jackpotting increases, and a Check Point retrospective). Some entries are primarily **commentary, historical analogy, newsletters, or how-to recon guidance** rather than new threat reporting, and should be treated as lower-signal for executive situational awareness unless your organization is actively deploying agentic AI or tracking AI-enabled fraud/social engineering.
1 month ago
AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure
Enterprises’ rapid deployment of **AI and agentic AI** is increasingly creating measurable security and business risk, including direct exposure of sensitive personal data and downstream impacts on risk transfer. A widely cited example involved McDonald’s *McHire* applicant-screening platform (built by *Paradox.ai*), where researchers reported a trivial backend credential weakness (`123456` as both username and password) and no MFA, potentially exposing data tied to roughly **64 million** applicants; the incident is being used by insurers and risk teams as evidence that AI adoption is moving faster than security and governance, contributing to tighter cyber-insurance language, higher premiums, and **AI-related exclusions**. Separate reporting also highlighted that “plug-and-play” AI is unrealistic at enterprise scale, with organizations increasingly needing custom integration and operational ownership rather than relying on off-the-shelf tools. Threat reporting during the same period reinforced that AI is expanding both attacker capability and the attack surface: researchers described **Pakistan-linked APT36** using AI coding tools to generate high volumes of low-quality malware variants (including in less common languages) and to leverage legitimate cloud services for command-and-control, complicating detection. Additional research flagged **AI-themed browser extensions** (Chrome/Edge) that impersonate legitimate tools and can harvest LLM chat histories and browsing activity, underscoring the risk of “shadow AI” and unvetted add-ons. In parallel, routine threat-intelligence summaries continued to track major incidents (e.g., ransomware and data breaches) alongside AI-enabled tactics, indicating that AI risk is becoming intertwined with broader enterprise security exposure rather than remaining a standalone technology concern.
1 month ago