Mallory

AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure

Tags: ai-platform-security, ai-enabled-threat-activity, default-credential-exposure, extension-plugin-hijack, command-and-control-method
Updated March 21, 2026 at 05:51 AM · 3 sources

Enterprises’ rapid deployment of AI and agentic AI is creating measurable security and business risk, including direct exposure of sensitive personal data and downstream effects on risk transfer. A widely cited example involved McDonald’s McHire applicant-screening platform (built by Paradox.ai), where researchers reported a trivial backend credential weakness (“123456” as both username and password) and no multi-factor authentication, potentially exposing data tied to roughly 64 million applicants. Insurers and risk teams now cite the incident as evidence that AI adoption is moving faster than security and governance, and it has contributed to tighter cyber-insurance language, higher premiums, and AI-related exclusions. Separate reporting also argued that “plug-and-play” AI is unrealistic at enterprise scale: organizations increasingly need custom integration and operational ownership rather than off-the-shelf tools.

Threat reporting during the same period reinforced that AI is expanding both attacker capability and the attack surface: researchers described Pakistan-linked APT36 using AI coding tools to generate high volumes of low-quality malware variants (including in less common languages) and to leverage legitimate cloud services for command-and-control, complicating detection. Additional research flagged AI-themed browser extensions (Chrome/Edge) that impersonate legitimate tools and can harvest LLM chat histories and browsing activity, underscoring the risk of “shadow AI” and unvetted add-ons. In parallel, routine threat-intelligence summaries continued to track major incidents (e.g., ransomware and data breaches) alongside AI-enabled tactics, indicating that AI risk is becoming intertwined with broader enterprise security exposure rather than remaining a standalone technology concern.
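The risky-extension problem described above is partly auditable from the endpoint side: an AI-themed extension can only harvest chat histories if it holds broad permissions such as access to all sites or tab contents. A minimal inventory sketch is below; the permission list, the on-disk layout (`Extensions/<id>/<version>/manifest.json`, which varies by OS and browser profile), and the wildcard heuristic are assumptions for illustration, not a vetted detection rule.

```python
import json
from pathlib import Path

# Permissions that let an extension read page content or traffic; broad
# host access is what makes LLM chat-history harvesting possible.
# This short list is illustrative, not exhaustive.
RISKY = {"tabs", "webRequest", "history", "<all_urls>", "cookies", "clipboardRead"}


def audit_extensions(extensions_dir: str) -> dict[str, set[str]]:
    """Map extension name -> risky permissions it requests.

    extensions_dir is the browser profile's Extensions directory, e.g.
    ~/.config/google-chrome/Default/Extensions on Linux (path assumed;
    it differs on Windows/macOS and for Edge).
    """
    findings: dict[str, set[str]] = {}
    # Chromium lays extensions out as <id>/<version>/manifest.json.
    for manifest in Path(extensions_dir).glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        # Flag known-risky permissions plus all-hosts URL patterns like "https://*/*".
        risky = (perms & RISKY) | {p for p in perms if p.endswith("://*/*")}
        if risky:
            findings[data.get("name", manifest.parent.parent.name)] = risky
    return findings
```

A tool like this only surfaces what is installed; deciding whether a flagged extension is an impersonator still requires checking its publisher and store listing.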

Timeline

  1. Mar 12, 2026

    CSO Online highlighted AI security failures affecting cyber insurance costs

    CSO Online reported that AI adoption is outpacing security and governance, citing the McHire exposure and IBM data showing some organizations have already experienced breaches involving AI models or applications. The article said cyber insurers are responding with tighter policy language, higher premiums, and AI-related exclusions.

  2. Mar 11, 2026

    BankInfoSecurity reported Cognizant findings on enterprise AI integration gaps

    BankInfoSecurity reported that Cognizant’s research found enterprises increasingly favor AI builders and flexible engagement models over off-the-shelf AI products, while facing regulatory, ROI, data, talent, and legacy-system barriers. The report emphasized that meaningful AI value requires substantial integration work rather than “plug-and-play” deployment.

  3. Mar 9, 2026

    Check Point published threat bulletin on multiple breaches and AI-related threats

    Check Point Research released a threat intelligence report summarizing multiple confirmed incidents, including breaches affecting AkzoNobel, LexisNexis, the Wikimedia Foundation, and TriZetto Provider Solutions, along with several AI-related threat campaigns and newly highlighted vulnerabilities. The bulletin also noted patches for CVE-2026-0628, CVE-2026-1492, CVE-2026-22719, and CVE-2026-21385, with active exploitation reported for the Qualcomm flaw.

  4. Nov 1, 2025

    Cognizant and Avasta surveyed enterprise AI decision-makers

    In November 2025, Cognizant and Avasta conducted research comprising a survey of 600 AI decision-makers and interviews with 38 senior executives about enterprise AI adoption, integration challenges, and operating models. The findings later informed reporting that many organizations remain in a “messy middle” of AI deployment.

  5. Jul 1, 2025

    McHire backend exposed applicant data through weak default credentials

    In July 2025, security researchers Ian Carroll and Sam Curry found that McDonald’s McHire recruiting platform backend accepted “123456” as both username and password and lacked multi-factor authentication. The weakness put personal data from roughly 64 million job applicants at risk, and the researchers notified the company.
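Screening for the kind of default-credential weakness reported against McHire can be routine. A minimal sketch follows; the credential list is illustrative, and the actual login attempt is passed in as a callable (e.g., a wrapper around an HTTP POST to your own application's login form) since the real endpoint and its form fields are not known here. Run such checks only against systems you are authorized to test.

```python
from typing import Callable

# A short default-credential list; real screens draw on published wordlists.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("123456", "123456"),  # the pair reported against the McHire backend
    ("test", "test"),
]


def find_default_logins(try_login: Callable[[str, str], bool]) -> list[tuple[str, str]]:
    """Return every default username/password pair that try_login accepts.

    try_login wraps the real authentication attempt (an HTTP POST, an LDAP
    bind, etc.) and returns True on successful login.
    """
    return [(user, pwd) for user, pwd in DEFAULT_CREDS if try_login(user, pwd)]
```

Because the check is just a predicate over a wordlist, the same function can drive an HTTP probe in an external scan or a direct directory-service bind in an internal audit.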


