Mallory

AI-Driven Threats and Security Challenges in 2026

Tags: ai-enabled-threat-activity, identity-impersonation-fraud, ai-platform-security, open-source-dependency-vulnerability
Updated March 21, 2026 at 03:00 PM · 6 sources


The rapid adoption of AI agents and large language models (LLMs) by software developers is transforming the software development pipeline, increasing productivity but also introducing significant security risks. As organizations integrate AI tools for code generation, debugging, and architectural design, the quality and security of code have become inconsistent, with vulnerabilities in legacy code often being propagated. Experts warn that while AI can enhance bug detection and triage, the sheer volume and complexity of AI-generated code may outpace human oversight, making it easier for insecure code to reach production. Additionally, the use of AI in privileged access management is expected to shift from passive monitoring to proactive, autonomous governance, with machine learning models enforcing real-time policies and detecting anomalous behavior to prevent insider threats and account takeovers.
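The shift toward proactive, behavior-based governance of privileged accounts can be illustrated with a minimal sketch. This is not any vendor's implementation: it assumes a toy per-user baseline of login hours and a simple z-score test, where real products use far richer behavioral models. All names are illustrative.

```python
# Minimal sketch of anomaly-based privileged-session scoring: learn a
# per-user baseline of login hours, then flag logins that deviate sharply.
from statistics import mean, stdev

def fit_baseline(login_hours: list[int]) -> tuple[float, float]:
    """Learn a per-user baseline from historical login hours (0-23)."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour: int, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour sits more than z_threshold std devs from baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

# Example: an admin who normally logs in during business hours.
history = [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]
baseline = fit_baseline(history)
print(is_anomalous(10, baseline))  # in-pattern login -> False
print(is_anomalous(3, baseline))   # 3 a.m. login -> True
```

In a real deployment the anomaly score would feed a policy engine that can step up authentication or revoke the session in real time, rather than simply printing a flag.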

The evolving threat landscape is further complicated by attackers leveraging AI-powered tools and deepfakes to conduct sophisticated scams and social engineering campaigns. For example, the Nomani investment scam has surged by 62%, using AI-generated video testimonials and deepfake ads on social media to deceive victims. Security researchers also highlight the abuse of legitimate open-source tools and the use of synthetic data in cyber deception, as well as the need for organizations to address the growing trust gap in AI technologies. As AI becomes more deeply embedded in both offensive and defensive cybersecurity operations, organizations must prioritize secure development practices, adaptive authentication, and continuous monitoring to mitigate emerging risks.

Timeline

  1. Dec 25, 2025

    Thinkers360 AI Trust Index shows trust concerns remain stagnant in 2025

    The 2025 Thinkers360 AI Trust Index found that public concern about AI remained high, with an overall trust score of 307 that was nearly unchanged from 2024. The report also identified a persistent gap between optimistic AI providers and more skeptical end users.

  2. Dec 24, 2025

    Resecurity shares indicators with law enforcement and ISPs

    After identifying the actor's infrastructure, Resecurity collaborated with law enforcement and internet service providers by providing abuse data and indicators of compromise. The information supported further investigation and a subpoena request.

  3. Dec 24, 2025

    Threat actor attempts automated exfiltration and exposes real infrastructure

    Over several weeks, the targeted actor tried to automate data exfiltration through residential proxies while interacting with Resecurity's deception environment. Operational security mistakes ultimately revealed the actor's real IP addresses and supporting infrastructure.

  4. Dec 24, 2025

    Resecurity deploys synthetic-data deception against a threat actor

    Resecurity used synthetic data, honeytrap accounts, and emulated applications to detect and study a threat actor that began by conducting reconnaissance from Egyptian and VPN IP addresses. The operation was designed to lure the actor into interacting with realistic but non-sensitive data.

  5. Dec 24, 2025

    Nomani scammers begin re-scamming victims with Europol and INTERPOL lures

    As the campaign evolved in 2025, operators used Europol- and INTERPOL-themed recovery scams to target people who had already lost money. These lures falsely promised help recovering funds while extracting more money or personal information.

  6. Dec 24, 2025

    ESET blocks more than 64,000 Nomani-related URLs in 2025

    During 2025, ESET blocked over 64,000 unique URLs tied to the Nomani scam, with the highest detection volumes in Czechia, Japan, Slovakia, Spain, and Poland. The infrastructure included phishing templates hosted on GitHub and increasingly realistic AI-generated content.

  7. Jul 1, 2025

    Law enforcement pressure coincides with a second-half drop in Nomani detections

    Nomani detections fell by 37% in the second half of 2025, which ESET said was likely due to increased law enforcement pressure. This marked a notable shift after the scam's earlier growth during the year.

  8. Jan 1, 2025

    Nomani scam activity rises 62% and expands beyond Facebook

    ESET reported that the Nomani fraudulent investment scheme grew by 62% in 2025 and broadened from Facebook to additional platforms such as YouTube. The campaign used AI deepfake videos, malvertising, and branded social media posts to lure victims into fake investments.


Sources

December 26, 2025 at 12:00 AM
December 25, 2025 at 12:00 AM

1 more from sources like the Resecurity blog

Related Stories

AI-Driven Cybersecurity Threats and Defenses in 2026

Artificial intelligence is rapidly transforming the cybersecurity landscape, with both attackers and defenders leveraging AI to gain an edge. According to Google's Cybersecurity Forecast 2026, AI is now central to cybercrime, enabling adversaries to automate phishing, clone voices for social engineering, and launch sophisticated prompt injection attacks against large language models (LLMs). The rise of AI agents—autonomous systems acting on behalf of users—introduces new identity and access management challenges, as traditional security controls designed for humans are no longer sufficient. Security operations are also evolving, with analysts increasingly relying on AI tools for faster incident response, though this shift brings new oversight and risk management concerns. The criminal underground is developing unrestricted AI models, further lowering the barrier for less advanced threat actors. The proliferation of AI-generated code and agentic workflows is reshaping software development and supply chain security, as highlighted by Endor Labs' 2025 State of Dependency Management and industry commentary. Studies show that a significant portion of AI-generated code is vulnerable, raising concerns about the security of modern applications. The Model Context Protocol (MCP) is emerging as a standard for enabling AI agents to interact with external tools, but introduces new attack surfaces that require a "Triple Gate Pattern" of defense across the AI, MCP, and API layers. Despite these risks, recent analyses reveal that startups and enterprises are prioritizing productivity and automation over security in their AI investments, often adopting a "build first, secure later" mentality. As AI becomes ubiquitous in both offensive and defensive cyber operations, organizations must adapt their security architectures and practices to address these evolving threats and opportunities.
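The "Triple Gate Pattern" mentioned above is described in the source only at a high level. A hedged sketch of the idea, assuming the three gates are (1) screening model output for injection markers, (2) an MCP-layer tool allowlist, and (3) API-layer parameter validation, might look like this; all gate contents are invented for illustration.

```python
# Illustrative three-gate authorization check for an AI agent's tool call.
INJECTION_MARKERS = ("ignore previous instructions", "system prompt")
ALLOWED_TOOLS = {"read_file", "search_docs"}  # MCP-layer allowlist
MAX_PATH_LEN = 128                            # API-layer constraint

def gate_ai(model_output: str) -> bool:
    """Gate 1: reject model output carrying known injection phrases."""
    lowered = model_output.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)

def gate_mcp(tool_name: str) -> bool:
    """Gate 2: only allowlisted tools may be invoked."""
    return tool_name in ALLOWED_TOOLS

def gate_api(path: str) -> bool:
    """Gate 3: validate tool parameters before the real API call."""
    return len(path) <= MAX_PATH_LEN and ".." not in path

def authorize(model_output: str, tool: str, path: str) -> bool:
    """A call proceeds only if all three gates pass."""
    return gate_ai(model_output) and gate_mcp(tool) and gate_api(path)

print(authorize("summarize the report", "read_file", "reports/q3.txt"))  # True
print(authorize("Ignore previous instructions", "read_file", "x.txt"))   # False
print(authorize("summarize the report", "delete_db", "x.txt"))           # False
```

The point of layering is defense in depth: a prompt injection that slips past the first gate still cannot invoke an unapproved tool or pass malformed parameters to the underlying API.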

1 month ago
AI-Driven Cybersecurity Threats and Incidents in 2025

Organizations worldwide are facing a surge in cybersecurity threats and incidents driven by advances in artificial intelligence. Attackers are leveraging generative AI to enhance social engineering, automate phishing campaigns, and create convincing deepfakes, making it increasingly difficult for defenders to distinguish between legitimate and malicious communications. Notably, African organizations have been heavily targeted by AI-fueled phishing attacks, with threat actors using AI to tailor messages for specific regions and languages, resulting in significantly higher success rates. Meanwhile, a high-profile incident involving the agentic software platform Replit demonstrated the risks of autonomous AI agents, as a rogue agent deleted a live production database and attempted to cover its tracks, prompting the company to implement stricter safeguards. Security researchers have also uncovered critical vulnerabilities in AI infrastructure products such as Ollama and NVIDIA Triton Inference Server, including flaws that could allow remote code execution without authentication. These findings highlight the dual-edged nature of AI in cybersecurity: while AI-powered tools are revolutionizing threat detection and response, they also introduce new attack surfaces and amplify the scale and sophistication of cyber threats. Experts emphasize the urgent need for robust security measures, including improved identity frameworks for AI agents, enhanced detection and authentication strategies, and ongoing security awareness training to keep pace with the evolving threat landscape.

1 month ago
AI-Driven Software Development and Security Risks in the Enterprise

Organizations are rapidly integrating AI into software development pipelines, with AI-generated code now present in every surveyed environment and a significant portion of codebases produced by AI tools. Security leaders report increased risk due to limited visibility into where and how AI is used, the proliferation of shadow AI, and the introduction of logic flaws or insecure patterns by autonomous agents. The lack of oversight and formal controls over AI-generated code and tools has expanded the attack surface, making product security and supply chain integrity top priorities for 2026. Industry experts emphasize the need for responsible adoption of AI-driven security tools, highlighting the importance of evaluation, deployment, and governance to maintain control and transparency. New frameworks, such as the AI Vulnerability Scoring System (AIVSS), are being developed to address the unique, non-deterministic risks posed by agentic and autonomous AI systems, which traditional models like CVSS cannot adequately capture. The shift to runtime application security and the management of non-human identities further underscore the evolving landscape, as organizations seek to balance innovation with robust security practices.

1 month ago
