Mallory

Emerging AI-Driven Cybersecurity Threats and Exploits

ai-enabled-threat-activity · ai-platform-security · package-repository-poisoning · initial-access-method · industrial-control-system-vulnerability
Updated March 21, 2026 at 03:09 PM · 7 sources


Recent research and threat intelligence highlight the growing risks posed by advanced AI models in the cybersecurity landscape. Studies demonstrate that state-of-the-art AI agents, such as Claude Opus 4.5 and GPT-5, are now capable of autonomously exploiting smart contracts, uncovering zero-day vulnerabilities, and causing real-world economic harm. OpenAI has publicly acknowledged the dual-use nature of its models, warning that future iterations may reach 'high' cybersecurity risk levels, with the potential to develop working zero-day exploits and assist in complex intrusion operations. These developments underscore the urgent need for proactive defensive measures and for defenders to adopt AI as readily as attackers do.

In parallel, threat actors are leveraging AI to orchestrate sophisticated supply chain attacks, as seen in the PyStoreRAT campaign, which used AI-generated GitHub projects to target IT and OSINT professionals with stealthy malware. Security experts and industry leaders are raising concerns about the expanding attack surface, including the exploitation of antiquated systems and shadow APIs by agentic AI, and the challenges of integrating AI into operational technology environments. The convergence of AI capabilities with cyber offense and defense is rapidly reshaping the threat landscape, demanding new strategies for risk management, governance, and technical controls.
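The PyStoreRAT campaign's use of long-dormant GitHub accounts suggests a simple triage heuristic defenders can automate: flag repositories whose owning account is old but only recently became active. The sketch below is illustrative only; the thresholds and the `looks_dormant_then_active` helper are assumptions for demonstration, not indicators from the Morphisec report.

```python
from datetime import datetime, timezone

# Illustrative thresholds -- not taken from the Morphisec report.
MIN_ACCOUNT_AGE_DAYS = 365   # account existed at least this long...
MAX_PUSH_RECENCY_DAYS = 30   # ...but only pushed code very recently

def parse_iso(ts: str) -> datetime:
    """Parse GitHub-style ISO-8601 timestamps like '2020-01-01T00:00:00Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def looks_dormant_then_active(created_at: str, last_push: str, now=None) -> bool:
    """Heuristic: an aged account whose most recent push is very fresh,
    a pattern consistent with dormant accounts being reactivated to host
    lure projects. Noisy on its own; a triage signal, not a verdict."""
    now = now or datetime.now(timezone.utc)
    account_age_days = (now - parse_iso(created_at)).days
    push_recency_days = (now - parse_iso(last_push)).days
    return (account_age_days >= MIN_ACCOUNT_AGE_DAYS
            and push_recency_days <= MAX_PUSH_RECENCY_DAYS)
```

In practice the timestamps would come from GitHub's REST API (the `created_at` field of `GET /users/{login}` and the `pushed_at` field of `GET /repos/{owner}/{repo}`), and the flag would feed a review queue rather than a block decision.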

Timeline

  1. Dec 11, 2025

    CISA, NSA, and ACSC issue joint advisory on AI in OT

    A joint advisory from CISA, NSA, and the Australian Cyber Security Centre warned that integrating AI into operational technology environments creates major security, governance, and data privacy challenges, and outlined four principles for safer adoption.

  2. Dec 11, 2025

    OpenAI introduces Aardvark vulnerability research agent

    OpenAI announced Aardvark, an AI security researcher agent intended to help defenders identify and patch vulnerabilities in codebases as part of its defensive cybersecurity investments.

  3. Dec 11, 2025

    OpenAI launches cyber safeguards and governance measures

    To limit malicious use of increasingly capable models, OpenAI said it is implementing access controls, monitoring, red teaming, threat intelligence and insider-risk efforts, a trusted access program, and a new Frontier Risk Council.

  4. Dec 11, 2025

    OpenAI prepares for models reaching 'high' cyber risk

    OpenAI said it is evaluating upcoming models as if they may be capable of developing zero-day remote exploits or assisting stealthy intrusions, and is preparing safeguards under its Preparedness Framework.

  5. Dec 11, 2025

    Morphisec identifies PyStoreRAT supply-chain malware campaign

    Morphisec Threat Labs reported a campaign using dormant GitHub accounts and AI-generated project content to target IT administrators, cybersecurity analysts, and OSINT professionals with a new malware family called PyStoreRAT.

  6. Dec 11, 2025

    AI agents discover two zero-day smart contract vulnerabilities

In simulated testing against 2,849 recently deployed contracts with no known vulnerabilities, AI agents identified two novel zero-days and generated exploits valued at $3,694, demonstrating that autonomous offensive capability is already feasible.

  7. Dec 11, 2025

    AI agents exploit known smart contract flaws in benchmark testing

    Research showed models including Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 could autonomously develop exploits against vulnerable smart contracts in SCONE-bench, with combined exploit value reported at $4.6 million.

  8. Dec 11, 2025

    Researchers compile SCONE-bench from 2020-2025 smart contract exploits

    A new benchmark, SCONE-bench, was created using 405 smart contracts exploited between 2020 and 2025 to evaluate whether advanced AI agents can autonomously find and exploit blockchain vulnerabilities.
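A benchmark of this shape reduces to a scoring loop: run each agent against each contract, record whether the exploit succeeded and how much value it extracted, then aggregate per model. The sketch below is a hedged illustration of how such tallies might be computed; the `ExploitAttempt` fields and `summarize` function are assumptions for demonstration, not the SCONE-bench researchers' actual harness.

```python
from dataclasses import dataclass

@dataclass
class ExploitAttempt:
    """One agent run against one benchmark contract."""
    model: str
    contract: str
    succeeded: bool
    value_extracted_usd: float  # 0.0 when the attempt failed

def summarize(attempts: list[ExploitAttempt]) -> dict[str, tuple[int, float]]:
    """Per-model tally: (count of successful exploits, total value extracted).
    Figures like the reported $4.6M combined exploit value would be the sum
    of the per-model totals across all successful runs."""
    summary: dict[str, tuple[int, float]] = {}
    for a in attempts:
        wins, total = summary.get(a.model, (0, 0.0))
        if a.succeeded:
            wins += 1
            total += a.value_extracted_usd
        summary[a.model] = (wins, total)
    return summary
```

Keeping failed attempts in the record (rather than only successes) matters for reporting success rates alongside raw dollar values.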


Sources

Schneier on Security
AIs Exploiting Smart Contracts
December 11, 2025 at 12:06 PM

2 more from sources like Dark Reading and the SecuritySenses blog

Related Stories

AI-Driven Cybersecurity Threats and Risk Management in Modern Enterprises

Enterprises are facing a rapidly evolving threat landscape as artificial intelligence (AI) technologies become deeply integrated into business operations and cybercriminal toolkits. Security leaders emphasize that effective threat modeling for AI systems requires segmenting the stack by function, data sensitivity, and business impact, rather than treating all AI as a monolithic risk. The rise of agentic AI—autonomous systems capable of executing complex tasks—has introduced unprecedented risks, with many such solutions deployed without IT or security oversight. The OWASP Top 10 for Agentic AI provides a practical framework for CISOs to identify, communicate, and mitigate these new risks, highlighting the urgent need for tailored security strategies and stakeholder education.

Recent incidents underscore the real-world impact of AI-enabled attacks. Notably, Chinese hackers successfully jailbroke Anthropic's Claude AI model, leveraging it to automate and accelerate a global cyberespionage campaign targeting over 30 organizations. This event demonstrates that AI can be weaponized to execute sophisticated attacks at scale, outpacing current defensive and regulatory measures.

Security experts and policymakers are calling for accelerated safety testing of AI models, stricter export controls on high-performance chips, and the adoption of AI-driven defensive tools to counter these emerging threats. The convergence of advanced AI capabilities and cybercrime highlights the critical need for proactive, context-aware security practices in the age of intelligent automation.

1 month ago
AI-Driven Security Risks, Bypasses, and Exploits in Modern Cybersecurity

Security researchers and industry experts are raising alarms about the growing use of artificial intelligence (AI) in both offensive and defensive cybersecurity operations. Attackers are leveraging AI to bypass advanced security controls, as demonstrated by a researcher who used AI to defeat an "AI-powered" web application firewall, and by the emergence of new malware that exploits AI model files and browser vulnerabilities to evade detection and exfiltrate credentials.

Meanwhile, defenders are grappling with the proliferation of unsanctioned AI tools in the workplace, the challenge of auditing AI decision-making, and the surge in AI-powered bug hunting, which has led to a dramatic increase in vulnerability discoveries and bug bounty payouts. The risks are compounded by the lack of clear AI usage policies, the potential for data leaks through generative AI tools, and the difficulty in monitoring or controlling how sensitive information is processed and stored by these systems. Industry reports highlight that a significant portion of employees use unauthorized AI applications, often exposing sensitive data without IT oversight, and that prompt injection and model manipulation are now common vulnerability types.

The security community is also debating the extent to which ransomware and other attacks are truly "AI-driven," with some reports criticized for overstating the role of AI in current threat activity. As organizations rush to adopt AI for efficiency and innovation, experts urge the implementation of robust governance, continuous monitoring, and red-teaming to anticipate and mitigate the evolving risks posed by both sanctioned and shadow AI systems. The rapid evolution of AI in cybersecurity is forcing a reevaluation of traditional defense models, emphasizing the need for transparency, operational oversight, and adaptive security strategies.

1 month ago
Emerging Risks and Opportunities of AI in Cybersecurity and Cybercrime

Artificial intelligence is rapidly transforming both the offensive and defensive sides of cybersecurity. Security researchers and industry experts warn that while AI, especially agentic AI, is not yet widely used by cybercriminals, its adoption is expected to accelerate as state-sponsored groups pioneer its use and demonstrate its effectiveness. Agentic AI, which enables autonomous action without human intervention, could automate complex attack chains and make cybercrime more efficient, raising concerns about a new wave of AI-aided ransomware and other threats.

At the same time, defenders are increasingly leveraging AI to monitor vast amounts of data, detect anomalies, and respond to threats at unprecedented speed and scale. However, the dual-use nature of AI means attackers are also using it to craft convincing phishing emails, create deepfakes, and evade detection. Challenges such as data poisoning, false positives, and the risk of over-reliance on AI systems highlight the need for careful oversight and innovation from human analysts. The cybersecurity workforce, especially new entrants, must adapt to a landscape where AI augments both attack and defense, emphasizing creativity and critical thinking over routine tasks.

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.