Mallory

Security Risks and Predictions for AI-Driven Systems and Operations

ai-platform-security · autonomous-system-security · botnet-infrastructure · embedded-device-vulnerability · cloud-misconfiguration
Updated March 21, 2026 at 03:12 PM · 5 sources

Security professionals are raising concerns about the risks posed by the rapid integration of AI-driven systems, including humanoid robots, into mainstream society and enterprise environments. Experts warn that without robust security measures, these devices could become targets for botnet-style attacks, as demonstrated by a recent proof-of-concept hack exploiting multiple vulnerabilities in Unitree Robotics' humanoid robots. The potential for wormable attacks via Bluetooth Low Energy interfaces highlights the urgency for the industry to prioritize security in the design and deployment of these systems, with forecasts suggesting a significant new market for robot security solutions in the coming decade.
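The "wormable" concern is, at bottom, a graph-reachability property: once one device is compromised over a flawed BLE interface, every device transitively within radio range of an infected peer becomes reachable. A minimal sketch of that propagation model (the proximity graph and device names are hypothetical, not drawn from the Unitree research):

```python
from collections import deque

def reachable_from(patient_zero, proximity):
    """Breadth-first spread over a BLE proximity graph.

    proximity maps each device ID to the set of device IDs within
    radio range; if an infected device can attack any in-range
    neighbor, total infection equals graph reachability.
    """
    infected = {patient_zero}
    frontier = deque([patient_zero])
    while frontier:
        device = frontier.popleft()
        for neighbor in proximity.get(device, ()):
            if neighbor not in infected:
                infected.add(neighbor)
                frontier.append(neighbor)
    return infected

# Hypothetical fleet: A-B, B-C, and C-D are mutually in range;
# E is out of radio range of everything.
fleet = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"C"},
    "E": set(),
}
print(sorted(reachable_from("A", fleet)))  # → ['A', 'B', 'C', 'D']
```

The model also shows why physical isolation matters as a compensating control: the isolated device "E" is never reached, regardless of the software flaw.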

At the same time, the adoption of artificial intelligence is transforming security operations centers (SOCs) and cloud environments, with CISOs facing challenges in maintaining visibility, governance, and control as AI accelerates network growth and expands the attack surface. Industry reports and predictions for 2026 emphasize the need for responsible AI adoption, unified enterprise AI platforms, and enhanced security operations to manage the risks associated with distributed, automated, and interconnected systems. The convergence of AI innovation and security imperatives is driving organizations to rethink their strategies for both operational efficiency and threat mitigation.

Timeline

  1. Dec 9, 2025

    Security experts warn mainstream humanoid robot adoption raises cyber risk

    Researchers and security professionals warned that the growing deployment of AI-powered humanoid robots could create major new attack surfaces, including the possibility of physical botnets. The concerns were tied to accelerating adoption by robotics firms and automakers, and to projections of widespread future use.

  2. Dec 9, 2025

    Experts demonstrate proof-of-concept hack against Unitree robots

    A proof-of-concept attack showed Unitree humanoid robots could be compromised, highlighting issues such as hardcoded cryptographic keys and weak authentication in emerging robotic platforms. The demonstration provided concrete technical evidence of cyber risk in AI-powered humanoid robots.
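Hardcoded keys of the kind the researchers reported are often discoverable with a simple entropy scan of a firmware image, since random key material stands out against ordinary code and strings. A rough sketch of that triage technique (the window size and threshold are illustrative choices, not values from the research):

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of a buffer; key-like data approaches the
    maximum of log2(len(data)) for short windows."""
    if not data:
        return 0.0
    counts = [0] * 256
    total = len(data)
    for b in data:
        counts[b] += 1
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def find_key_candidates(firmware: bytes, window=64, threshold=5.5):
    """Yield offsets of high-entropy windows (possible embedded keys)."""
    for off in range(0, len(firmware) - window + 1, window):
        if shannon_entropy(firmware[off:off + window]) >= threshold:
            yield off

# Demo: zero padding with a 64-byte key-like blob at offset 64.
blob = b"\x00" * 64 + bytes(range(128, 192)) + b"\x00" * 64
print(list(find_key_candidates(blob)))  # → [64]
```

Real firmware triage would also have to filter out compressed or encrypted sections, which are high-entropy for benign reasons; this sketch only illustrates why embedded key material is easy to spot at all.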



Related Stories

Enterprise Security Challenges and Risks from AI Adoption

The rapid integration of artificial intelligence into enterprise operations is fundamentally altering the cybersecurity landscape. AI is now embedded in core business workflows, infrastructure, and decision-making processes, expanding the attack surface and introducing new exposure points in data, models, applications, and infrastructure. Security leaders are grappling with governance gaps, especially as agentic AI systems move from pilot to production, and are seeking new standards and controls to manage the risks of autonomous agents and application-to-application access.

The need for robust data governance, updated identity and access management, and resilient infrastructure is driving a major IT transformation, with increased spending and a focus on AI-enabled security solutions. Industry experts and CISOs emphasize the importance of adapting security strategies to address the unique challenges posed by AI, including the concentration of sensitive data, the risk of model manipulation, and the complexity of AI-driven environments.

Security vendors and analysts highlight the inadequacy of traditional security practices in the face of AI-driven threats, calling for the elimination of outdated controls and the adoption of new standards such as those proposed by Okta for managing OAuth permissions for AI agents. The evolving role of the CISO, the rise of zero trust as a business necessity, and the persistent importance of the human element in defense are recurring themes. Predictions for 2026 underscore the urgency for enterprises to refresh IT infrastructure, strengthen data governance, and prepare for a future where AI agents operate autonomously across interconnected systems, requiring continuous adaptation of security policies and controls to mitigate emerging risks.

1 month ago
AI-Driven Threats and Security Operations in 2025

The cybersecurity landscape in 2025 saw a significant evolution in both the use and abuse of artificial intelligence. Threat actors increasingly leveraged AI-powered tools, such as uncensored darknet assistants like DIG AI, to automate and scale malicious activities, including cybercrime, extremism, and privacy violations. Security researchers observed a surge in the adoption of "dark LLMs" and jailbroken AI chatbots, which lowered the barrier for cybercriminals and enabled more sophisticated attacks.

At the same time, defenders began integrating generative AI and agentic systems into security operations centers (SOCs), with AI agents handling alert triage and detection tasks, but also introducing new risks related to trust, explainability, and operational complexity. Security leaders and experts highlighted the need for transparency, traceability, and risk-based prioritization in AI-powered SOC platforms, as well as the importance of addressing alert fatigue and ensuring that AI outputs are auditable.

Looking ahead to 2026, the security of AI models and the potential for agentic AI to introduce insider risks are expected to become key challenges. The rapid adoption of AI in both offensive and defensive cyber operations underscores the urgency for organizations to adapt their security strategies, focusing on the unique risks and opportunities presented by AI technologies.

1 month ago
Escalation of AI-Enabled Cyberattacks and Defensive Strategies in Enterprise Security

Security leaders across industries are increasingly concerned about the rapid evolution of AI-enabled cyberattacks, which are now among the top threats facing enterprises. Recent research highlights that cybercriminals are leveraging artificial intelligence to automate and enhance attack chains, including the use of deepfakes, automated phishing, and AI-generated malware. These AI-driven threats are capable of executing full attack sequences autonomously, from reconnaissance to data exfiltration, at speeds and scales previously unattainable by human operators.

Security teams are responding by investing heavily in AI-powered defensive tools, aiming to accelerate detection, triage, and containment of threats. However, experts caution that AI should be used as a 'copilot' rather than an 'autopilot,' emphasizing the necessity of human oversight to ensure effective and responsible use of these technologies. The human element remains a critical vulnerability, as attackers use generative AI to craft highly convincing social engineering campaigns, including synthetic audio and video, which can bypass traditional awareness programs.

The arms race between offensive and defensive AI is intensifying, with both sides seeking to outpace the other in sophistication and automation. Security leaders are also grappling with the challenge of integrating AI into their broader risk management and governance frameworks, ensuring that AI-driven solutions align with organizational policies and regulatory requirements. The expanding role of the CISO now includes oversight of AI risk, reflecting the technology's growing impact on enterprise security posture. As AI becomes more embedded in both attack and defense, organizations are re-evaluating their incident response strategies, workforce training, and investment priorities.
The shift towards AI-driven security operations is not without challenges, including the risk of over-reliance on automation and the need for continuous adaptation to evolving threat tactics. Industry studies indicate that while AI can handle routine security tasks, complex and strategic decision-making still requires skilled human analysts. The ongoing development of AI in cybersecurity is reshaping the landscape, demanding new approaches to both technology deployment and leadership. Security teams are urged to balance innovation with caution, ensuring that AI augments rather than replaces critical human judgment. The future of enterprise security will likely be defined by the effectiveness of this human-AI partnership in countering increasingly sophisticated, AI-powered adversaries.

1 month ago

