Enterprise AI and Security Coverage Roundup (OpenAI–Pentagon Deal, AI Security Tools, and Governance Commentary)
Citizen Lab highlighted concerns about OpenAI’s Pentagon contract, noting expert skepticism that bulk user-data collection can be effectively ruled out and warning that feeding commercially available personal data into opaque AI systems can amplify harm through errors, bias, and weak accountability. Separately, CSO Online reported on OpenAI’s security-related initiatives, including a claim that Codex Security identified 11,000 “high-impact” bugs in a month and a report that OpenAI plans to acquire Promptfoo to strengthen AI agent security testing.
Most other items in the set are opinion/feature or promotional content rather than incident-driven threat intelligence: CIO and CSO Online ran general enterprise AI and security management pieces (e.g., “shadow AI” governance, identity decisioning, OT/IoT/zero trust challenges, cloud security culture/process issues, and pen-test automation lessons learned), while Red Canary published an RSAC 2026 session guide. One CSO Online headline referenced a critical HPE Aruba CX switch flaw allowing admin control without credentials, but the provided text lacks enough detail to tie it to the OpenAI items, and the headline appears as a sidebar link rather than the primary subject of the referenced pages.
Timeline
Mar 12, 2026
Citizen Lab comments on OpenAI Pentagon contract concerns
In March 2026, Citizen Lab senior researcher Wolfie Christl said in comments to Forbes that an OpenAI deal with the Pentagon permits the gathering of bulk user data. He warned that feeding purchased personal data into opaque AI systems could amplify harms through errors, bias, and weak accountability, despite OpenAI CEO Sam Altman reportedly saying the deal would not enable mass surveillance.
Jan 16, 2023
Citizen Lab report on Iranian mobile network plans published
A Citizen Lab report analyzing the shared documents was dated January 16, 2023. The analysis said some communications involved representatives of Iran’s Communications Regulatory Authority.
Oct 1, 2022
Citizen Lab receives Iranian mobile network documents for analysis
The Intercept shared internal documents with Citizen Lab researchers in October 2022 for analysis. The materials described apparent plans to develop and launch an Iranian mobile network, including subscriber management and integration with a lawful intercept solution.
Related Stories

AI-driven security discourse highlights bug-finding gains, identity risks, and largely generic guidance
Coverage this week emphasized how **AI is accelerating both offense and defense**, but most guidance remained high-level rather than tied to a single incident. The FBI warned that criminals and nation-states are using AI to increase the *speed* of intrusions while still following familiar kill-chain steps, urging organizations to double down on fundamentals such as MFA, hardening internet-facing/edge assets, and credential abuse detection; CISA leadership echoed the focus on removing unsupported edge devices. Separate reporting and commentary highlighted AI’s growing impact on software assurance: Microsoft Azure CTO Mark Russinovich described using Anthropic’s *Claude Opus 4.6* to analyze decades-old assembly code and surface subtle logic flaws, while open-source maintainers reported being inundated with low-quality, AI-generated vulnerability reports even as AI-assisted analysis can also increase discovery of high-severity bugs (e.g., Mozilla’s red-teaming claims). Several items were **notable but not part of a unified event**: CSO Online reported the **CVE program’s funding was secured**, reducing near-term continuity risk for vulnerability enumeration, and separately covered **post-quantum cryptography (PQC)** planning uncertainty as vendors compete for early advantage. Other pieces were primarily opinion, best-practice, or event content—e.g., “shadow AI” governance steps, SOC preparation for agentic AI, OT/IoT security commentary, cloud-security leadership takes, and a conference session roundup—providing general risk framing rather than actionable incident-specific intelligence. One concrete threat report described a **software supply-chain lure** in which developers searching for *OpenClaw* were redirected to a **GhostClaw RAT**, reinforcing ongoing risk from trojanized tooling and search-driven malware delivery, but it was not connected to the broader AI/governance narratives in the rest of the set.
1 month ago
AI Adoption and Governance Updates Across Industry and Government
Recent coverage focused on **AI adoption, governance, and societal impacts** rather than a discrete cybersecurity incident. OpenAI CEO **Sam Altman** argued that comparing AI energy use to human cognition is “unfair,” claiming the energy cost of “training a human” (years of living and food consumption plus evolutionary history) should be considered when judging AI efficiency, and separately warned that some companies are engaging in **“AI washing”**—attributing layoffs to AI as a pretext for workforce reductions—while also acknowledging real job displacement is likely to become more noticeable in the next few years. Enterprises and public-sector organizations highlighted practical AI rollouts and associated risk considerations. **Intel** introduced *Ask Intel*, a support assistant built on **Microsoft Copilot Studio**, alongside a shift away from public phone support toward web-based case handling, while noting response accuracy “cannot be guaranteed.” **Microsoft** removed a blog post that had described training LLMs using a Kaggle dataset derived from **pirated Harry Potter ebooks**, amid ongoing legal uncertainty around fair use and potential contributory infringement exposure. Separately, U.S. federal officials emphasized **targeted AI adoption** and expectation management (with the VA reporting hundreds of AI use cases), while other items included a hobbyist AI dashboard project shared on GitHub and a generic startup article on AI-accelerated MVP development—neither of which provided substantive security-relevant disclosures.
2 weeks ago
AI and cybersecurity: policy pressure, threat evolution, and market hype
Several items are **not a single coherent incident** but reflect a broader theme: the expanding role of **AI in national security and cybersecurity**. One report describes the US Department of Defense pressuring **Anthropic** to allow unrestricted military use of its *Claude* models, with reported threats to invoke the **Defense Production Act** or label the company a supply-chain risk if it does not remove safeguards; the same piece notes DoD interest in other models (including a reported deal involving *xAI Grok*) and frames the dispute around who sets rules for military AI use and what safety constraints should exist. Other references are largely **non-incident** content: leadership/board governance opinion pieces and a podcast segment arguing security should be treated as a business enabler, plus a venture-capital market write-up claiming 2025 cybersecurity investment surged as startups positioned themselves as **AI-native**. Only one additional item is clearly threat-focused: a CSO Online report on **Steaelite RAT**, described as combining **data theft** with **ransomware management** capabilities in a single tool. A separate Hackread article is generic “data breaches in 2026” advice/trend commentary without a specific breach, victim, or actionable technical detail.
1 month ago