Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps
Security researchers reported two distinct Microsoft Copilot-related risks: (1) cross prompt injection against Microsoft Copilot email summarization surfaces that can cause attacker-supplied text in an email to be treated like instructions, shaping the summary into a convincing in-product “security alert” and creating a phishing path that does not rely on attachments or macros; and (2) audit-logging gaps in Microsoft Copilot Studio where certain administrative actions for Copilot Studio agents (e.g., around sharing, authentication, logging, and publication) were not consistently recorded in Microsoft 365’s Unified Audit Log, potentially reducing defenders’ ability to detect malicious or unauthorized agent changes.
Permiso described how Copilot’s behavior varies across Outlook’s inline Summarize experience, the Outlook Copilot pane/add-in, and Teams-based summarization, with the core risk being trust transfer—users may treat Copilot output as system-generated even when it is attacker-influenced—and warned that retrieval across Microsoft 365 (Teams/OneDrive/SharePoint) could amplify impact if chained. Datadog Security Labs stated it reported Copilot Studio logging issues to MSRC, that Microsoft remediated logging for the affected events by October 5, 2025, and that Datadog later observed a regression where some events again failed to log consistently, which it also reported to Microsoft.
Timeline
Mar 12, 2026
CVE-2026-26133 is published for Copilot prompt injection risk
CVE-2026-26133 was published and credited to Andi Ahmeti of Permiso Security, documenting the prompt injection issue in Microsoft Copilot email summarization after coordinated disclosure to Microsoft.
Mar 11, 2026
Microsoft completes mitigations for Copilot email prompt injection issue
Microsoft completed rollout of mitigations for a cross prompt injection attack affecting Copilot email summarization, which could make Copilot display attacker-shaped phishing content inside trusted summary interfaces.
Mar 10, 2026
Datadog observes regression in Copilot Studio event logging
After the initial fix, Datadog later found that two events, BotAuthUpdate and BotAppInsightsUpdate, were again not logging consistently, while Microsoft's engineering team said it could not reproduce the anomalous behavior.
Oct 1, 2025
Microsoft remediates Copilot Studio logging issue
By early October 2025, Microsoft had implemented a remediation for the Copilot Studio logging problem, and Datadog initially confirmed that all four affected administrative events were being logged.
Sep 2, 2025
Datadog reports Copilot Studio audit logging gaps to Microsoft
Datadog Security Labs reported to the Microsoft Security Response Center that Microsoft Copilot Studio was failing to generate certain documented administrative audit events in Microsoft 365's Unified Audit Log, creating visibility gaps for sensitive agent changes.
See the full picture in Mallory
Mallory subscribers get deeper analysis on every story, including:
Who’s affected and how
Deep-dive technical analysis
Actionable next steps for your team
IPs, domains, hashes, and more
Ask questions and take action on every story
Filter by topic, classification, timeframe
Get matching stories delivered automatically
Related Entities
Affected Products
Sources
Related Stories

Microsoft Copilot Security and Deployment Controversies
New reporting highlighted a **prompt-injection phishing risk** in Microsoft Copilot after Permiso researchers found that attacker-controlled text embedded in emails could manipulate Copilot-generated summaries through cross-prompt injection attacks. The issue could cause Copilot to present deceptive security alerts or malicious instructions inside a trusted Microsoft 365 interface, increasing the likelihood that users will believe and act on attacker content. Separate coverage also noted broader security and privacy concerns around Microsoft’s AI ecosystem, including criticism of Windows Recall for capturing and storing snapshots of user activity that Copilot can analyze, even after Microsoft added stronger protections following earlier backlash. Microsoft also faced continued scrutiny over how aggressively it is pushing Copilot into user environments. The company temporarily halted plans to **automatically install the Microsoft 365 Copilot app** on eligible Windows systems outside the EEA, though existing installations remain in place and administrators can still deploy it manually. Public criticism of Copilot’s quality and Microsoft’s AI strategy also spilled into the company’s Discord community, where moderation actions against users mocking “Microslop” drew further attention to dissatisfaction with rushed AI integration, privacy concerns, and the perception that Microsoft is forcing AI features into products despite unresolved trust and security issues.
1 months ago
Novel Attacks Exploit Microsoft Copilot and Copilot Studio for Data Theft and OAuth Token Compromise
Security researchers have identified two distinct attack techniques targeting Microsoft's AI-powered platforms. The first, dubbed **CoPhish**, leverages Microsoft Copilot Studio agents to deliver fraudulent OAuth consent requests through legitimate Microsoft domains, enabling attackers to steal OAuth tokens. By customizing Copilot Studio chatbots and exploiting the platform's "demo website" feature, attackers can trick users into authenticating with malicious applications, potentially granting unauthorized access to sensitive resources. Microsoft has acknowledged the issue and is working on product updates to mitigate the risk, emphasizing the need for organizations to strengthen governance and consent processes. Separately, a vulnerability in Microsoft 365 Copilot was discovered that allowed attackers to use indirect prompt injection via Mermaid diagrams to exfiltrate sensitive tenant data, such as emails. By embedding malicious instructions in seemingly benign prompts, attackers could manipulate Copilot to retrieve and encode confidential information. Although Microsoft has since patched this flaw, the incident highlights the emerging risks associated with integrating AI assistants and third-party tools, as well as the challenges in securing complex, automated workflows within enterprise environments.
1 months ago
CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization
Researchers at **Permiso Security** disclosed a **cross-prompt injection** weakness in **Microsoft 365 Copilot** email/Teams summarization features, tracked as **CVE-2026-26133**, that could let attackers embed instruction-like text inside a normal email and influence Copilot’s generated summary. The reported impact is the ability to produce **attacker-authored, convincing phishing content** inside Copilot’s *trusted* summarization UI—without attachments, macros, or traditional exploit code—by exploiting a trust-boundary failure where the model treats untrusted email content as instructions. Microsoft confirmed the issue and rolled out mitigations and a patch across affected surfaces, crediting **Andi Ahmeti** for the discovery. In parallel, Microsoft published operational guidance on **detecting and responding to prompt abuse** in AI tools, emphasizing that prompt injection/abuse is a leading LLM application risk (aligned with **OWASP** guidance) and that detection is difficult without strong **logging and telemetry**. The guidance describes common prompt-abuse patterns (including indirect prompt injection) and provides a practical playbook for investigation and response. A separate Praetorian post provides general AI security best practices (e.g., input validation, monitoring, and human oversight) but does not add incident-specific details about CVE-2026-26133.
1 months ago