Skip to main content
Mallory

CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization

ai-platform-securitywidely-deployed-product-advisorydetection-content-updatephishing-campaign-intelligence
Updated March 21, 2026 at 05:51 AM2 sources
Share:
CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization

Get Ahead of Threats Like This

Know if you're exposed. Before adversaries strike.

Researchers at Permiso Security disclosed a cross-prompt injection weakness in Microsoft 365 Copilot email/Teams summarization features, tracked as CVE-2026-26133, that could let attackers embed instruction-like text inside a normal email and influence Copilot’s generated summary. The reported impact is the ability to produce attacker-authored, convincing phishing content inside Copilot’s trusted summarization UI—without attachments, macros, or traditional exploit code—by exploiting a trust-boundary failure where the model treats untrusted email content as instructions. Microsoft confirmed the issue and rolled out mitigations and a patch across affected surfaces, crediting Andi Ahmeti for the discovery.

In parallel, Microsoft published operational guidance on detecting and responding to prompt abuse in AI tools, emphasizing that prompt injection/abuse is a leading LLM application risk (aligned with OWASP guidance) and that detection is difficult without strong logging and telemetry. The guidance describes common prompt-abuse patterns (including indirect prompt injection) and provides a practical playbook for investigation and response. A separate Praetorian post provides general AI security best practices (e.g., input validation, monitoring, and human oversight) but does not add incident-specific details about CVE-2026-26133.

Timeline

  1. Mar 12, 2026

    Microsoft publishes guidance on detecting prompt abuse in AI tools

    Microsoft released a security blog outlining detection and response practices for prompt abuse in AI applications, including indirect prompt injection. The guidance included a scenario where hidden instructions in a URL fragment could be ingested by an AI summarizer and distort outputs without code execution.

  2. Mar 12, 2026

    Permiso Security discloses Copilot phishing and data-exposure technique

    Permiso Security disclosed technical details showing that instruction-like text embedded in normal emails could manipulate Microsoft 365 Copilot summaries into presenting convincing phishing content. The research also found Teams Copilot particularly susceptible and warned that injected prompts could steer Copilot to include internal Microsoft 365 context in attacker-controlled links.

  3. Mar 12, 2026

    Microsoft publishes CVE-2026-26133 and credits researcher

    Microsoft publicly published CVE-2026-26133 and credited researcher Andi Ahmeti for the finding. Public disclosure described the flaw as a cross-prompt injection issue affecting Copilot email and Teams summarization behavior.

  4. Mar 11, 2026

    Microsoft completes patching across affected Copilot surfaces

    Microsoft finished patching the affected Microsoft 365 Copilot surfaces for the summarization vulnerability. This marked the completion of the mitigation rollout for the issue later published as CVE-2026-26133.

  5. Feb 17, 2026

    Microsoft begins rolling out mitigations for CVE-2026-26133

    Microsoft started deploying mitigations for the Copilot summarization flaw across affected surfaces. The fixes targeted prompt-injection abuse that could manipulate summaries and potentially pull internal Microsoft 365 context into attacker-controlled content.

  6. Jan 28, 2026

    Microsoft confirms Copilot summarization vulnerability

    Microsoft confirmed a cross-prompt injection vulnerability affecting Microsoft 365 Copilot email summarization, later tracked as CVE-2026-26133. The issue allowed attacker-crafted email content to influence Copilot-generated summaries and phishing-style output.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment

Who’s affected and how

Technical Details

Deep-dive technical analysis

Response Recommendations

Actionable next steps for your team

Indicators of Compromise

IPs, domains, hashes, and more

AI Threads

Ask questions and take action on every story

Advanced Filters

Filter by topic, classification, timeframe

Scheduled Alerts

Get matching stories delivered automatically

Related Stories

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Security researchers reported two distinct Microsoft Copilot-related risks: (1) **cross prompt injection** against *Microsoft Copilot* email summarization surfaces that can cause attacker-supplied text in an email to be treated like instructions, shaping the summary into a convincing in-product “security alert” and creating a phishing path that does not rely on attachments or macros; and (2) **audit-logging gaps in Microsoft Copilot Studio** where certain administrative actions for Copilot Studio agents (e.g., around sharing, authentication, logging, and publication) were not consistently recorded in Microsoft 365’s Unified Audit Log, potentially reducing defenders’ ability to detect malicious or unauthorized agent changes. Permiso described how Copilot’s behavior varies across Outlook’s inline *Summarize* experience, the Outlook Copilot pane/add-in, and Teams-based summarization, with the core risk being **trust transfer**—users may treat Copilot output as system-generated even when it is attacker-influenced—and warned that retrieval across Microsoft 365 (Teams/OneDrive/SharePoint) could amplify impact if chained. Datadog Security Labs stated it reported Copilot Studio logging issues to **MSRC**, that Microsoft remediated logging for the affected events by **October 5, 2025**, and that Datadog later observed a **regression** where some events again failed to log consistently, which it also reported to Microsoft.

1 months ago
Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails

Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails

Varonis Threat Labs disclosed a prompt-injection attack chain dubbed **Reprompt** that enabled one-click data theft from **Microsoft Copilot** by abusing how Copilot accepts prompts via a URL. The technique relied on the `q` URL parameter to auto-populate and execute attacker-supplied instructions when a victim clicked a crafted Copilot link, requiring no plugins, connectors, or additional user-entered prompts. Researchers reported the method could expose sensitive information previously available in the Copilot session, including **PII**, and could continue exfiltration even after the Copilot chat window was closed. The reported attack flow chained multiple techniques to bypass Copilot’s protections, including **Parameter-to-Prompt (P2P) injection** via the `q` parameter and a **double-request bypass** in which safeguards applied to an initial request but could be defeated by forcing Copilot to repeat the task, leading to disclosure on a subsequent attempt. Varonis also described **chain-request exfiltration** to maintain covert control of the session and progressively extract data. Reporting indicates Microsoft took action in response to the research, though the core risk highlighted is that URL-triggered prompt execution and multi-step request chaining can undermine AI assistant guardrails if not consistently enforced across requests and session states.

1 months ago
Prompt-injection RCE risks in agentic AI tools with OS and browser automation

Prompt-injection RCE risks in agentic AI tools with OS and browser automation

Security researchers and CERT/CC reporting highlighted **critical prompt-injection-to-execution paths** in agentic AI systems where untrusted content can be interpreted as instructions and then executed via connected tools. In *ModelScope MS-Agent*, **CVE-2026-2256** (CVSS 9.8) was reported as a **command injection / RCE** issue tied to the framework’s “Shell tool,” where external input is not properly sanitized before being passed to OS command execution; a `check_safe()` denylist-based filter was described as bypassable via obfuscation/alternate syntax, enabling arbitrary command execution and potential full host compromise. Separate research from **Zenity Labs** described a broader class of **agentic AI browser** weaknesses (including Perplexity’s *Comet*) where attackers can hijack autonomous workflows using indirect prompt injection delivered through normal channels such as a **calendar invite**; prior to patches, this could drive the browser to access local files, read directories/files, and exfiltrate data, and in some cases leverage the agent’s existing authenticated context to interact with sensitive services (including password managers). A similar execution-model risk was reported in *Langflow*’s CSV Agent as **CVE-2026-27966** (CVSS 10.0), where `allow_dangerous_code=True` was hardcoded, enabling LangChain’s `python_repl_ast` tool and allowing remote attackers with chat access to coerce **server-side code execution** and full system compromise via prompt injection.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.