Microsoft Copilot Email and Teams Summarization Vulnerability Enables Phishing Attacks


AI assistants have rapidly transformed daily operations, streamlining tasks for teams managing overloaded inboxes, client communications, and incident response.

Tools like Microsoft Copilot integrate directly into daily workflows, summarizing emails and meetings while pulling context from across the Microsoft 365 ecosystem. However, this convenience introduces a novel security boundary that many organizations have not yet prepared to defend.

Researchers at Permiso Security have disclosed a critical cross-prompt injection vulnerability (XPIA) in Microsoft 365 Copilot’s email summarization surfaces, now tracked as CVE-2026-26133.

The vulnerability allows an attacker to hijack Copilot’s output by embedding attacker-controlled text in an ordinary email, producing convincing phishing content within the assistant’s trusted summary interface without relying on attachments, macros, or traditional exploit code.

Microsoft confirmed the issue on January 28, 2026, began rolling out mitigations on February 17, completed the patch across all affected surfaces on March 11, and published the CVE on March 12, 2026, crediting Andi Ahmeti of Permiso Security for the discovery.​

Microsoft Copilot Email Summarization Vulnerability

The attack abuses a well-documented AI security class called Cross-Prompt Injection Attack (XPIA), a condition where an LLM processing untrusted content treats embedded text as executable instructions. In this case, the untrusted content is an email that a user asks Copilot to summarize.

google

The problem is one of trust boundary design: Copilot’s summarization pipeline ingests the full raw content of an email, including any appended instruction-like text, and acts on it. An attacker who crafts the right “instruction block” within an email body can steer the assistant’s output to include attacker-authored sections, formatted to resemble legitimate Microsoft security alerts.

Critically, this does not require exploiting a code execution flaw. The attacker only needs Copilot to speak with its own voice, borrowing the assistant’s UI credibility to launder phishing content as a system-generated notification.

Permiso’s testing evaluated three common Copilot email summarization entry points:

  • Outlook Summarize Button: In the cleanest tests, this inline summary feature detected suspicious content and refused to comply. However, when the malicious email was padded with longer, more natural text, the behavior became unpredictable, occasionally leaking partial artifacts of the injected commands into the summary.
  • Outlook Copilot Pane: The add-in chat experience in Outlook proved more cautious by default. It typically ignored the injected blocks or refused to follow them, though it still occasionally complied depending on the specific email client used.
  • Teams Copilot: When summarizing email content through Microsoft Teams, the exploit was highly cooperative. The flow consistently produced a normal-looking summary followed directly by the attacker-shaped additions.

The critical insight is that users do not think in terms of “different safety postures per interface.” To the end user, Copilot is Copilot, and they will gravitate toward whichever surface provides an answer.

What elevates this beyond a quirky model behavior is a phenomenon Permiso describes as trust transfer. Users have been conditioned, through years of security awareness training, to be skeptical of suspicious text in email bodies. That same skepticism does not extend to AI-generated summary panels.

The attack becomes significantly more dangerous when Copilot’s retrieval scope is taken into account. Microsoft 365 Copilot can access Teams conversations, OneDrive files, SharePoint documents, and meeting notes, depending on licensing and permission configuration.

Permiso confirmed in testing that injected prompts could steer Copilot to pull internal collaboration context, such as recent Teams messages, and embed that context into an attacker-supplied link presented inside the summary.

This creates a one-click exfiltration pathway: the user clicks what appears to be a “Verify your Identity” button, and any internal context incorporated into that link is transmitted to attacker-controlled infrastructure without the user knowingly copying or sharing a single file.

This attack pattern closely parallels CVE-2025-32711 (EchoLeak), discovered by Aim Security, in which hidden prompts inside emails caused Microsoft 365 Copilot to exfiltrate sensitive data via crafted image URLs, demonstrating that XPIA against AI summarization tools is a repeatable, cross-platform vulnerability class, not an isolated incident.

Organizations using Microsoft 365 Copilot should take the following actions:

  • Apply the March 2026 patch immediately — Microsoft confirmed full rollout to all affected surfaces on March 11, 2026.
  • Audit Copilot permissions — Restrict Copilot’s retrieval scope to only what is operationally necessary; limit cross-app access to Teams, OneDrive, and SharePoint where possible.
  • Enable Microsoft Purview sensitivity labels and DLP policies — These controls reduce the blast radius if retrieval-based exfiltration is attempted.
  • Enable Safe Links — Ensure outbound link rendering within Copilot surfaces is subject to URL reputation checks.
  • User awareness — Train staff that AI-generated summary panels can contain attacker-influenced content and are not inherently “system-generated” notifications.
  • Monitor Copilot activity logs — Unusual retrieval patterns across Microsoft 365 tenants may indicate active XPIA exploitation attempts.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link