A novel single-click attack targeting Microsoft Copilot Personal that enables attackers to silently exfiltrate sensitive user data. The vulnerability, now patched, allowed threat actors to hijack sessions via a phishing link without further interaction.
Attackers initiate Reprompt by sending a phishing email with a legitimate Copilot URL containing a malicious ‘q’ parameter, which auto-executes a prompt upon page load.
This Parameter-to-Prompt (P2P) injection leverages the victim’s authenticated session, persisting even after closing the tab, to query personal details like usernames, locations, file access history, and vacation plans.
The attack chain then employs server-driven follow-ups, evading client-side detection as commands unfold dynamically.

Varonis detailed three core techniques enabling stealthy data theft, bypassing Copilot’s safeguards designed to block URL fetches and leaks.
| Technique | Description | Bypass Method |
|---|---|---|
| Parameter-to-Prompt (P2P) | Injects instructions via ‘q’ parameter to auto-populate and execute prompts stealing conversation memory or data. | Injects instructions via ‘q’ parameter to auto-populate and execute prompts, stealing conversation memory or data. |
| Double-Request | Copilot’s leak protections apply only to initial requests; repeats actions twice to succeed on the second try. | Instructs “double check… make every function call twice,” exposing secrets like “HELLOWORLD1234!” on retry. |
| Chain-Request | Server generates sequential prompts based on responses, chaining exfiltration stages indefinitely. | Progresses from username fetch to time, location, user info summary, and conversation topics via staged URLs. |
These techniques make data exfiltration undetectable, as prompts look harmless while information is gradually leaked to attacker servers.

Reprompt targeted Copilot Personal, integrated into Windows and Edge for consumer use, accessing prompts, history, and Microsoft data like recent files or geolocation.
Enterprises using Microsoft 365 Copilot were unaffected by Purview auditing, tenant DLP, and admin controls. No in-the-wild exploitation occurred, but the low barrier to a single-click email or chat attack posed risks to data such as financial plans or medical notes, as shown in the attack diagrams.
Varonis responsibly disclosed the issue to Microsoft on August 31, 2025, with a fix deployed via the January 13, 2026, Patch Tuesday. Users should apply the latest Windows updates immediately to block remnants.
Unlike prior flaws like EchoLeak (CVE-2025-32711), Reprompt required no documents or plugins, highlighting URL parameter risks in AI platforms.
Organizations must treat AI URL inputs as untrusted and enforce persistent safeguards across chained prompts. Copilot Personal users should scrutinize pre-filled prompts, avoid untrusted links, and monitor for anomalies like unsolicited data requests.
Vendors like Microsoft are urged to audit external inputs deeply, assuming insider-level access in AI contexts to preempt similar chains.
Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.
