GitHub Copilot Flaw Allows Attackers to Steal Source Code from Private Repositories


A critical weakness in GitHub Copilot Chat discovered in June 2025 exposed private source code and secrets to attackers.

Rated CVSS 9.6, the vulnerability combined a novel Content Security Policy bypass with remote prompt injection.

By embedding hidden prompts in pull requests, attackers could exfiltrate private repository data and control Copilot’s responses, including injecting malicious code suggestions or links.

Background of the Flaw

GitHub Copilot Chat is an AI assistant integrated directly into GitHub’s interface. It helps developers by answering questions, explaining code, and suggesting implementations based on the project context.

Because Copilot Chat accesses repository contents, it must handle sensitive data securely. The richer the context, the greater the potential attack surface.

In this instance, attackers leveraged Copilot’s context awareness to slip instructions into an otherwise innocuous pull request description.

Invisible comments a documented GitHub feature allowed prompts to remain hidden from human readers while still influencing Copilot.

When any user viewed the pull request, the hidden prompt loaded into their Copilot context.

Exploiting the Flaw

The attack chain began by placing a hidden comment containing a malicious prompt into a pull request description, as reported by Legit Security.

This comment did not appear in the visible diff or web interface, but GitHub still processed it. Once processed, Copilot loaded the hidden instructions and executed them with the permissions of the viewing user.

prompt injection
prompt injection

Initial tests used a simple “HOORAY” message, but the researcher soon escalated to full command sets: searching private code for secrets like AWS_KEY, encoding repository files in base16, and embedding them into image URLs for exfiltration.

By default, GitHub’s Content Security Policy blocks external image loads. However, when rendering Markdown, GitHub rewrites third-party image URLs to pass through its Camo proxy, appending an HMAC signature to each URL.

 prompt was injected into context  prompt was injected into context 
 prompt was injected into context 

The researcher pre-generated a dictionary of Camo URLs for every alphabet character and symbol, embedding it into the malicious prompt.

Copilot then reconstructed full exfiltration URLs at runtime by assembling image tags from this dictionary, complete with random cache-busting parameters.

When a victim’s browser requested those images, the Camo proxy forwarded the requests to the attacker’s server, delivering private code in transparent 1×1 pixels.

This inventive use of the Camo proxy bypass allowed silent, continuous extraction of repository content without alerting the user.

GitHub’s ResponseGitHub’s Response
GitHub’s Response

The proof-of-concept demonstrated full codebase theft, secret harvesting, and delivery of malicious “Copilotevil” packages to unwitting developers.

After the vulnerability report, GitHub disabled image rendering entirely in Copilot Chat to stop the Camo bypass.

A patch rolled out by August 14, 2025, removed the ability to process Markdown image tags in chat responses.

This fix closed the CSP bypass and remote prompt injection vector, restoring the confidentiality of private repository contents.

Developers are urged to update their Copilot Chat integrations and review pull requests for unusual hidden content. Continuous vigilance is necessary to safeguard AI-assisted workflows against emerging attack techniques.

Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.



Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.