Critical GitHub Copilot Vulnerability Let Attackers Exfiltrate Source Code From Private Repos


A critical vulnerability in GitHub Copilot Chat, rated 9.6 on the CVSS scale, could have allowed attackers to exfiltrate source code and secrets from private repositories silently.

The exploit combined a novel prompt injection technique with a clever bypass of GitHub’s Content Security Policy (CSP), granting the attacker significant control over a victim’s Copilot instance, including the ability to suggest malicious code or links. The vulnerability was reported responsibly via HackerOne, and GitHub has since patched the issue.

GitHub Copilot Vulnerability

The attack began by exploiting GitHub Copilot’s context-aware nature. The AI assistant is designed to use information from a repository, such as code and pull requests, to provide relevant answers.

Legit Security researchers found that they could embed a malicious prompt directly into a pull request description using GitHub’s “invisible comments” feature.

While the comment itself is hidden from view in the user interface, Copilot would still process its contents. This meant an attacker could create a pull request containing a hidden malicious prompt, and any developer who later used Copilot to analyze that pull request would have their session compromised.

Because Copilot operates with the permissions of the user making the request, the injected prompt could command the AI to access and manipulate data from the victim’s private repositories.

google

Bypassing Security With A URL Dictionary

A major hurdle for the attacker was GitHub’s strict Content Security Policy (CSP), which prevents the AI from leaking data to external domains.

GitHub uses a proxy service called Camo to securely render images from third-party sites. Camo rewrites external image URLs into signed camo.githubusercontent.com links, and only URLs with a valid signature generated by GitHub are processed.

This prevents attackers from simply injecting an tag to send data to their own server. To circumvent this, the researcher devised an ingenious method.

They pre-generated a dictionary of valid Camo URLs for every letter and symbol. Each URL pointed to a 1×1 transparent pixel on a server they controlled, according to a legit Security report.

The final injected prompt instructed Copilot to find sensitive information in a victim’s private repository, such as an AWS key or a zero-day vulnerability description.

It would then “draw” this information as a sequence of invisible images using the pre-generated Camo URL dictionary.

When the victim’s browser rendered these images, it sent a series of requests to the attacker’s server, effectively leaking the sensitive data one character at a time.

The proof-of-concept demonstrated the successful exfiltration of code from a private repository. In response to the disclosure, GitHub remediated the vulnerability on August 14, 2025, by completely disabling all image rendering within the Copilot Chat feature, neutralizing the attack vector.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.