GBHackers

Hackers Exploit GitHub Copilot Flaw to Exfiltrate Sensitive Data


A high-severity flaw in GitHub Copilot Chat recently allowed attackers to silently steal sensitive data like API keys and private source code.

Tracked as CVE-2025-59145 with a critical CVSS score of 9.6, this vulnerability required no malicious code execution. Instead, hackers used a clever prompt injection technique known as “CamoLeak.”

A security researcher publicly disclosed the flaw in October 2025, two months after GitHub patched it by disabling image rendering in Copilot Chat. While the immediate threat is resolved, the attack exposes a dangerous blind spot in AI security.

How the Attack Unfolded

Copilot Chat relies heavily on context to assist developers. When a user asks the AI to review a pull request, it reads the provided description alongside any private repositories the developer can access. CamoLeak weaponized this deep access through a silent, four-step process:

  1. An attacker submitted a malicious pull request, embedding hidden instructions inside invisible markdown comments that human reviewers would never notice.
  2. A targeted developer opened the pull request and asked Copilot to review or summarize the proposed changes.
  3. Copilot ingested the invisible text and followed the attacker’s hidden prompt, searching the victim’s private codebase for valuable secrets.
  4. The AI encoded the discovered data and embedded it into a sequence of image web addresses. As the developer’s browser rendered the chat response, it silently transmitted the stolen secrets to the attacker.

Direct data theft usually fails because GitHub enforces a strict Content Security Policy that blocks images from loading via untrusted external hosts.

Attackers bypassed this defense by routing their theft entirely through GitHub’s own trusted image proxy, known as Camo.

Before launching the attack, hackers built a dictionary of pre-approved, signed Camo web addresses. Each unique address represented a single character of stolen data and pointed to an invisible pixel on an external server.

Because all network requests traveled through official GitHub infrastructure, traditional egress controls and network monitors saw nothing but normal image-loading activity.

This stealthy method was perfectly optimized for stealing short, high-value targets like cloud administration credentials.

CamoLeak may be a GitHub-specific exploit, but the underlying threat applies to any AI assistant that interacts with sensitive data.

Whenever an AI tool processes untrusted content, it creates a potential exfiltration pathway. This includes Microsoft Copilot scanning enterprise emails or Google Gemini summarizing shared workspace documents.

While the specific Camo proxy bypass will not work everywhere, the fundamental attack structure remains highly effective.

Attackers just need to inject hidden instructions into a file the AI will read, force the assistant to grab sensitive data, and extract it through a channel the platform already trusts.

As AI tools gain more access to internal corporate networks, security teams must urgently update their threat models to account for these AI-mediated data breaches.

Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.



Source link