HackRead

OpenAI Codex Vulnerability Allowed Attackers to Steal GitHub Tokens


BeyondTrust Phantom Labs researchers have revealed a critical command injection vulnerability in OpenAI’s Codex. The flaw allowed attackers to steal sensitive GitHub OAuth tokens using hidden Unicode characters in branch names, potentially compromising entire enterprise environments.

A substantial security vulnerability has been identified in OpenAI’s Codex, a tool used by countless developers to assist in writing and reviewing code. The flaw could have allowed hackers to steal GitHub access tokens, the credentials that grant full control over an individual’s or a company’s private code repositories.

These findings come from researchers at BeyondTrust Phantom Labs, who found that a simple lack of input sanitization could turn a coding assistant into a potential doorway for data theft.

The Invisible Branch Trick

Tools like Codex need a token to access a programmer’s repositories. Phantom Labs researchers discovered the system failed to properly validate user input, allowing command injection through a GitHub branch name. Further probing revealed that attackers could hide malicious instructions using an Ideographic Space (U+3000), a special Unicode character that looks like an ordinary space to the human eye.

While a developer might think they are viewing a standard branch named “main”, a hidden command could be running in the background. “When user-controlled input is passed into these environments without strict validation, the result is not just a bug, it is a scalable attack path,” researchers noted in the blog post shared with Hackread.com. In testing, they successfully forced the system to reveal secret login tokens in plain text.
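The mechanics can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not Codex’s actual code; the payload command and the allow-list pattern are assumptions for demonstration.

```python
import re
import shlex

# U+3000 IDEOGRAPHIC SPACE renders like an ordinary space, so a branch
# name carrying a payload can look like plain "main " to a human reviewer.
branch = "main\u3000; cat auth.json"

# Naive interpolation passes the hidden command straight into a shell string.
unsafe_cmd = f"git checkout {branch}"

# Two standard defenses: an allow-list check on the branch name...
def is_safe_branch(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9._/-]+", name) is not None

# ...and shell-quoting, if the value must reach a shell at all.
safe_cmd = "git checkout " + shlex.quote(branch)

print(is_safe_branch("main"))   # True
print(is_safe_branch(branch))   # False
```

Note that the allow-list rejects the malicious name outright, while `shlex.quote` merely neutralizes it; strict validation is the stronger fix because the ideographic space has no business appearing in a branch name at all.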

Scaling the Attack

Researchers note this was not just a threat to individual users, as the flaw affected the ChatGPT website, Codex SDK, and various developer extensions. If a hacker changed a project’s default branch to a malicious version, anyone opening it would have their credentials exfiltrated.

Also worth noting is that the risk extended beyond the cloud. The team, led by Director of Research Fletcher Davis, found that Codex stores sensitive login data in a file called auth.json on a user’s local computer. If a hacker accessed a developer’s machine, they could lift these tokens to move through an entire organisation’s GitHub environment.
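A basic local mitigation for that second risk is to keep credential files readable only by their owner. The sketch below assumes a POSIX system; the helper name and the example path are illustrative, since the article names only the file auth.json.

```python
import os
import stat

def lock_down_token_file(path: str) -> int:
    """Tighten a credential file such as Codex's auth.json to
    owner-only read/write (0600) and return the resulting mode bits."""
    os.chmod(path, 0o600)
    return stat.S_IMODE(os.stat(path).st_mode)

# Example (hypothetical location of the token file):
# lock_down_token_file(os.path.expanduser("~/.codex/auth.json"))
```

File permissions do not stop an attacker who already runs code as the same user, but they do block other local accounts and over-broad backup or sync tooling from lifting the token.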

Codex attack path (Source: BeyondTrust)

A Rapid Response

Fortunately, the Phantom Labs team was quick to report the flaw to OpenAI on 16 December 2025. This led to an initial hotfix just a week later on 23 December. By 30 January 2026, OpenAI had implemented stronger protections for shell commands and limited the access these tokens have, eventually labelling the issue a “Critical Priority 1” vulnerability on 5 February.

OpenAI has since confirmed the fixes are complete, thanking the researchers for their partnership. Still, the incident is a reminder to treat AI tools with care: they are not just assistants but live environments with high-level access to our most sensitive data.




