Cybersecurity firm LayerX has identified a critical vulnerability in OpenAI’s ChatGPT Atlas browser that allows malicious actors to inject harmful instructions into ChatGPT’s memory and execute remote code.
This security flaw poses significant risks to users across all browsers but presents particularly severe dangers for those using the new ChatGPT Atlas browser.
Cross-Site Request Forgery Exploits ChatGPT Access
The vulnerability leverages a Cross-Site Request Forgery (CSRF) attack to compromise ChatGPT users. Attackers can piggyback on victims’ ChatGPT authentication credentials to inject malicious instructions into the AI assistant’s memory feature.
When users subsequently interact with ChatGPT for legitimate purposes, these tainted memories trigger and can execute remote code, potentially granting attackers control over user accounts, browsers, code repositories, or connected systems.
The attack sequence begins when a user logged into ChatGPT clicks a malicious link leading to a compromised webpage.
This malicious page executes a CSRF request that exploits the user’s existing ChatGPT authentication. The exploit then secretly injects hidden instructions into ChatGPT’s memory, tainting the core language model memory without user knowledge.
During the next ChatGPT query, these tainted memories activate, enabling deployment of malicious code.
ChatGPT’s Memory feature, designed to remember user preferences, projects, and style notes across conversations, becomes a persistent attack vector in this scenario.
Once attackers inject malicious instructions into this memory through the CSRF request, ChatGPT effectively becomes an unwitting accomplice in executing harmful commands.
The infection persists across all devices where the account is used, including different computers and browsers, making it extremely difficult to remove and particularly dangerous for users who employ the same account for both work and personal activities.
While this exploit affects ChatGPT users regardless of their browser choice, Atlas users face exponentially higher risks.

LayerX testing revealed that Atlas users are by default logged into ChatGPT, meaning authentication credentials remain constantly available for CSRF attacks.
More concerning, LayerX tested Atlas against over one hundred real-world web vulnerabilities and phishing attacks, discovering that Atlas allowed ninety-seven of one hundred and three attacks to succeed a failure rate exceeding ninety-four percent.
Compared to traditional browsers like Edge, which stopped fifty-three percent of attacks, and Chrome, which blocked forty-seven percent, Atlas successfully stopped only six percent of malicious webpages.
This means Atlas users are approximately ninety percent more vulnerable to phishing attacks than users of established browsers.
The absence of meaningful anti-phishing protections in Atlas significantly increases user exposure to attack vectors that can lead to malicious instruction injection.
LayerX demonstrated a proof-of-concept attack targeting Atlas users engaged in “vibe coding,” where developers collaborate with AI as creative partners rather than line-by-line executors.
In this scenario, attackers inject instructions that cause ChatGPT to generate seemingly legitimate code containing hidden backdoors, data exfiltration mechanisms, or remote code execution capabilities.
The generated scripts might fetch malicious code from attacker-controlled servers and attempt execution with elevated privileges, all while appearing normal to unsuspecting users.
Although ChatGPT includes some defenses against malicious instructions, their effectiveness varies depending on attack sophistication and how unwanted behavior entered Memory.


Cleverly masked malicious code can evade detection entirely, with only subtle warnings that users might easily overlook.
LayerX reported this vulnerability to OpenAI under responsible disclosure procedures while withholding technical details that could enable attack replication.
Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.




