16 Fake ChatGPT Extensions Caught Hijacking User Accounts

A group of 16 malicious browser extensions has been caught masquerading as ChatGPT productivity aids in an attempt to hijack user accounts. Discovered by the research firm LayerX Security, these add-ons don’t try to break into ChatGPT itself but wait for a user to log in and then snatch their digital credentials.

The campaign involves at least 16 distinct extensions, all developed by the same threat actor. Publishing so many near-identical variants appears to be a calculated move to widen reach and keep the campaign alive: if one version is flagged and removed, the others can still find their way onto users’ computers.

The campaign has seen roughly 900 downloads so far, a figure LayerX researchers call a “drop in the bucket” compared to massive scams like GhostPoster, but the real danger lies in how much trust users place in these tools.

One version even managed to carry a “featured” badge on the Chrome Web Store, lending it an air of authority. LayerX decided to go public now because these “GPT optimisers” are becoming as common as VPNs, and the firm wanted to stop the threat before it reached “critical mass.” These findings were shared exclusively with Hackread.com.

Getting A Digital Key to Your Private Life

The scam relies on stealing what are known as session tokens. Think of these as temporary digital keys that tell a website you are already logged in. By grabbing these keys, attackers can “impersonate them, allowing them to access all of the user’s ChatGPT conversations, data, or code,” explained LayerX security researcher and blog post author Natalie Zargarov.
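
To make the mechanics concrete, here is a minimal sketch in Python, using a made-up endpoint, cookie name, and token value rather than anything from LayerX’s report: any client that presents a valid session token is treated as the already-logged-in user, which is why stealing the token is enough to take over a session without ever touching the password or triggering a fresh login.

```python
# Illustrative sketch only: hypothetical endpoint, cookie name, and token value.
# It shows why a stolen session token is as good as a password while it stays valid:
# the server only checks the token, not who is sending it.
import requests

SESSION_TOKEN = "eyJhbGciOi...example-token"  # normally stored in a browser cookie

response = requests.get(
    "https://api.example-ai-service.com/v1/conversations",  # hypothetical endpoint
    headers={
        # Presenting the token is all the "authentication" the server sees.
        "Cookie": f"__session={SESSION_TOKEN}",
    },
    timeout=10,
)

# From the server's point of view, this request is indistinguishable from the
# victim's own browser, which is why token theft bypasses the login step entirely.
print(response.status_code)
```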

The reach of this theft is surprisingly deep. Because many people connect their AI tools to work platforms, the hackers could see into private Slack channels, GitHub code repositories, and Google Drive files.

Moreover, the investigation revealed that the software runs in a high-privilege part of the browser, which means it can “observe or manipulate” data that traditional security software often misses.

Similarities identified in the extensions (Source: LayerX Security)

Campaign Appears to Be a Coordinated Attack

LayerX researchers believe this wasn’t a series of accidents but a single, organised effort. As they dug deeper, they found that 15 of the tools were hosted on the Chrome Web Store, while one was listed on the Microsoft Edge Add-ons store. All of them share the same messy code and communicate with specific attacker-controlled domains, including chatgptmods.com and Imagents.top.

Another troubling part is that these extensions were mostly uploaded in batches on the same day and used nearly identical icons and descriptions to look legitimate.

“Most extensions in the campaign show relatively low individual installation counts, with only a small subset reaching higher adoption. We hope at LayerX that with this publication, the campaign is stopped at an early stage with minimal impact,” the report concludes.

To stay safe, you must treat any AI-linked extension as a high-risk application. If you have any “helper” tools for ChatGPT installed that you don’t recognise, the best move is to delete them.
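
As a practical starting point, the hedged sketch below (assuming a default Chrome profile location on Windows; the path differs on macOS and Linux, and the permission list is illustrative rather than taken from LayerX’s indicators) enumerates locally installed extensions and flags any that request broad permissions such as access to all sites or cookies, making unfamiliar “helper” add-ons easier to spot and remove.

```python
# Hedged sketch: list locally installed Chrome extensions and flag broad permissions.
# Assumes the default Windows profile path; adjust CHROME_EXT_DIR for your OS/profile.
import json
from pathlib import Path

CHROME_EXT_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

# Illustrative set of permissions worth a second look, not an official indicator list.
BROAD_PERMISSIONS = {"<all_urls>", "cookies", "webRequest", "tabs", "history"}

if CHROME_EXT_DIR.exists():
    for ext_dir in CHROME_EXT_DIR.iterdir():
        # Each extension folder contains one subfolder per installed version.
        for manifest_path in ext_dir.glob("*/manifest.json"):
            manifest = json.loads(manifest_path.read_text(encoding="utf-8-sig"))
            # Names are often "__MSG_..." placeholders; the folder name is the stable extension ID.
            name = manifest.get("name", "unknown")
            perms = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
            flagged = perms & BROAD_PERMISSIONS
            if flagged:
                print(f"{ext_dir.name}: {name} requests {sorted(flagged)}")
```

Anything flagged that you don’t remember installing, or that claims to be a ChatGPT add-on you can’t place, is a good candidate for removal via the browser’s extensions page.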
