The convenience of AI chatbots has come with a hidden cost for nearly a million Chrome users. On December 29, 2025, cyber threat defence experts at OX Security revealed that two popular browser extensions were secretly recording private conversations and sending them to outside servers.
This discovery is part of a disturbing new trend that researchers at Secure Annex have named Prompt Poaching, where attackers specifically target the sensitive questions and proprietary data we feed into tools like ChatGPT.
Malicious Chrome Extensions
The two tools at the centre of OX Security’s investigation are “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” (600,000 installs) and “AI Sidebar with Deepseek, ChatGPT, Claude and more” (300,000 installs).
Researchers explained in the blog post that these extensions weren’t just random apps; they were designed to look exactly like a legitimate tool called AITOPIA. The imitation was convincing enough that one of the fakes even earned a Featured badge from Google, making it look safe to the average user.
How the Information is Stolen
The theft begins the moment a user installs these sidebars. The extensions first request permission to collect “anonymous, non-identifiable analytics,” but the second a user clicks “Allow,” that promised anonymity vanishes.
To steal your data, the software uses a technique called DOM scraping, which essentially allows it to read the text directly off your screen. The malware listens for when you visit chatgpt.com or deepseek.com, assigns you a unique tracking ID called a “gptChatId,” and begins harvesting.
This isn’t just a minor leak; it includes everything from personal search history to secret company code and business strategies. Every 30 minutes, the software bundles up your prompts, the AI’s answers, and even your session tokens or authentication data, then sends them to servers like deepaichats.com or chatsaigpt.com.
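To make the mechanics concrete, here is a minimal sketch of how a DOM-scraping content script could harvest and batch chat text. This is illustrative only: the selector, queue shape, and class names are assumptions, not the actual extensions’ code, and the example models page elements as plain objects so it runs outside a browser.

```javascript
// Illustrative sketch only -- not the real malware's code.
// In a live content script, elements would come from the page itself, e.g.
//   document.querySelectorAll('[data-message-author-role]')  // assumed selector
// Here we model them as plain objects with a textContent field.
function harvestMessages(elements) {
  return elements.map((el) => el.textContent.trim());
}

// Batching: collect scraped text, then flush it on a timer. Per the report,
// the extensions flushed roughly every 30 minutes (30 * 60 * 1000 ms).
class ExfilQueue {
  constructor(chatId) {
    this.chatId = chatId; // analogous to the "gptChatId" tracking ID
    this.items = [];
  }
  add(messages) {
    this.items.push(...messages);
  }
  flush() {
    // A real script would POST this bundle to an attacker-controlled server;
    // here we simply return the payload instead of sending it anywhere.
    const payload = { chatId: this.chatId, prompts: this.items.slice() };
    this.items = []; // clear the queue after each flush
    return payload;
  }
}

// Example: scrape two fake "messages" and flush the batch.
const queue = new ExfilQueue('abc123');
queue.add(harvestMessages([
  { textContent: ' What is our Q3 strategy? ' },
  { textContent: 'Here is a summary of the plan...' },
]));
console.log(queue.flush().prompts.length); // 2
```

The point of the sketch is that nothing here requires an exploit: a content script with broad page access can read whatever the page renders, which is exactly why the “read and change” permission warning matters.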
Uninstalling one extension would sometimes cause the browser to redirect you to the other, and the developers used the platform Lovable.dev to host fake privacy policies that kept the operation looking legitimate.
While OX Security reported these threats to Google on December 29, both extensions remained live and downloadable as of January 7, 2026. If you have any AI sidebar installed, you should check your settings at chrome://extensions immediately.
Look for the specific IDs fnmihdojmnkclgjpcoonokmkhjpjechg or inhcgfpbfdjbjogdfjbclgolkmhnooop and remove them. Also be wary of any extension that requests full “read and change” access to all your websites, even one carrying a verified badge.
This incident shows how trust can be compromised when security checks fail to keep pace with the rapid evolution of AI tools. AI chats feel private, but anything sitting inside a browser can be watched, copied, and sent elsewhere without you noticing.
Until Chrome Web Store policing improves, the safest move is to keep extensions to a minimum, be suspicious of unnecessary permissions, and think twice before sharing sensitive work or personal details with any AI tool running in your browser.
(Photo by Solen Feyissa on Unsplash)
