Grammarly and QuillBot are among widely used Chrome extensions facing serious privacy questions

A new study shows that some of the most widely used AI-powered browser extensions pose a privacy risk: they collect large amounts of user data and require a high level of browser access.

The research was conducted by Incogni, which analyzed 442 AI-powered Google Chrome extensions for its 2026 privacy risk report. The study reviewed extensions across eight categories and assessed their permissions, declared data collection practices, and security risk scores.

High-impact access is common

Every extension in the study needed some level of permission to work. In many cases, that meant being able to read what was happening on websites, track activity inside browser tabs, or even inject scripts directly into pages. With that kind of access, these tools can see far more than users might realize, including email, internal dashboards, cloud applications, and collaboration platforms.

Of the extensions analyzed, 52% collected some form of user data. Many collected personally identifiable information, including personal communications, location data, or detailed website content. These practices appeared across both niche tools and widely used products.

The most common sensitive permission was scripting. This permission allows an extension to run code inside web pages, which can alter content or capture input. From a security perspective, scripting access creates long-term exposure when extensions change ownership, update code, or become compromised.
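
To illustrate why scripting is treated as sensitive, here is a minimal, hypothetical sketch of how a Manifest V3 extension holding that permission (plus host access or activeTab for the target tab) can run code inside a page. The function name and the choice to read form fields are assumptions made for illustration, not behavior attributed to any specific extension.

```typescript
// Hypothetical service worker code for an extension that declares the
// "scripting" permission. Assumes @types/chrome; names are illustrative.
async function inspectPage(tabId: number): Promise<void> {
  const results = await chrome.scripting.executeScript({
    target: { tabId },
    // This function is serialized and executed inside the web page itself,
    // so it sees whatever the user sees, including text typed into forms.
    func: () => {
      const typed = Array.from(
        document.querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea")
      ).map((field) => field.value);
      return { pageText: document.body.innerText.slice(0, 500), typed };
    },
  });
  // The page's content is now available to the extension's own code.
  console.log(results[0]?.result);
}
```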

Popular tools rank high for potential damage

Among extensions with millions of downloads, Grammarly and QuillBot ranked as the most potentially privacy-damaging. Both tools collected multiple categories of user data and required permissions that allowed deep interaction with browser content.

Incogni researchers note that Grammarly collected website content, personal communications, and user activity data. User activity can include interaction patterns such as keystrokes, scrolling behavior, and navigation events. QuillBot collected similar categories of content and communication data.

Both extensions required the scripting and activeTab permissions. The activeTab permission grants temporary access to the current browser tab, while scripting enables code injection into web pages. Security risk scoring showed a low likelihood of malicious use for both tools, but the study still ranked them high due to the breadth of access and the scale of their installed base.
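
The sketch below shows roughly how those two permissions combine in practice. It is generic Manifest V3 code written for illustration, not code taken from Grammarly or QuillBot.

```typescript
// Hypothetical service worker: a toolbar click grants the temporary
// activeTab permission for that one tab, and the scripting permission
// then allows code to be injected into it.
chrome.action.onClicked.addListener(async (tab) => {
  if (tab.id === undefined) return;

  const [injection] = await chrome.scripting.executeScript({
    target: { tabId: tab.id },  // access limited to the clicked tab
    func: () => document.title, // runs inside the page itself
  });

  // Once injected code runs in the page, the extension can read (or rewrite)
  // its content; activeTab merely limits which tab, and for how long.
  console.log("Title of the active tab:", injection?.result);
});
```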

“AI-powered extensions can be genuinely useful, but most users have very little visibility into how much access they’re granting when they install them,” said Darius Belejevas, head of Incogni. “Some of these tools can read everything you type, see every page you visit, or inject code directly into websites. That level of access deserves far more attention than it typically gets.”

Categories with the highest exposure

When extensions were grouped by function, programming and mathematical helpers ranked highest for average privacy risk. Tools in this category often requested broad permissions and interacted with sensitive environments such as code repositories, cloud notebooks, spreadsheets, and learning platforms. They required access to active tabs and scripting permissions to provide inline assistance or perform calculations in real time.

These extensions often handled content users treat as internal or confidential, including proprietary code, credentials embedded in scripts, and unpublished research. Even when declared data collection remained limited, the access required to operate increased overall exposure.

Meeting assistants and audio transcribers ranked close behind. These tools often operated during live meetings and recordings, where sensitive conversations, shared screens, and internal documents are common. Their permission sets reflected this role, with access tied to active tabs, audio streams, and meeting interfaces.

Extensions in this category frequently combined broad permissions with larger volumes of collected data. That combination raised their average risk scores, particularly when conversations or transcripts were processed outside the local device.

Writing assistants ranked next. These tools usually required access across websites so they could operate wherever users type. That access often included the ability to read and change page content on all URLs, which expanded exposure across email clients, collaboration tools, and internal portals.
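
The pattern below is a simplified, hypothetical content script of the kind such a tool might register with a manifest entry like "matches": ["<all_urls>"]. The match pattern and the trivial correction logic are assumptions used only to show how broad that access is, not the actual behavior of any named product.

```typescript
// Hypothetical content script injected on every page ("<all_urls>").
// It can read text as the user types it and rewrite page content in place.
function attachToEditableFields(): void {
  const fields = document.querySelectorAll<HTMLTextAreaElement>("textarea");
  fields.forEach((field) => {
    field.addEventListener("input", () => {
      const draft = field.value; // the extension sees the draft text
      // A real assistant might send this text elsewhere for analysis;
      // here it just normalizes whitespace to show the page can be changed.
      field.value = draft.replace(/[ \t]{2,}/g, " ");
    });
  });
}

attachToEditableFields();
```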

Personal assistants and general-purpose tools followed a similar pattern. They focused on automation and workflow support, which led to permission requests spanning tabs, browsing activity, and background processes. Even when data collection disclosures were limited, the access profile increased risk.

Translators combine access with low misuse signals

Translator extensions stood out for a different reason. Tools in this category scored high on potential impact because they required permissions that allowed them to read and change content across websites. These permissions enabled real-time translation and also provided broad visibility into browsing activity.

Risk likelihood scores for translators remained low across the group. This placed them in a category where extensive access exists without strong indicators of misuse. Google Translate, eJOY AI Dictionary, and Immersive Translate ranked highest due to permission depth or data collection practices.

Several translator extensions stated that they did not collect user data. Researchers noted that these claims rely on developer disclosures and cannot be verified without access to the source code.


