Browser agents don’t always respect your privacy choices

Browser agents promise to handle online tasks without constant user input. They can shop, book reservations, and manage accounts by driving a web browser through an AI model. A new academic study warns that this convenience comes with privacy risks that security teams should not ignore.

The report evaluates eight popular browser agents released or updated in 2025. These include ChatGPT Agent, Google Project Mariner, Amazon Nova Act, Perplexity Comet, Browserbase Director, Browser Use, Claude Computer Use, and Claude for Chrome.

The study examined user risk across five areas: agent architecture, handling of unsafe sites, cross-site tracking, responses to privacy dialogs, and disclosure of personal data to websites. In total, it identified 30 vulnerabilities, with at least one issue in every product tested.

Component and architecture flaws

The researchers measured privacy risks tied to the component systems each browser agent depends on (the web browser and the language model) and how those parts are assembled, identifying eight vulnerabilities in total. A key issue is where the language model runs. Seven of the eight agents use off-device models, meaning detailed information about the user’s browser state and each visited webpage is sent to servers controlled by the service provider.

When the model runs on remote servers, users lose control over how search queries and sensitive webpage content are processed and stored. While some providers describe limits on data use, users must rely on service provider policies.

Browser version age is another factor. Browsers release frequent updates to patch security flaws. One agent was found running a browser that was 16 major versions out of date at the time of testing. This software carried known vulnerabilities that could be exploited by a malicious website. The finding highlights the risks that arise when browser updates are not tightly managed in agent deployments.
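As a rough illustration of the kind of check an agent deployment could run, the sketch below compares the bundled browser's major version against the current stable release. The version strings, function names, and the two-version threshold are illustrative assumptions, not values from the study.

```python
# Sketch: flag an agent's bundled browser when it lags too far behind
# the current stable release. All names and numbers here are assumed.

def major_version(version: str) -> int:
    """Extract the major version from a dotted string like '118.0.5993.70'."""
    return int(version.split(".")[0])

def is_stale(bundled: str, current_stable: str, max_lag: int = 2) -> bool:
    """True if the bundled browser is more than max_lag major versions behind."""
    return major_version(current_stable) - major_version(bundled) > max_lag

# A browser 16 major versions behind, as in the study's finding:
print(is_stale("122.0.6261.94", "138.0.7204.49"))  # True
```

A check like this could run at agent startup and refuse to operate, or warn loudly, when the gap exceeds the threshold.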

Inadequate website protections

Web browsers protect users by showing warnings for insecure or malicious sites. Browser agents often bypass these checks, resulting in eight vulnerabilities. The most widespread issue is the absence of warnings for sites flagged by safe browsing lists as phishing or malware.

Six of the eight browser agents did not show any safe browsing warnings when directed to a known phishing test page. In these cases, the agent either proceeded without surfacing any alert or failed to indicate that the site was considered dangerous.

Without these warnings, agents may treat malicious sites as trustworthy and continue interacting with them as part of a task. This increases the risk that an agent will prompt a user to enter sensitive information, such as login credentials, on a phishing site.
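One way an agent could surface such warnings is to consult a blocklist before navigating and pause for user review on a match, rather than proceeding silently. The sketch below assumes a locally cached set of flagged hostnames; the host names and the warning flow are hypothetical, not the study's test setup.

```python
# Sketch: check a navigation target against a safe-browsing-style
# blocklist before the agent loads it. Sample data is made up.
from urllib.parse import urlparse

KNOWN_BAD_HOSTS = {"phishing-test.example", "malware.example"}  # assumed cache

def check_before_navigation(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in KNOWN_BAD_HOSTS:
        return f"WARN: {host} is flagged; pausing for user review"
    return "OK"

print(check_before_navigation("https://phishing-test.example/login"))
```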

Agents also showed weaknesses in TLS certificate handling. Two agents did not show warnings for revoked certificates. One agent also failed to warn users about expired and self-signed certificates. Trusting connections with invalid certificates leaves agents open to machine-in-the-middle attacks that allow attackers to read or alter submitted information.
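The expiry half of that validation can be sketched with Python's standard library: the dictionary below mirrors the shape `ssl.SSLSocket.getpeercert()` returns, and `ssl.cert_time_to_seconds` parses the certificate's `notAfter` field. The sample dates are made up for illustration.

```python
# Sketch: refuse connections whose certificate has expired, instead of
# clicking through the browser warning. Sample cert data is invented.
import ssl
import time

def cert_is_expired(cert: dict, now=None) -> bool:
    """True when the certificate's notAfter date is in the past."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (now if now is not None else time.time()) > expires

expired_cert = {"notAfter": "Jun  1 12:00:00 2020 GMT"}
print(cert_is_expired(expired_cert))  # True
```

Revocation checking (OCSP/CRL) is more involved and is omitted here; the point is that an agent can fail closed on invalid certificates rather than inheriting a click-through.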

In one case, an agent proceeded to a site after the model clicked through a browser warning for a self-signed certificate. All agents either upgraded insecure HTTP connections to HTTPS or blocked them, and enforced policies against loading insecure subresources on secure pages.
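The upgrade-or-block behavior the study observed amounts to a URL rewrite applied before navigation, which can be sketched as:

```python
# Sketch: upgrade plain-HTTP navigation targets to HTTPS before loading.
# A real implementation would also handle the block-on-failure case.
from urllib.parse import urlparse, urlunparse

def upgrade_to_https(url: str) -> str:
    parts = urlparse(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunparse(parts)

print(upgrade_to_https("http://shop.example/cart"))  # https://shop.example/cart
```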

Cross-site tracking failures

Browser agents weaken defenses against cross-site tracking, with three vulnerabilities identified in this area. Tracking allows companies to correlate activity across different sites to build user profiles. One common defense is storage partitioning, which isolates third-party data such as cookies. Two agents only partitioned a limited subset of stored state, leaving their users more exposed to cross-site tracking than if they had used a standard browser with default settings.
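The idea behind storage partitioning is that third-party storage is keyed by the top-level site as well as the embedded origin, so a tracker embedded on two different sites sees two separate cookie jars. The toy in-memory model below illustrates the principle; it is not how any real browser store is implemented.

```python
# Sketch: partitioned third-party storage. Keys combine the top-level
# site with the embedded origin, so state cannot cross site boundaries.
from collections import defaultdict

class PartitionedStore:
    def __init__(self):
        self._jars = defaultdict(dict)  # (top_level_site, origin) -> cookies

    def set(self, top_level_site, origin, name, value):
        self._jars[(top_level_site, origin)][name] = value

    def get(self, top_level_site, origin, name):
        return self._jars[(top_level_site, origin)].get(name)

store = PartitionedStore()
store.set("news.example", "tracker.example", "id", "abc123")
# The same tracker embedded on a different top-level site gets an empty jar:
print(store.get("shop.example", "tracker.example", "id"))  # None
```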

Another issue involves long-term profile state. Four agents saved some form of profile data by default. One of these agents saved profile state without informing users and offered no option to delete it, preventing users from clearing identifying data that can be reused for tracking. The report also noted one tool that reduced exposure by integrating a content filtering library that blocks analytics and error monitoring resources.

Automated responses to privacy prompts

Agents automatically decide how to respond to common privacy prompts, such as cookie consent banners. This behavior led to five vulnerabilities. In cookie consent tests, four of the eight agents selected the accept all option in at least one scenario. This occurred even when a deny all option was present and equally accessible.

One agent accepted all cookies because it ran an extension designed to suppress cookie banners by approving them. Another accepted all cookies when the banner blocked page content, prioritizing task completion over user privacy. This exposed users to tracking simply to complete the assigned task. One agent avoided the issue by blocking cookie banners at the network level, resulting in no preference being set. Another agent asked the user how to respond, leaving the decision with the person.
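A privacy-preserving policy for consent banners is to prefer a deny option when one exists and otherwise hand the decision back to the user, rather than accepting whichever button unblocks the page. The button labels below are illustrative assumptions, not labels from the study's test pages.

```python
# Sketch: pick the most privacy-preserving consent button available,
# falling back to asking the user. Label lists are assumed samples.
DENY_LABELS = ("reject all", "deny all", "decline", "necessary only")

def choose_consent_button(button_labels):
    labels = [b.strip().lower() for b in button_labels]
    for preferred in DENY_LABELS:
        if preferred in labels:
            return preferred
    # No deny option found: escalate instead of accepting by default.
    return "ask_user"

print(choose_consent_button(["Accept all", "Reject all"]))  # reject all
```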

For site permission requests such as notifications, one agent automatically granted permission. Other agents generally ignored permission prompts when they could complete the task without responding. In other cases, agents either denied requests by default or inherited browser behavior that grants certain permissions, such as storage access, without user input.
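A safer default for permission prompts is deny-by-default, granting only permissions the user has approved in advance. The sketch below uses permission names in the style of the web Permissions API; the allowlist and the policy itself are assumptions, not behavior from any tested agent.

```python
# Sketch: default-deny handler for site permission prompts.
# The pre-approved set is an assumed prior user choice.
USER_APPROVED = {"clipboard-read"}

def handle_permission_request(permission: str) -> str:
    return "grant" if permission in USER_APPROVED else "deny"

print(handle_permission_request("notifications"))  # deny
```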

Leaking personal information

Agent decision logic sometimes favored task completion over protecting user information, leading to personal data disclosure. This resulted in six vulnerabilities. Researchers supplied agents with a fictitious identity and observed whether that information was shared with websites under different conditions.

Three agents disclosed personal information during passive tests, where the requested data was not required to complete the task. Another three shared information during active tests, where websites withheld content until details were submitted. In several cases, agents reused information stored in prior chat conversations, personalization settings, connected services, or browser profile data.

The information shared went beyond basic contact details. Some agents disclosed email addresses, ZIP codes, login credentials, and demographic information such as age, gender, sexual orientation, and race. In one instance, an agent attempted to submit a credit card number. Another agent inferred the user’s ZIP code through IP-based geolocation and shared it to unlock pricing information.

When agents did not disclose personal data, they either used placeholder values or reported that the requested information was unavailable, even when this prevented task completion.
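The placeholder approach can be sketched as a form-filling policy that supplies real data only for fields the task actually requires. The profile contents, field names, and required-field set below are all invented for illustration.

```python
# Sketch: fill only required fields with real data; optional fields get
# a placeholder so personal information is never disclosed needlessly.
PROFILE = {"email": "user@real.example", "zip": "94110", "age": "34"}  # fictitious

def fill_form(fields, required):
    out = {}
    for field in fields:
        if field in required:
            out[field] = PROFILE.get(field, "")
        else:
            out[field] = "unavailable"  # placeholder instead of real data
    return out

print(fill_form(["email", "zip", "age"], required={"email"}))
```

The trade-off, noted in the study, is that withholding data can prevent task completion; an agent could surface that conflict to the user instead of resolving it silently.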

Improving privacy in browser agents

Browser agent developers can reduce privacy risk by working with browser privacy experts and by running existing test tools on a regular basis. Browser agents combine several complex systems, and small design or code changes can affect privacy in ways that are easy to miss.

Privacy specialists can help teams understand how automation choices interact with browser protections and avoid disabling features or adding shortcuts that weaken them. Browser security relies on system design, including process isolation, storage and network partitioning, and controlled communication between components.

Developers are also encouraged to use automated privacy and security test suites. These tools capture hard won knowledge and check edge cases. Several open test suites already exist, and the researchers plan to release their own test sites and datasets to support repeatable privacy testing.
