Exclusive: OpenAI’s Atlas browser — and others — can be tricked by manipulated web content

As AI browser agents enter the market promising to help people shop, hire employees or assist with other online tasks, security researchers are warning that the information these programs collect from the internet can be manipulated and corrupted without anyone ever realizing it.

In new research shared exclusively with CyberScoop, AI cybersecurity firm SPLX highlighted vulnerabilities in ChatGPT Atlas, OpenAI’s newly released browser agent, as well as ChatGPT and Perplexity AI. Based on a simple check of the user-agent header, a website could serve clandestine, cloaked information to the underlying LLM that influences its behavior and decision-making.

SPLX AI engineer Ivan Vlahov told CyberScoop that his team built a website capable of displaying different content depending on the visitor. To a human user, the site looked like a standard professional biography for a product designer.

How the website of Zerphina Quortane, a fictional designer created by SPLX researchers, looks when a human visitor is detected. (Source: SPLX)

However, if the website detected an AI crawler, such as those used by Atlas, ChatGPT or Perplexity, a separate server would deliver an alternate, hidden version filled with negative commentary about the designer.

“It’s very easy to serve different content based on the header,” Vlahov said.
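
The mechanics are simple enough to sketch in a few lines. The example below is not SPLX’s actual test site; the crawler name strings, HTML and port are illustrative assumptions. It simply shows how a server can branch on the User-Agent header, returning one page to human visitors and another to anything that identifies itself as an AI crawler:

# Minimal sketch of user-agent-based cloaking as the researchers describe it.
# The crawler markers, HTML and port below are assumptions for illustration,
# not details taken from SPLX's test site.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Substrings AI crawlers commonly send in the User-Agent header (assumed list)
AI_CRAWLER_MARKERS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot")

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        if any(marker in user_agent for marker in AI_CRAWLER_MARKERS):
            # Hidden version served only to AI agents
            body = b"<html><body><p>Alternate content shown only to AI crawlers.</p></body></html>"
        else:
            # Normal biography served to human visitors
            body = b"<html><body><p>Standard professional biography.</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CloakingHandler).serve_forever()

Because the cloaked version never appears in an ordinary browser, a human checking the same URL would see nothing out of place.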

A malicious, cloaked version of Zerphina Quortane’s website, served only to AI crawlers. (Source: SPLX) 

Malicious hackers could use the technique to launch smear campaigns about individuals or organizations, knowing that browser agents searching for those same names or terms would find the manipulated information. Meanwhile, scammers could potentially show agents fake promotions or discounts, displaying a completely different set of prices or products to the agent than what appears to actual visitors on a legitimate website.

Vlahov noted that scammers often used similar SEO-like schemes with online ads before Google began blocking ads that relied on such manipulation. However, he claimed OpenAI’s terms of service do not appear to substantively address the problem.

“There’s no explicit terms of service for OpenAI and ChatGPT; [they] don’t specifically disallow this behavior from websites,” he said. “Google for example … they’ll block your page and it won’t appear anymore on their search results. The first step for OpenAI would be to start implementing some verification methods and actually banning bad actors.”

In another test, the team explored how such weaknesses could be used to manipulate online job recruiting. They generated a fictional job posting with specific evaluation criteria, as well as online resumes and profiles for five job candidates, each hosted on a separate page. All resumes “looked realistic and well structured, complete with plausible work histories and skill descriptions.”

One of the fake candidates, “Natalie Carter,” had the weakest qualifications according to her human-viewable website and was given the lowest score by the AI models. However, if Natalie’s webpage detected the presence of an AI crawler, it served a different page with inflated credentials, titles and accomplishments.

As a result, Natalie sailed through the AI screening process, receiving the highest score and beating the next ranked candidate by 10 points.

Vlahov noted that one of the ironies of this flaw is that if a user does somehow notice the LLM is processing different information on the backend, they might simply assume it’s an example of hallucination, another well-known challenge with AI models.

“Even if the chatbot says something bad about a person or provides some hurtful or bad content that shouldn’t be there — even when the user clicks the link — it still opens the normal website where everything is fine,” he said. “It feels like a hallucination.”

OpenAI did not immediately respond to questions about the research Tuesday. Vlahov and SPLX officials told CyberScoop that OpenAI has not responded to their previous attempts to contact the company about similar research.

Other researchers and tech executives highlighted additional concerns with Atlas.

LayerX said this week it had discovered a way to piggyback off ChatGPT’s authentication protocols to inject hidden instructions into the LLM’s memory, even allowing for remote code execution in some instances.

“The vulnerability affects ChatGPT users on any browser, but it is particularly dangerous for users of OpenAI’s new agentic browser: ChatGPT Atlas,” Or Eshed, the security firm’s cofounder and CEO, wrote Monday. “LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge.”

There are caveats worth noting: the user must already be logged in to ChatGPT and hold a valid authentication cookie or token in their browser. The user must also first click a malicious link for the exploit to work.

An illustration of a memory corruption attack impacting ChatGPT and Atlas, discovered by researchers at LayerX. (Source: LayerX) 

Business users reported some hiccups as well. Pete Johnson, chief technology officer for MongoDB, said that after installing the Atlas browser he became “curious” about how it stored cached data. He quickly discovered a number of concerns.

He wrote that while “it is standard practice on the Mac for a browser to store oAuth tokens in a SQLite database, what I discovered that apparently isn’t standard practice is that by default the ChatGPT Atlas install has 644 permissions on that file (making it accessible to any process on your system).” Additionally, unlike other standard browsers, Atlas “isn’t using keychain to encrypt the oAuth tokens within that SQLite database (which means that those tokens are queryable and then usable).”
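
For anyone wanting to reproduce that kind of check, a minimal sketch follows. The file path is a placeholder assumption rather than the actual Atlas storage location; the logic just reads the POSIX permission bits Johnson describes:

# Inspect the permission bits on a browser's token database.
# The path below is a hypothetical placeholder, not the real Atlas file.
import os
import stat

db_path = os.path.expanduser("~/Library/Application Support/ExampleBrowser/tokens.sqlite")

mode = os.stat(db_path).st_mode
print(f"permissions: {stat.filemode(mode)} ({oct(mode & 0o777)})")  # 644 prints as 0o644

# S_IROTH means "readable by others": any local process, under any user, can open the file.
if mode & stat.S_IROTH:
    print("world-readable: any process on the machine can read this file")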

Johnson, who described himself as “hardly a security expert,” said his script was later validated by a MongoDB security specialist. He said some other users have since reported the same flaw, while others found that their OAuth tokens were encrypted.

At the same time security researchers are poking around AI browser agents, global standards organizations are tracking worrying signals that many U.S. businesses are “sleepwalking” into an AI governance crisis.

New research set to be released this week by the British Standards Institution, the United Kingdom’s national standards body, analyzed 100 multinational annual reports and surveyed 850 senior business leaders around the world. 

The results, shared exclusively in advance with CyberScoop, found that U.S. businesses are lagging behind the world in planning for the safe and responsible use of AI, even as the nation’s government and business leaders have seemingly gone all in on the technology from an investment and adoption perspective.

For example, just 17.5% of U.S. business leaders reported having an AI governance program in place, compared to 24% worldwide. Meanwhile, just 1 in 4 U.S. businesses restrict their employees from using unauthorized AI tools, and only a quarter of business leaders said they were aware of the data their AI tools were trained on.

Their findings concluded that U.S. “businesses showcase … a striking absence of guardrails to prevent harmful or irresponsible use of AI due to companies’ ambition in taking part in the AI gold rush,” according to a press release viewed by CyberScoop.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to joining CyberScoop, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications dating back to 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
