AI browsers could leave users penniless: A prompt injection warning

AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

For example, when you tell your agentic browser, “Find the cheapest flight to Paris next month and book it,” the browser will do all the research, compare prices, fill out passenger details, and complete the booking without any extra steps or manual effort—provided it has all the necessary details of course, which are part of the prompts the user feeds the agentic browser.

Are you seeing the potential dangers of prompt injections here?

What if my agentic browser gets new details while visiting a website? I can imagine criminals setting up a website with extremely competitive pricing just to attract visitors, but the real goal is to extract the payment information which the agentic browser needs to make purchases on your behalf. You could end up paying for someone else’s vacation to France.

During their research, Brave found that Perplexity’s Comet has some vulnerabilities which “underline the security challenges faced by agentic AI implementations in browsers.”

The vulnerabilities allow an attack based on indirect prompt injection, which means the malicious instructions are embedded in external content (like a website, or a PDF) that the browser AI assistant processes as part of fulfilling the user’s request. There are various ways of hiding that malicious content from a casual inspection. Brave uses the example of white text on a white background which AI browsers have no problem reading and a human would not see without closer inspection.

To quote a user on X:

“You can literally get prompt injected and your bank account drained by doomscrolling on reddit”

To prevent this type of prompt injection, it is imperative that agentic browsers understand the difference between user-provided instructions and web content processed to fulfill the instructions and treat them accordingly.

Perplexity has attempted twice to fix the vulnerability reported by Brave, but it still hasn’t fully mitigated this kind of attack as of the time of this reporting.

Safe use of agentic browsers

While it’s always tempting to use the latest gadgets this comes with a certain amount of risk. To limit those risks when using agentic browsers you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agentic browser can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the browser automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects or unexpected input requests.
  • Keep software updated: Ensure the agentic browser and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to agentic browsers with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
  • Limit sensitive operations automation: Avoid fully automating high-stakes transactions or actions without manual review. Agentic browsers should assist, but critical decisions benefit from human oversight. For example: limit the amount of money it can spend without your explicit permission or always let it ask you to authorize payments.
  • Report suspicious behavior: If an agentic browser acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.


Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.