The PromptFix attack tricks AI browsers with fake CAPTCHAs, leading them to phishing sites and fake stores where they auto-complete purchases.
Cybersecurity experts at Guardio Labs have revealed how artificial intelligence (AI) designed to assist users online can be tricked into falling for scams, calling it a “new era of digital threats they call Scamlexity.”
The findings, shared with Hackread.com, detail a unique attack method named PromptFix. This technique uses a fake CAPTCHA, a security check meant to prove a user isn’t a robot, to hide malicious instructions. While a human might easily spot the fake check and ignore it, the AI sees it as a legitimate command to follow.
The report highlights that these AI helpers, called agentic AIs, can be deceived into giving away sensitive information or even making purchases without the user’s knowledge. Researchers demonstrated how these AI browsers, like Perplexity‘s Comet, could be fooled by scams that have been around for years through different tests.
In one test, they created a fake online store that looked just like Walmart. When the AI was asked to buy an item, it didn’t hesitate. It checked the fake website and, without asking for permission, automatically entered saved payment information to complete the purchase.
The researchers emphasize that the AI was so focused on completing its task that it ignored obvious red flags obvious red flags a human would have noticed, such as a suspicious website address and other missing security signals, which a human would have noticed.

In another scenario, the AI browser was given a phishing email that looked like it was from a bank. The AI confidently clicked on the malicious link and, without any warnings, took the user to a fake login page, asking them to enter their personal information. The researchers call this a “perfect trust chain gone rogue,” because the user relies on the AI, never sees the warning signs, and is led directly into a trap.

The report warns that in the future, scammers won’t need to trick millions of people. Instead, they can simply break one AI model and use that same trick to compromise millions of users at the same time. It is highly crucial to make these AI systems safe and secure from the very beginning, rather than adding them later, because the consequences could be detrimental.
“The trust we place in Agentic AI is going to be absolute, and when that trust is misplaced, the cost is immediate,” researchers conclude in their report.
Therefore, AI is going to handle our emails and finances; it needs the same level of protection we use for ourselves. Otherwise, our trusted AI could become an invisible accomplice for hackers.
“As adversaries double down on the use and optimization of autonomous agents for attacks, human defenders will become increasingly reliant on and trusting of autonomous agents for defense,“ said Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace.
“Specific types of AI can perform thousands of calculations in real time to detect suspicious behavior and perform the micro decision-making necessary to respond to and contain malicious behavior in seconds. Transparency and explainability in the AI outcomes are critical to foster a productive human-AI partnership,“ she added.