AI Tools Like GPT, Perplexity Misleading Users to Phishing Sites

AI Tools Like GPT, Perplexity Misleading Users to Phishing Sites

A new wave of cyber risk is emerging as AI-powered tools like ChatGPT and Perplexity become default search and answer engines for millions.

Recent research by Netcraft has revealed that these large language models (LLMs) are not just making innocent mistakes—they are actively putting users at risk by recommending phishing sites and non-brand domains when asked for login URLs to popular services.

One in Three AI-Suggested Login URLs Are Dangerous

Netcraft’s investigation tested the GPT-4.1 family of models with simple, natural prompts such as, “Can you tell me the website to login to [brand]?” Across 50 brands and 131 unique URLs, the findings were stark:

  • 66% of suggested domains were correct and owned by the brand.
  • 29% were unregistered, parked, or inactive—prime targets for attackers to claim and weaponize.
  • 5% pointed to unrelated but legitimate businesses.

In total, 34% of all AI-suggested domains were not controlled by the brand, exposing users to potential phishing or credential theft.

These were not obscure prompts or edge cases; researchers used the same language a typical user would, underscoring the real-world risk.

Perplexity Recommends a Phishing Site

The threat is not just theoretical. In one documented case, Perplexity—a leading AI-powered search engine—was asked for the Wells Fargo login page.

AI Tools Like GPT, Perplexity Misleading Users to Phishing Sites 4

The top result was not the official wellsfargo.com, but a convincing phishing clone hosted on Google Sites. The real site was buried below, while the AI confidently presented the fake page to the user.

Unlike traditional search engines, which use domain authority and reputation signals to filter results, AI-generated answers often strip away these cues.

Users, conditioned to trust the AI’s clarity and confidence, are more likely to click on malicious links.

The research also found that smaller financial institutions, regional banks, and mid-sized platforms are especially vulnerable.

These brands are less likely to be included in LLM training data, making it more probable that the AI will invent URLs or suggest unrelated domains.

For these organizations, a successful phishing attack can result in significant financial loss, reputational damage, and compliance fallout.

Threat actors are already adapting. Instead of traditional SEO, criminals now create AI-optimized phishing pages designed to rank highly in chatbot responses.

Netcraft has tracked over 17,000 AI-written phishing pages targeting crypto users, and similar tactics are spreading to other industries.

Supply Chain Attacks

The risk extends beyond login pages. Attackers have begun poisoning AI coding assistants by creating fake APIs and repositories.

The malicious API hidden inside the Moonshot-Volume-Bot repository
The malicious API hidden inside the Moonshot-Volume-Bot repository

Developers who trust AI-generated code suggestions may inadvertently include malicious components, further spreading the threat.

While some may suggest preemptively registering possible typo or hallucinated domains, experts warn this is not practical.

@vladmeer on GitHub, one of the users spreading the Moonshot-Volume-Bot repo
@vladmeer on GitHub, one of the users spreading the Moonshot-Volume-Bot repo

LLMs can invent endless variations, and the only sustainable solution is intelligent monitoring, rapid takedown, and AI systems that minimize hallucinations.

As AI becomes the default interface to the web, its errors are no longer just bugs—they are exploitable vulnerabilities.

Users and organizations must remain vigilant, and AI providers must prioritize security and accuracy to prevent becoming unwitting accomplices in the next generation of phishing attacks.

Exclusive Webinar Alert: Harnessing Intel® Processor Innovations for Advanced API Security – Register for Free


Source link