Researchers detail “Claudy Day” flaws in Claude AI that could enable data theft using fake Google Ads, hidden prompts, and built-in features.
Cybersecurity researchers have identified a new way that could be exploited by hackers to bypass the safety systems of the popular AI assistant, Claude AI. The discovery, named Claudy Day by the team at Oasis Security, reveals how three separate cracks in the platform’s security can be chained together to quietly steal a user’s private information.
The Hidden Message in the Link
The first step of the attack involves the way we start new chats. For your information, Claude allows users to click a link that automatically fills the chat box with a greeting. However, researchers noted that they could hide secret orders inside these links using HTML tags, the basic code used to build websites.
When a user clicks one of these rigged links, they might only see a simple word like Summarize in the text box. But the AI actually reads hidden instructions tucked inside the code that the person cannot see, which is a technique known as prompt injection.
It basically tricks the AI into following a hacker’s command instead of the user’s, which could be anything, like telling the AI to scan old chats for sensitive details about your health or finances. According to researchers, this allows an attacker to “embed hidden instructions in a pre-filled chat URL that the user cannot see but that the agent fully processes.”
A Search Result You Can Trust?
You might wonder how an attacker gets someone to click a weird link in the first place. As per Oasis Security’s technical report, the hackers don’t need to send dodgy emails, as they used a flaw on the claude.com website to create Google Search ads that look 100% official.
We generally trust the top results on Google; by using an open redirect vulnerability, the attackers made links that technically started with the trusted Claude web address, so Google approved the ads. This created a trap with “no phishing emails, no suspicious links, just a normal-looking search result,” allowing targeted victim delivery. The victim remains unsuspicious because the URL belongs to a reputable company.
The Quiet Escape
The final piece of the puzzle is how the data leaves the building, a process called data exfiltration. Even though Claude has a digital sandbox, researchers found a loophole in the system’s official beta feature- Anthropic Files API, where they could force the AI to upload stolen summaries to an attacker-controlled account.
This system allows for huge amounts of data to be moved, up to 500 MB per file and 100 GB per organisation, and researchers noted that this creates “a complete attack pipeline, from targeted victim delivery to silent data exfiltration.”
Oasis Security shared these findings with Anthropic through a responsible disclosure program. While the prompt injection issue is now fixed, the team suggested that users should carefully monitor approval before an AI agent uses powerful tools for the first time, ensuring they remain in control.

Experts’ Comments:
Sharing his thoughts on the matter with Hackread.com, Andrew Bolster, Senior R&D Manager at Black Duck, noted that these findings support the sentiment that while assistants like Claude are a boon, they represent a risk called the “Lethal Trifecta.”
“That’s where agents are exposed to untrusted content (in this case, the URL parameter injection), access to private data, and the ability to externally communicate,” Bolster said. He added that security leaders must prevent AI assistants from being “socially engineered into giving out sensitive or protected information or access.”
Also sharing exclusive comments with Hackread.com, Saumitra Das, Vice President of Engineering at Qualys, stated, “The Claudy Day attack chain highlights a new reality: the prompt itself is now an attack surface.”
He noted that because the attack uses legitimate endpoints and redirects, it looks like normal traffic. “AI agents need to be treated like privileged service accounts, with strict controls over what they can access, what tools they can use, and where data can be sent,” Das concluded, warning that users are currently “dangerously skipping permission checks” to avoid interrupting the AI.

