Microsoft Flags AI Phishing Attack Hiding in SVG Files


Forget the old, error-filled phishing emails you could spot at a glance. Cybercriminals have upgraded their methods, using artificial intelligence (AI) to create a new type of phishing scam that can be hard to detect.

Microsoft Threat Intelligence recently detected and blocked a credential phishing campaign on August 18. Its analysis indicated that the attackers likely used Large Language Models (LLMs), the technology behind common AI chatbots, to write complex code that dodges traditional security measures. This limited, yet significant, campaign primarily targeted US-based organizations.

How The Attack Hides In Plain Sight

The attack began with a fraudulent file-sharing email sent from an already compromised small-business email account. The message looked legitimate, but the attached file, “23mb – PDF- 6 pages.svg”, was the real trick.

While it looked like a PDF, the .svg extension means it was actually a Scalable Vector Graphics (SVG) file. Attackers may favor SVG files for such scams because the format can embed dynamic, interactive code that appears harmless to users and to many security tools.
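To see why that matters, consider how a scanner might look for active content inside an SVG. The sketch below is illustrative only: the sample markup is hypothetical, and real security tools inspect far more than this, but SVG's `script` elements and `on*` event attributes (both part of the SVG specification) are exactly the kind of executable content a static image format like PNG cannot carry.

```python
# Illustrative sketch: flag SVG attachments that contain active content.
# The sample markup is hypothetical; real scanners are far more thorough.
import xml.etree.ElementTree as ET

SAMPLE_SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect width="100" height="40" fill="steelblue"/>
  <script>window.location = 'https://phish.example/login';</script>
</svg>"""

def has_active_content(svg_text):
    """Return True if the SVG contains script elements or on* event handlers."""
    root = ET.fromstring(svg_text)
    for el in root.iter():
        tag = el.tag.split('}')[-1]  # strip the XML namespace prefix
        if tag == "script":
            return True
        if any(attr.lower().startswith("on") for attr in el.attrib):
            return True
    return False

print(has_active_content(SAMPLE_SVG))  # True for this sample
```

A check like this is only a first filter; as the campaign below shows, attackers can hide the payload so that even the embedded code looks benign.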

Phishing email sample (Source: Microsoft)

The malicious code inside the file was uniquely disguised. Instead of using standard scrambling techniques (like encryption or random character substitution), the SVG file was structured to look like a legitimate business analytics dashboard, complete with fake elements styled as chart bars.

The harmful payload was hidden inside this decoy by encoding it as a long sequence of ordinary business terms like “revenue,” “operations,” and “risk,” making the file appear to contain standard data. In reality, the code redirected users to a fake sign-in page built to steal their credentials.

Sequence of business-related terms (Fig. 1) and its conversion into malicious code (Fig. 2) – (Source: Microsoft)
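The general idea of this encoding can be sketched in a few lines. Microsoft has not published the attackers' actual scheme, so the codebook below is entirely hypothetical: each benign-looking term stands for one character, and a long run of dashboard labels quietly spells out the start of a redirect URL.

```python
# Hypothetical codebook: each business term stands for one character.
# This is a sketch of the concept only, not the campaign's real encoding.
CODEBOOK = {
    "revenue": "h", "operations": "t", "risk": "p",
    "shares": "s", "growth": ":", "margin": "/",
}

def decode(terms):
    """Rebuild the hidden string by substituting each term for its character."""
    return "".join(CODEBOOK[t] for t in terms)

# A benign-looking sequence of dashboard labels...
terms = ["revenue", "operations", "operations", "risk", "shares",
         "growth", "margin", "margin"]

print(decode(terms))  # -> https://  (the start of a redirect URL)
```

Because each individual term is an ordinary word, keyword-based filters see nothing suspicious; only the sequence as a whole carries meaning.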

The AI vs. AI Defence

To work out how the attackers obfuscated the code, Microsoft used its own AI analysis tool, Security Copilot. The tool assessed that the code was “not something a human would typically write from scratch due to its complexity, verbosity, and lack of practical utility,” researchers noted in the blog post. In other words, the over-engineered, systematic code structure was most likely the product of an AI model, not a human programmer.

While the rise of AI-assisted attacks is worrying, this case proves they are not unbeatable. The campaign was successfully blocked by Microsoft Defender for Office 365’s own AI protection systems.

These systems look for behavioural red flags that AI cannot easily hide, such as the use of self-addressed emails with recipients hidden in the BCC field, the suspicious combination of file type and name, and the eventual redirect to a known malicious website.
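Two of those red flags can be expressed as simple heuristics. The sketch below is illustrative, not Defender's actual logic: it flags an attachment whose name advertises one format but carries another extension, and mail where the sender is also the only visible recipient (a common sign that the real targets are hidden in BCC).

```python
# Illustrative heuristics for the red flags described above.
# Function names and logic are hypothetical, not Defender's real checks.

def attachment_mismatch(filename):
    """Flag attachments whose name advertises a PDF but whose
    real extension is .svg (e.g. '23mb - PDF- 6 pages.svg')."""
    name = filename.lower()
    claims_pdf = "pdf" in name and not name.endswith(".pdf")
    return claims_pdf and name.endswith(".svg")

def self_addressed(sender, visible_recipients):
    """Flag mail where the sender also appears as the visible recipient,
    suggesting the actual targets are hidden in the BCC field."""
    return sender.lower() in (r.lower() for r in visible_recipients)

print(attachment_mismatch("23mb - PDF- 6 pages.svg"))          # True
print(self_addressed("ceo@firm.example", ["ceo@firm.example"]))  # True
```

Signals like these work because they target the attacker's delivery behaviour rather than the payload itself, which AI-generated obfuscation cannot easily disguise.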

The lesson here is that as attackers increasingly rely on AI to make their scams sneakier and more effective, security teams must constantly adapt and find new ways to stay ahead.

Expert Insights

Following Microsoft’s findings, several security experts shared their perspectives exclusively with Hackread.com. Anders Askasen, VP of Product Marketing at Radiant Logic, stated that AI-driven phishing shows that “the frontline isn’t the payload, it’s the person behind the login.”

He added that to counter this “AI-scaled deception,” organizations must focus on identity observability, unifying identity data to “see when an account behaves out of character.”

Similarly, Andrew Obadiaru, CISO at Cobalt, noted that AI is fundamentally changing the game by creating code that is “camouflage that blends seamlessly into enterprise workflows.”

He concluded that security teams must shift their focus to behavioral detection, red-teaming against AI-assisted tactics, and shortening remediation cycles.





