LLMjacking attacks target DeepSeek, racking up huge cloud costs. Sysdig reveals that a black market for LLM access has emerged, with OpenAI Reverse Proxy (ORP) operators providing unauthorized access to stolen accounts. Find out how attackers steal access and monetize LLM usage.
The Sysdig Threat Research Team (TRT) has observed the rapid evolution of LLMjacking attacks since their initial discovery in May 2024, including their expansion to new Large Language Models (LLMs) like DeepSeek. Just days after DeepSeek-V3’s release, it was reportedly integrated into ORP instances, demonstrating the speed with which attackers adapt.
Exploitation of DeepSeek API Keys and Monetization of LLMjacking
Similarly, researchers report that DeepSeek-R1 was incorporated into these platforms shortly after its release. Multiple ORPs have been found populated with DeepSeek API keys, indicating active exploitation of this new model.
LLMjacking is driven by the high cost of cloud-based LLM usage: attackers compromise accounts in order to use these expensive services without paying. According to TRT’s updated findings, LLMjacking has become a well-established attack vector, with online communities actively sharing tools and techniques.
They observed a rise in the monetization of LLMjacking, with LLM access being sold through ORPs (OpenAI Reverse Proxies); one instance reportedly sold access for $30 per month. Operators often underestimate the costs associated with LLM usage: in one instance researchers observed, an ORP with an uptime of just 4.5 days generated nearly $50,000 in costs, with Claude 3 Opus being the most expensive model used.
The Scale of Resource Exploitation
Total token usage across observed ORPs exceeded two billion (a token is the basic unit an LLM processes: a word, word fragment, or piece of punctuation), highlighting the scale of resource exploitation. The victims are legitimate account holders whose credentials have been stolen.
ORP usage remains a popular method for LLMjacking. ORP servers, acting as reverse proxies for various LLMs, can be exposed through Nginx or dynamic domains like TryCloudflare, effectively masking the attacker’s source. These proxies often contain numerous stolen API keys from different providers like OpenAI, Google AI, and Mistral AI, enabling attackers to provide LLM access to others.
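The credential-substitution step that makes an ORP work can be sketched conceptually. The Python snippet below is an illustrative sketch, not Sysdig's or any attacker's actual code: it rewrites an incoming request's headers, swapping in a stolen provider key and stripping client-identifying headers before the request is forwarded upstream. The key pool and header names are assumptions for illustration only.

```python
"""Illustrative sketch of ORP-style header rewriting (hypothetical code)."""
import random

# Hypothetical pool of stolen provider keys the proxy rotates through.
STOLEN_KEYS = ["sk-example-key-1", "sk-example-key-2"]

def rewrite_headers(client_headers: dict) -> dict:
    """Replace the client's token with a stolen provider key and drop
    headers that would reveal the real client to the LLM provider."""
    upstream = {
        k: v for k, v in client_headers.items()
        if k.lower() not in {"authorization", "x-forwarded-for", "x-real-ip"}
    }
    # Inject a stolen key so the provider bills the victim's account.
    upstream["Authorization"] = f"Bearer {random.choice(STOLEN_KEYS)}"
    return upstream
```

This is why the victim, not the proxy operator, ends up with the bill: the provider only ever sees the stolen key.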
“Sysdig TRT found over a dozen proxy servers using stolen credentials across many different services, including OpenAI, AWS, and Azure. The high cost of LLMs is the reason cybercriminals (like the one in the example below) choose to steal credentials rather than pay for LLM services,” researchers noted in the blog post.
Online Communities Exploiting LLMjacking
Online communities on 4chan and Discord facilitate the sharing of LLM access through ORPs, while Rentry.co is used for sharing tools and services. Researchers discovered numerous ORP proxies, some with custom domains and others using TryCloudflare tunnels, by examining LLM prompt logs in honeypot environments and tracing them back to attacker-controlled servers.
Credential theft is a significant aspect of LLMjacking: attackers target vulnerable services and use verification scripts to identify working credentials for LLM services. Public repositories are another source of exposed credentials. Customized ORPs, often modified for privacy and stealth, are then used to access the stolen accounts.
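Defenders can hunt for credentials exposed in repositories with simple pattern matching. The sketch below uses simplified versions of two well-known public key formats (AWS access key IDs begin with `AKIA`; many OpenAI-style keys begin with `sk-`); real secret scanners apply far more robust rules and entropy checks, so treat these patterns as illustrative assumptions.

```python
"""Illustrative sketch of scanning text for exposed API credentials."""
import re

# Simplified patterns for common key formats (not exhaustive).
KEY_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
}

def find_exposed_keys(text: str) -> list[tuple[str, str]]:
    """Return (key_type, matched_string) pairs for likely exposed keys."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits
```

Running such a scan over commits before they are pushed is one way to keep keys out of public repositories in the first place.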
To combat LLMjacking, securing access keys and implementing strong identity management are crucial. Best practices include avoiding hardcoding credentials, using temporary credentials, regularly rotating access keys, and monitoring for exposed credentials and suspicious account behaviour.