The world of artificial intelligence (AI) is rapidly evolving, offering incredible potential for innovation and progress. However, with great power comes great risk, and a recent discovery by the Sysdig Threat Research Team (TRT) exposes one such risk.
Reportedly, researchers have discovered a novel cyberattack scheme, dubbed LLMjacking, in which threat actors gain access to a cloud environment and attempt to access local Large Language Models (LLMs) hosted by cloud providers.
In the blog post, Sysdig security researcher Alessandro Brucato explained that cybercriminals are targeting systems running outdated software and using stolen cloud credentials, most likely obtained from compromised cloud accounts, to infiltrate systems running LLMs and abuse their capabilities.
According to the researchers, before the release of their research, attackers had already accessed LLMs across ten different AI services, including Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI.
In one case, attackers targeted a local Claude (v2/v3) LLM from Anthropic. They breached a system running a vulnerable version of the Laravel Framework by exploiting CVE-2021-3129, obtained Amazon Web Services (AWS) credentials, and then used an open-source Python script to check and access the compromised accounts.
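Checker scripts of this kind typically probe the cloud AI API with a deliberately malformed request: the error that comes back reveals whether the stolen key can invoke a model, without paying for a real completion. The sketch below illustrates that idea for AWS Bedrock-style error codes; the classification is an assumption for illustration and may not match the exact logic of the script Sysdig observed.

```python
# Hypothetical classification of error codes returned by a deliberately
# invalid model-invocation call, showing how a key checker can infer
# access rights from stolen credentials without incurring real usage cost.
def classify_bedrock_error(error_code: str) -> str:
    """Map an API error code to what it implies about the credentials."""
    if error_code == "AccessDeniedException":
        return "no-access"    # credentials cannot invoke this model
    if error_code == "ValidationException":
        return "has-access"   # malformed body was rejected, but the key works
    if error_code == "ThrottlingException":
        return "has-access"   # quota currently exhausted, yet access exists
    return "unknown"

print(classify_bedrock_error("ValidationException"))  # → has-access
```

A "has-access" result tells the attacker the key is monetizable even though no legitimate query was ever run.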
Researchers found that attackers are tampering with logging settings in compromised systems, indicating a deliberate attempt to evade detection while using stolen LLM access, highlighting the growing sophistication of cybercriminals.
“If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim,” Brucato noted.
But what’s the motive?
Unlike traditional hacking focused on stealing data or disrupting operations, LLMjacking seems driven by profit. However, there's a twist: researchers believe the attackers aren't after the data stored within the LLMs themselves. Instead, they're aiming to sell access to the AI models' capabilities to other criminals.
That's because no legitimate LLM queries were run during the verification phase; the attackers only determined the credentials' capabilities and quotas. The key checker also integrates with oai-reverse-proxy, a reverse proxy server for LLM APIs, suggesting that the attackers are providing access to the compromised accounts without exposing the underlying credentials.
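The credential-hiding idea behind such a proxy is simple: paying "customers" authenticate to the proxy with a throwaway token, and the proxy swaps in the real stolen provider key server-side, so buyers never see it. A minimal sketch, with all names and tokens invented for illustration:

```python
# Illustrative sketch of how an LLM reverse proxy can resell access while
# keeping the stolen provider credential hidden from its "customers".
STOLEN_PROVIDER_KEY = "sk-REDACTED-stolen-key"   # held only on the proxy
PROXY_TOKENS = {"buyer-token-123"}               # tokens sold to buyers

def rewrite_headers(headers: dict) -> dict:
    """Replace the client's proxy token with the real provider key
    before forwarding the request upstream."""
    token = headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in PROXY_TOKENS:
        raise PermissionError("unknown proxy token")
    forwarded = dict(headers)
    forwarded["Authorization"] = f"Bearer {STOLEN_PROVIDER_KEY}"
    return forwarded

out = rewrite_headers({"Authorization": "Bearer buyer-token-123"})
print(out["Authorization"])  # the real key appears only proxy-side
```

This is why revoking the exposed key, not just blocking one abusive client, is the only effective remediation.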
This discovery highlights the need for a multi-pronged approach to securing AI. Sysdig recommends implementing robust vulnerability and secrets management practices, along with Cloud Security Posture Management (CSPM) or Cloud Infrastructure Entitlement Management (CIEM) solutions, to minimize permissions and prevent unauthorized access.
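On the secrets-management front, one basic hygiene check is scanning code and config for credential-shaped strings before they ever reach a repository. The sketch below looks for AWS access key IDs, which follow the documented pattern of "AKIA" plus 16 uppercase alphanumerics; it is a minimal example, not a substitute for a full secrets scanner.

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 uppercase
# alphanumeric characters; flag anything matching that shape.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_key_ids(text: str) -> list[str]:
    """Return all strings in `text` shaped like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

# AWS's own documentation example key ID, used here as test input.
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_key_ids(sample))  # → ['AKIAIOSFODNN7EXAMPLE']
```

Pairing checks like this with short-lived credentials and least-privilege IAM policies reduces both the chance of a leak and the blast radius when one happens.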
RELATED TOPICS
- FraudGPT Chatbot Emerges for AI-Driven Cyber Crime
- AI Generated Fake Obituary Websites Target Grieving Users
- AI-Powered Scams Fuel Global Cybercrime Surge: INTERPOL
- WormGPT – Malicious ChatGPT Alternative Empowering Crooks
- Researchers Test Zero-click Worms that Exploit Generative AI Apps