New research reveals that threat actors are exploiting exposed cloud credentials to hijack enterprise AI systems almost as soon as the credentials leak: recent incidents show attackers compromising large language model (LLM) infrastructure in under 19 minutes.
Dubbed LLMjacking, this attack vector targets non-human identities (NHIs) – API keys, service accounts, and machine credentials – to bypass traditional security controls and monetize stolen generative AI access.
The LLMjacking Kill Chain
Security firm Entro Labs recently exposed functional AWS keys across GitHub, Pastebin, and Reddit to study attacker behavior.
Their research uncovered a systematic four-phase attack pattern:
Credential Harvesting: Automated bots scan public repositories and forums using Python scripts to detect valid credentials, with 44% of NHIs exposed via code repositories and collaboration platforms.
Rapid Validation: Attackers performed initial API calls like GetCostAndUsage within 9-17 minutes of exposure to assess account value, avoiding predictable calls like GetCallerIdentity to evade detection.
Model Enumeration: Intruders executed GetFoundationModelAvailability requests via AWS Bedrock to catalog accessible LLMs – including Anthropic’s Claude and Amazon Titan – mapping available attack surfaces.
Exploitation: Automated InvokeModel attempts targeted compromised endpoints, with researchers observing 1,200+ unauthorized inference attempts per hour across experimental keys (the full four-phase flow is sketched in code below).
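For defenders who want to reproduce this behavior against their own canary credentials, the sketch below approximates the four phases with boto3. It is an illustration under stated assumptions, not Entro's tooling: the key-ID regex, region, dates, and Titan model ID are placeholders, and because GetFoundationModelAvailability is not exposed in the public SDK, ListFoundationModels stands in for the enumeration step.

```python
# Minimal boto3 sketch of the four phases, meant for testing your own canary
# credentials, not anyone else's account. Region, dates, and the Titan model ID
# are illustrative assumptions; Bedrock model access must already be enabled.
import json
import re

import boto3

# Phase 1 - the access-key-ID shape that scanners grep for in repos and paste sites
KEY_ID_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")  # long-term / temporary prefixes
print(KEY_ID_RE.findall("leaked: AKIAIOSFODNN7EXAMPLE"))  # AWS's documentation example key

session = boto3.Session()  # assumes the canary credentials are already configured

# Phase 2 - "quiet" validation: a billing read instead of sts:GetCallerIdentity
ce = session.client("ce", region_name="us-east-1")
spend = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)["ResultsByTime"][0]["Total"]["UnblendedCost"]
print("Visible monthly spend:", spend["Amount"], spend["Unit"])

# Phase 3 - enumerate the foundation models the credentials can see
# (GetFoundationModelAvailability is console-side; ListFoundationModels stands in)
bedrock = session.client("bedrock", region_name="us-east-1")
models = [m["modelId"] for m in bedrock.list_foundation_models()["modelSummaries"]]
print(f"{len(models)} Bedrock models visible, e.g. {models[:3]}")

# Phase 4 - attempt a single inference call (request body schema is model-specific;
# this one is Amazon Titan Text's)
runtime = session.client("bedrock-runtime", region_name="us-east-1")
resp = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "ping"}),
)
print(json.loads(resp["body"].read())["results"][0]["outputText"])
```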
The Storm-2139 cybercrime group recently weaponized this methodology against Microsoft Azure AI customers, exfiltrating API keys and using the stolen access to generate illicit content distributed on the dark web. Forensic logs show the attackers:
- Leveraged Python’s requests library for credential validation (a validation sketch follows this list)
- Used aws s3 ls commands to identify AI/ML buckets
- Attempted bedrock:InvokeModel with crafted prompts to bypass content filters
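As an illustration of the first bullet, a bare-bones validation probe with Python's requests against an Azure OpenAI endpoint might look like the sketch below. The resource name, deployment name, and api-version are placeholders rather than values from the incident, and the status-code interpretation reflects typical Azure OpenAI behavior rather than confirmed attacker logic.

```python
# Hedged sketch of a requests-based key validation probe against an Azure OpenAI
# endpoint. The resource name, deployment name, and api-version are placeholders,
# not values from the incident.
import requests

RESOURCE = "victim-resource"    # hypothetical Azure OpenAI resource name
DEPLOYMENT = "gpt-4o"           # hypothetical deployment name
API_KEY = "<leaked-api-key>"

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version=2023-05-15"
)
resp = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "ping"}], "max_tokens": 1},
    timeout=10,
)
# A 401 means the key is dead; a 200 (or even a 404 for an unknown deployment)
# tells the caller the key authenticates and the resource is worth enumerating.
print(resp.status_code, resp.text[:200])
```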
Entro’s simulated breach revealed attackers combining automated scripts with manual reconnaissance – 63% of initial accesses used Python SDKs, while 37% carried Firefox user agents, indicating interactive exploration through the AWS console.
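That SDK-versus-browser split is visible in CloudTrail, where every event records a userAgent string. The sketch below is one way a defender might tally programmatic versus interactive access from locally exported CloudTrail logs; the directory path and the matching rules are assumptions, since real user-agent values vary by SDK version and browser.

```python
# One way to tally programmatic vs. interactive access from locally exported
# CloudTrail logs. The "cloudtrail-logs" directory and the matching rules are
# assumptions; real userAgent values vary.
import json
from collections import Counter
from pathlib import Path

counts = Counter()
for path in Path("cloudtrail-logs").glob("*.json"):   # hypothetical local export
    for record in json.loads(path.read_text()).get("Records", []):
        agent = record.get("userAgent", "")
        if any(tag in agent for tag in ("Boto3", "Botocore", "aws-sdk", "aws-cli")):
            counts["programmatic"] += 1
        elif "Mozilla" in agent:   # console sessions surface browser user agents
            counts["interactive"] += 1
        else:
            counts["other"] += 1

print(counts)
```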
Left uncontained, LLMjacking poses severe financial, data, and reputational risks:
- Cost Exploitation: A single compromised NHI with Bedrock access could incur $46,000/day in unauthorized inference charges.
- Data Exfiltration: Attackers exfiltrated model configurations and training data metadata during 22% of observed incidents.
- Reputational Damage: Microsoft’s Q1 2025 breach saw threat actors generate 14,000+ deepfake images using stolen Azure OpenAI keys.
Mitigation Strategies
- Detect & monitor NHIs in real-time
- Implement automated secret rotation (a minimal rotation sketch follows this list)
- Enforce least privilege
- Monitor unusual API activity
- Educate developers on secure NHI management
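As a concrete example of the rotation item above, the sketch below walks IAM users' access keys (one common class of NHI) and swaps out any active key older than an assumed 30-day policy. It is a minimal sketch: a production workflow would also deliver the new secret to its consumer via a secret store before disabling the old key, and would handle IAM's two-keys-per-user limit and API pagination.

```python
# Minimal access-key rotation sketch for IAM users, one common class of NHI.
# Assumptions: a 30-day rotation policy, at most one active key per user, and
# a separate mechanism that delivers the new secret to whatever consumes it.
from datetime import datetime, timedelta, timezone

import boto3

MAX_KEY_AGE = timedelta(days=30)   # assumed rotation policy
iam = boto3.client("iam")

for user in iam.list_users()["Users"]:          # pagination omitted for brevity
    user_name = user["UserName"]
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        age = datetime.now(timezone.utc) - key["CreateDate"]
        if key["Status"] == "Active" and age > MAX_KEY_AGE:
            # Mint the replacement first (its SecretAccessKey goes to your secret
            # store), then disable - not delete - the stale key so the change can
            # be rolled back if a workload breaks.
            new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
            iam.update_access_key(
                UserName=user_name,
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )
            print(f"Rotated {key['AccessKeyId']} -> {new_key['AccessKeyId']} for {user_name}")
```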
With attackers operationalizing leaks in under 20 minutes, real-time secret scanning and automated rotation are no longer optional safeguards but critical survival mechanisms in the LLM era.