Threat actors leveraging artificial intelligence tools have compressed the cloud attack lifecycle from hours to mere minutes, according to new findings from the Sysdig Threat Research Team (TRT).
In a November 2025 incident, adversaries escalated from initial credential theft to full administrative privileges in less than 10 minutes by using large language models (LLMs) to automate reconnaissance, generate malicious code, and execute real-time attack decisions.
The operation targeted an Amazon Web Services (AWS) environment, demonstrating how AI assistance has fundamentally transformed the speed and sophistication of cloud-based attacks.
The compromise began when attackers discovered valid AWS credentials stored in publicly accessible Simple Storage Service (S3) buckets containing Retrieval-Augmented Generation (RAG) data for AI models.
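A quick way for defenders to audit for this exposure is to check each bucket's policy status. The boto3 sketch below is illustrative only: it covers policy-based public access, while ACLs and account-level Block Public Access settings require separate checks.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_policy_public(bucket: str) -> bool:
    """Return True if the bucket policy renders the bucket public."""
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket)
        return status["PolicyStatus"]["IsPublic"]
    except ClientError as err:
        # A bucket with no policy at all cannot be policy-public.
        if err.response["Error"]["Code"] == "NoSuchBucketPolicy":
            return False
        raise

for bucket in (b["Name"] for b in s3.list_buckets()["Buckets"]):
    if bucket_is_policy_public(bucket):
        print(f"PUBLIC: {bucket}")
```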
The compromised credentials belonged to an Identity and Access Management (IAM) user with read and write permissions on AWS Lambda and restricted access to Amazon Bedrock.
Because the compromised IAM user also held the AWS-managed ReadOnlyAccess policy, the attackers were able to conduct extensive reconnaissance across multiple AWS services, including Secrets Manager, Systems Manager, EC2, ECS, RDS, and CloudWatch.
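This kind of burst enumeration leaves a distinctive CloudTrail footprint. A minimal detection sketch, assuming CloudTrail is enabled and using an arbitrary alert threshold that would need tuning per environment:

```python
import boto3
from collections import Counter
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(minutes=15)

# Count read-only API calls per principal in a short window; a sudden burst
# of Describe*/List*/Get* calls across many services is a reconnaissance signal.
calls_per_principal = Counter()
for page in cloudtrail.get_paginator("lookup_events").paginate(
        StartTime=start,
        LookupAttributes=[{"AttributeKey": "ReadOnly", "AttributeValue": "true"}]):
    for event in page["Events"]:
        calls_per_principal[event.get("Username", "unknown")] += 1

for principal, count in calls_per_principal.most_common():
    if count > 100:  # illustrative threshold, not a recommendation
        print(f"possible enumeration: {principal} made {count} read-only calls")
```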
The threat actors then exploited UpdateFunctionCode and UpdateFunctionConfiguration permissions on Lambda to inject malicious code into an existing function named EC2-init. After three iterative attempts, they successfully compromised an admin account named “frick” by creating new access keys.
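Defenders can enumerate in advance which principals hold this escalation path. One way, sketched below with a placeholder principal ARN, is IAM's policy simulator:

```python
import boto3

iam = boto3.client("iam")

# Placeholder ARN; repeat the check for each user or role you want to audit.
PRINCIPAL_ARN = "arn:aws:iam::111122223333:user/example-user"

response = iam.simulate_principal_policy(
    PolicySourceArn=PRINCIPAL_ARN,
    ActionNames=[
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "iam:PassRole",
    ],
)
# Any "allowed" decision marks a principal that could repeat this attack.
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])
```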
Multiple indicators throughout the operation suggest the threat actor leveraged large language models for code generation. The Lambda script featured comprehensive exception handling, a 30-second timeout modification, and Serbian-language comments (“Kreiraj admin access key”, meaning “Create admin access key”) that hint at the attacker’s origin.
Researchers identified several AI hallucinations, including attempts to assume roles in fabricated AWS account IDs with sequential patterns (123456789012 and 210987654321), references to a non-existent GitHub repository, and session names such as “claude-session” that reflect AI-assisted methodology.
The attackers demonstrated sophisticated persistence by distributing operations across 19 distinct AWS principals, including six different IAM roles across 14 sessions and five compromised IAM users. They created a backdoor user named “backdoor-admin” with the AdministratorAccess policy attached.
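A periodic inventory of who holds AdministratorAccess makes a user like “backdoor-admin” stand out immediately. A short sketch, intended to be diffed against an approved allowlist:

```python
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

# List every user, group, and role with the managed admin policy attached.
for page in iam.get_paginator("list_entities_for_policy").paginate(
        PolicyArn=ADMIN_POLICY_ARN):
    for user in page["PolicyUsers"]:
        print("user:", user["UserName"])
    for group in page["PolicyGroups"]:
        print("group:", group["GroupName"])
    for role in page["PolicyRoles"]:
        print("role:", role["RoleName"])
```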
After confirming that model invocation logging was disabled, the attackers pivoted to LLMjacking operations targeting Amazon Bedrock. They invoked multiple foundation models, including Claude Sonnet 4, Claude Opus 4, Claude 3.5 Sonnet, DeepSeek R1, Llama 4 Scout, Amazon Nova Premier, and Amazon Titan Image Generator.
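The attackers’ first move, checking whether invocation logging was off, doubles as the defender’s checklist item. A sketch that verifies the setting and, if absent, enables it; the bucket name is a placeholder, and the bucket must already grant Bedrock permission to deliver logs to it:

```python
import boto3

bedrock = boto3.client("bedrock")

# The attackers verified this was disabled before LLMjacking; check it first.
current = bedrock.get_model_invocation_logging_configuration()
if "loggingConfig" not in current:
    bedrock.put_model_invocation_logging_configuration(
        loggingConfig={
            # Placeholder bucket; it must already allow Bedrock log delivery.
            "s3Config": {"bucketName": "example-bedrock-logs",
                         "keyPrefix": "invocations/"},
            "textDataDeliveryEnabled": True,
            "imageDataDeliveryEnabled": True,
            "embeddingDataDeliveryEnabled": True,
        }
    )
    print("model invocation logging enabled")
```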
Researchers discovered a Terraform module designed to deploy a backdoor Lambda function that would generate Bedrock credentials and expose them through a publicly accessible Lambda URL requiring no authentication.
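Function URLs with AuthType NONE are exactly the exposure this module would have created, and they are enumerable. A minimal audit sketch:

```python
import boto3

lam = boto3.client("lambda")

# Flag any Lambda function URL reachable without authentication.
for page in lam.get_paginator("list_functions").paginate():
    for function in page["Functions"]:
        name = function["FunctionName"]
        configs = lam.list_function_url_configs(FunctionName=name)
        for cfg in configs["FunctionUrlConfigs"]:
            if cfg["AuthType"] == "NONE":
                print(f"unauthenticated URL on {name}: {cfg['FunctionUrl']}")
```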
The attackers shifted focus to EC2 compute resources, querying over 1,300 Amazon Machine Images for deep learning applications. They successfully provisioned a p4d.24xlarge instance costing $32.77 per hour (approximately $23,600 monthly) with user data scripts to install CUDA, PyTorch, and a publicly accessible JupyterLab server on port 8888, providing backdoor access independent of AWS credentials.
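Unexpected GPU capacity is cheap to detect relative to what it costs to miss. A sketch that flags running accelerated instances; the instance-family list is illustrative, not exhaustive:

```python
import boto3

ec2 = boto3.client("ec2")

# Surface running GPU instances; a single p4d.24xlarge costs about $32.77/hour.
for page in ec2.get_paginator("describe_instances").paginate(Filters=[
        {"Name": "instance-type", "Values": ["p3.*", "p4d.*", "p5.*", "g5.*"]},
        {"Name": "instance-state-name", "Values": ["pending", "running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"],
                  instance["LaunchTime"])
```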
The threat actors employed multiple defense evasion tactics, including an IP rotator tool that changed the source address for each request, bypassing security controls that correlate operations from the same IP.
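Rotation defeats IP-based correlation, but it inverts into its own signal: one principal, many addresses. A rough CloudTrail sketch, with an arbitrary threshold to tune:

```python
import boto3
import json
from collections import defaultdict
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=1)

# Count distinct source IPs per principal instead of correlating on IP alone.
ips_per_principal = defaultdict(set)
for page in cloudtrail.get_paginator("lookup_events").paginate(StartTime=start):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        principal = detail.get("userIdentity", {}).get("arn", "unknown")
        ips_per_principal[principal].add(detail.get("sourceIPAddress", ""))

for principal, addresses in ips_per_principal.items():
    if len(addresses) >= 5:  # illustrative threshold
        print(f"{principal} used {len(addresses)} source IPs in the last hour")
```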
To defend against this attack path, organizations should:

- Implement least-privilege principles for all IAM users and roles
- Restrict UpdateFunctionConfiguration and PassRole permissions (a guardrail sketch follows this list)
- Enable Lambda function versioning for immutable code records
- Ensure S3 buckets containing sensitive data are not publicly accessible
- Enable model invocation logging for Amazon Bedrock
- Monitor for IAM Access Analyzer enumeration activity
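As one way to restrict the mutation path, the sketch below creates an explicit-Deny guardrail policy; the policy name and exempt deployer role are placeholders, and in practice this logic often lives in a service control policy or permissions boundary instead:

```python
import boto3
import json

iam = boto3.client("iam")

# Explicit Deny overrides any Allow, closing the Lambda mutation path
# for everyone except a designated deployment role (placeholder ARN).
guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "lambda:UpdateFunctionCode",
            "lambda:UpdateFunctionConfiguration",
            "iam:PassRole",
        ],
        "Resource": "*",
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/ci-deployer"
            }
        },
    }],
}

policy = iam.create_policy(
    PolicyName="deny-lambda-mutation-guardrail",
    PolicyDocument=json.dumps(guardrail),
)
print("created:", policy["Policy"]["Arn"])
```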
As LLMs become increasingly sophisticated, attacks of this nature will likely become more common, requiring organizations to prioritize runtime detection capabilities and least-privilege enforcement to defend against this accelerating threat landscape.
| Attack Stage | Elapsed Time | Key Techniques |
|---|---|---|
| Initial Access | < 1 minute | Credential theft from public S3 buckets containing RAG data |
| Reconnaissance | 2-3 minutes | Enumeration across 10+ AWS services using ReadOnlyAccess policy |
| Privilege Escalation | 4-5 minutes | Lambda code injection targeting admin user “frick” |
| Lateral Movement | 6-7 minutes | Compromise of 19 AWS principals via role assumption |
| LLMjacking | 8-9 minutes | Invocation of 9 foundation models on Amazon Bedrock |
| Resource Abuse | 9-10 minutes | p4d.24xlarge GPU instance provisioning ($32.77/hour) |
Indicators of Compromise
| IP address | VPN |
|---|---|
| 104.155.129[.]177 | Yes |
| 104.155.178[.]59 | Yes |
| 104.197.169[.]222 | Yes |
| 136.113.159[.]75 | Yes |
| 34.173.176[.]171 | Yes |
| 34.63.142[.]34 | Yes |
| 34.66.36[.]38 | Yes |
| 34.69.200[.]125 | Yes |
| 34.9.139[.]206 | Yes |
| 35.188.114[.]132 | Yes |
| 35.192.38[.]204 | Yes |
| 34.171.37[.]34 | Yes |
| 204.152.223[.]172 | Yes |
| 34.30.49[.]235 | Yes |
| 103.177.183[.]165 | No |
| 152.58.47[.]83 | No |
| 194.127.167[.]92 | No |
| 197.51.170[.]131 | No |
