Cloud security remains an evolving challenge as new attack vectors emerge, often leveraging misconfigurations rather than outright software vulnerabilities. In August 2024, researchers at Datadog Security Labs uncovered a novel name confusion attack that exploits a misconfiguration in the way Amazon Machine Images (AMIs) are retrieved within AWS environments. This attack, dubbed whoAMI, enables adversaries to execute arbitrary code within AWS accounts by manipulating AMI selection mechanisms.
This report provides an in-depth analysis of the whoAMI attack, its underlying mechanisms, real-world impact, mitigation strategies, and best practices for cloud security professionals.
Understanding the whoAMI Attack
The Role of AMIs in AWS Infrastructure
Amazon Machine Images (AMIs) are fundamental components in AWS environments, serving as pre-configured virtual machine templates used to launch EC2 instances. Organizations utilize AMIs for provisioning infrastructure efficiently, whether through publicly available images, community-contributed images, or proprietary private images.
Typically, AWS users searching for the latest OS-based AMI—such as Ubuntu or Amazon Linux—leverage the DescribeImages API to dynamically fetch the most recent image. However, the whoAMI attack exploits a specific misconfiguration in how these AMIs are retrieved.
Attack Mechanism: Exploiting Name Confusion
The whoAMI attack is a variant of a name confusion attack, a subset of supply chain attacks. Like dependency confusion, it tricks misconfigured systems into selecting a malicious resource instead of a legitimate one. Here, however, the targeted resource is not a software dependency but a virtual machine image (AMI).
The attack works as follows:
- The Vulnerability
  - Organizations often use AWS APIs or Infrastructure-as-Code (IaC) tools like Terraform to fetch AMIs dynamically.
  - A common method is to search for AMIs using wildcard patterns (e.g., fetching the latest Ubuntu 20.04 AMI).
  - The API call often omits the “owners” parameter, allowing results to include both official and third-party AMIs.
- Adversary’s Exploitation
  - An attacker publishes a malicious AMI with a name that matches an expected pattern (e.g., ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*).
  - The attacker ensures it carries the most recent creation date, so any query retrieving the latest AMI unknowingly selects the attacker’s image.
  - Once the victim’s infrastructure launches an EC2 instance from the compromised AMI, the attacker gains code execution within the AWS account.
- Potential Consequences
  - Code Execution & Lateral Movement: Attackers can execute malicious payloads upon instance launch.
  - Persistent Access: AMIs can be backdoored, enabling persistent unauthorized access.
  - Credential Theft: If AWS IAM roles and secrets are exposed within the instance, attackers can pivot into broader AWS services.
  - Supply Chain Compromise: Cloud workloads built on compromised AMIs introduce systemic vulnerabilities across organizations.
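The selection flaw described above can be sketched in a few lines of Python. This is a self-contained simulation, not real AWS API code: the catalog entries, names, dates, and the attacker account ID are invented for illustration, and `pick_latest_ami` simply mirrors what an unscoped image lookup with "most recent" semantics effectively does.

```python
import fnmatch

def pick_latest_ami(images, name_pattern):
    """Mimic an unscoped AMI lookup: match by name pattern only,
    then take the image with the most recent CreationDate."""
    matches = [img for img in images if fnmatch.fnmatch(img["Name"], name_pattern)]
    return max(matches, key=lambda img: img["CreationDate"])

# Hypothetical catalog: a legitimate image plus an attacker's look-alike.
catalog = [
    {"Name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20240801",
     "OwnerId": "099720109477",   # Canonical (legitimate publisher)
     "CreationDate": "2024-08-01T00:00:00Z"},
    {"Name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20991231",
     "OwnerId": "111111111111",   # attacker-controlled account (made up)
     "CreationDate": "2024-08-15T00:00:00Z"},
]

chosen = pick_latest_ami(
    catalog, "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*")
print(chosen["OwnerId"])  # → 111111111111 — the attacker's AMI wins on recency
```

Because nothing constrains the publisher, the attacker only needs a matching name and a newer creation date to win the race.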
Real-World Impact: AWS’s Own Infrastructure Was Vulnerable
One of the most alarming findings of the whoAMI research was that even AWS’s internal non-production systems were vulnerable to this attack.
AWS Internal Vulnerability
Datadog Security Labs conducted a controlled experiment to determine whether AWS itself was susceptible to this flaw. They:
- Published two benign AMIs designed to mimic official Amazon Linux images.
- Observed tens of thousands of API requests retrieving their images, indicating AWS internal services were fetching AMIs in an insecure manner.
AWS promptly addressed this issue after disclosure, emphasizing that only non-production systems were affected and that no customer data was at risk. This incident, however, highlights the pervasiveness of this vulnerability and its potential for large-scale exploitation.
Extent of Affected Systems
- Datadog estimated that approximately 1% of AWS-using organizations were at risk.
- Thousands of AWS accounts could be unknowingly deploying compromised AMIs.
Exploitation Demonstration and Affected Open-Source Projects
Example of Vulnerable Code
A typical Terraform configuration that fails to specify an AMI owner might look like this:
```hcl
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
```
In this case, Terraform fetches the most recent AMI matching the filter, regardless of who published it.
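The missing safeguard is the equivalent of Terraform's `owners` argument. As a minimal sketch (plain Python, no AWS calls; the catalog is invented, though 099720109477 is Canonical's published AWS account ID), the same lookup with an owner allow-list rejects the impostor:

```python
import fnmatch

# Canonical's well-known AWS account ID; extend the set with other
# trusted publishers (Amazon, Microsoft, your own accounts) as needed.
TRUSTED_OWNERS = {"099720109477"}

def pick_latest_trusted_ami(images, name_pattern, allowed_owners=TRUSTED_OWNERS):
    """Owner-scoped lookup: discard images from unknown accounts
    before choosing the most recent name match."""
    matches = [
        img for img in images
        if img["OwnerId"] in allowed_owners
        and fnmatch.fnmatch(img["Name"], name_pattern)
    ]
    if not matches:
        raise LookupError("no AMI from a trusted owner matched the pattern")
    return max(matches, key=lambda img: img["CreationDate"])

# Hypothetical catalog: a legitimate image and a newer impostor.
catalog = [
    {"Name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20240801",
     "OwnerId": "099720109477", "CreationDate": "2024-08-01T00:00:00Z"},
    {"Name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20991231",
     "OwnerId": "111111111111", "CreationDate": "2024-08-15T00:00:00Z"},
]

chosen = pick_latest_trusted_ami(
    catalog, "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*")
print(chosen["OwnerId"])  # → 099720109477 — only the trusted image is eligible
```

The attacker's newer image never enters the candidate set, so recency can no longer be abused.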
Affected Open-Source Tools
During their research, Datadog found several publicly accessible repositories with vulnerable AMI retrieval patterns. One notable example is awslabs/aws-simple-ec2-cli, an AWS-maintained tool designed to simplify EC2 instance management. The tool contained hardcoded wildcard AMI queries without verifying AMI ownership, making it susceptible to the whoAMI attack.
Detection and Mitigation Strategies
To prevent exploitation, cloud security teams should implement robust AMI selection controls and proactively scan their environments for vulnerabilities.
Detection: How to Identify Vulnerable Code
- Codebase Audits
  - Use GitHub code search to identify AMI retrieval queries that omit owner verification.
  - Search for patterns like ec2:DescribeImages calls that lack an “owners” filter.
- Automated Scanning with Semgrep
  - Security teams can leverage Semgrep rules to detect unsafe AMI selection logic.
- Cloud SIEM and AWS CloudTrail Monitoring
  - Implement Cloud SIEM rules to flag suspicious AMI retrievals.
  - Regularly review CloudTrail logs for unexpected AMI launches.
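A rough version of such a codebase audit can be automated. The sketch below is a simple textual check (not a full HCL parser, and not Datadog's scanner) that flags Terraform `data "aws_ami"` blocks that never set an `owners` argument; the sample configuration is illustrative.

```python
import re

def find_unscoped_aws_ami_blocks(hcl_source):
    """Flag data "aws_ami" blocks missing an `owners` argument.
    A rough textual check, not a full HCL parser."""
    flagged = []
    for match in re.finditer(r'data\s+"aws_ami"\s+"([^"]+)"\s*\{', hcl_source):
        # Walk forward to the matching closing brace of this block.
        depth, i = 1, match.end()
        while i < len(hcl_source) and depth:
            depth += {"{": 1, "}": -1}.get(hcl_source[i], 0)
            i += 1
        body = hcl_source[match.end():i]
        if not re.search(r'\bowners\s*=', body):
            flagged.append(match.group(1))
    return flagged

sample = '''
data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["137112412989"]
}
'''

print(find_unscoped_aws_ami_blocks(sample))  # → ['ubuntu']
```

A check like this can run in CI to catch unscoped lookups before they reach production.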
Mitigation: Securing AMI Retrieval Processes
- Utilize AWS’s Allowed AMIs Feature
  - In response to this attack, AWS introduced Allowed AMIs in December 2024.
  - This feature enables users to create an allow list of trusted AMI providers, restricting deployments to verified sources.
- Explicitly Specify AMI Owners
  - When querying AMIs, always specify trusted owner IDs (e.g., AWS, Canonical, Microsoft).
  - Example of a secure API query:

```bash
aws ec2 describe-images --owners "137112412989"
```

  - This ensures that only Amazon-owned AMIs are retrieved.
- Use Datadog’s whoAMI-Scanner
  - Datadog released an open-source security scanner to detect and prevent untrusted AMI usage.
- Security Awareness and Policy Enforcement
  - Cloud security teams should educate developers about the risks of untrusted AMIs.
  - Organizations should enforce IaC security policies through tools like AWS Config, Open Policy Agent (OPA), and Terraform Sentinel.
Conclusion
The whoAMI attack highlights a critical supply chain risk in cloud environments, demonstrating how seemingly benign misconfigurations can have severe security implications. While AWS has introduced Allowed AMIs as a mitigation, the responsibility ultimately lies with organizations to enforce secure cloud practices.
Security teams must actively monitor and secure AMI selection processes, ensuring that infrastructure deployments rely on trusted and verified resources. By adopting proactive security controls, automated detection mechanisms, and best practices for AMI retrieval, organizations can effectively mitigate the risks of name confusion attacks.
Key Takeaways for Cybersecurity Professionals
✅ Misconfigured AMI queries can lead to remote code execution risks.
✅ AWS itself was vulnerable but has since implemented mitigations.
✅ Explicitly specifying AMI owners in API calls is a critical best practice.
✅ AWS Allowed AMIs feature should be enabled for enhanced security.
✅ Organizations must adopt automated scanning tools to detect unsafe AMI selection.
Information security specialist, currently working as risk infrastructure specialist & investigator.
15 years of experience in risk and control process, security audit support, business continuity design and support, workgroup management and information security standards.