
Cybersecurity researchers have disclosed a new type of name confusion attack called whoAMI that allows anyone who publishes an Amazon Machine Image (AMI) with a specific name to gain code execution within Amazon Web Services (AWS) accounts.
"If executed at scale, this attack could be used to gain access to thousands of accounts," Datadog Security Labs researcher Seth Art said in a report shared with The Hacker News. "The vulnerable pattern can be found in many private and open source code repositories."
At its heart, the attack is a subset of a supply chain attack that involves publishing a malicious resource and tricking misconfigured software into using it in place of its legitimate counterpart.

The attack exploits the fact that AMIs, the virtual machine images used to boot up Amazon Elastic Compute Cloud (EC2) instances, can be published to the community catalog, and that developers may omit to mention the "--owners" attribute when searching for one via the ec2:DescribeImages API.
Put another way, a name confusion attack requires that the victim meet the following three conditions when retrieving an AMI ID via the API –
Use of the name filter,
Failure to specify either the owner, owner-alias, or owner-id parameters, and
Fetching the most recently created image from the returned list of matching images ("most_recent=true")
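The three conditions above can be sketched in boto3-style Python. The commented-out API call, the name wildcard, and the `pick_most_recent` helper below are illustrative, not taken from the report; the runnable part emulates the "most_recent=true" selection on sample data:

```python
# Hypothetical helper mirroring "most_recent=true" behavior: pick the
# newest image from an ec2:DescribeImages response by creation date.
def pick_most_recent(images):
    return max(images, key=lambda img: img["CreationDate"])

# Vulnerable lookup (sketch): a name filter is used, but no Owners
# restriction, so public community AMIs from ANY account can match.
#
#   import boto3
#   ec2 = boto3.client("ec2")
#   resp = ec2.describe_images(
#       Filters=[{"Name": "name",
#                 "Values": ["ubuntu/images/*22.04-amd64-server-*"]}],
#   )
#   ami = pick_most_recent(resp["Images"])  # may be attacker-controlled

# With no owner restriction, an attacker's newer, name-matching AMI wins:
candidates = [
    {"ImageId": "ami-legit", "OwnerId": "099720109477",  # legitimate publisher
     "CreationDate": "2024-08-01T00:00:00.000Z"},
    {"ImageId": "ami-evil", "OwnerId": "111111111111",   # attacker's account
     "CreationDate": "2024-09-01T00:00:00.000Z"},
]
print(pick_most_recent(candidates)["ImageId"])  # → ami-evil
```

Because ISO 8601 timestamps sort lexicographically, the attacker only needs to publish a matching image more recently than the legitimate one to win the selection.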
This leads to a scenario in which an attacker can publish a malicious AMI with a name that matches the pattern specified in the search criteria, causing the victim to create an EC2 instance using the attacker's image.
This, in turn, grants the threat actor remote code execution (RCE) capabilities on the instance, allowing them to initiate various post-exploitation actions.
All the attacker needs is an AWS account to publish their backdoored AMI to the public Community AMI catalog and to choose a name that matches the AMIs their targets are searching for.
"It's very similar to a dependency confusion attack, except that in the latter the malicious resource is a software dependency (such as a pip package), whereas in the whoAMI name confusion attack, the malicious resource is a virtual machine image," Art said.
Datadog said about 1% of the organizations it monitors were affected by the whoAMI attack, and that it found public examples of code written in Python, Go, Java, Terraform, Pulumi, and Bash shell using the vulnerable criteria.
Following responsible disclosure on September 16, 2024, the issue was addressed by Amazon three days later. When reached for comment, AWS told The Hacker News it found no evidence that the technique was abused in the wild.
"All AWS services are operating as designed. Based on extensive log analysis and monitoring, our investigation confirmed that the technique described in this research has only been executed by the authorized researchers themselves, with no evidence of usage by any other parties," the company said.

"This technique could affect customers who retrieve Amazon Machine Image (AMI) IDs via the ec2:DescribeImages API without specifying the owner value. In December 2024, we introduced Allowed AMIs, a new account-wide setting that enables customers to limit the discovery and use of AMIs within their AWS accounts," it added.
As of November last year, HashiCorp Terraform has begun issuing warnings to users when "most_recent = true" is used without an owner filter, starting with terraform-provider-aws version 5.77.0. The warning diagnostic is expected to be upgraded to an error effective version 6.0.0.
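The fix the warning points toward is to pin the lookup to a trusted owner so community images can never match. A minimal boto3-style sketch, assuming an illustrative trusted account ID and a hypothetical `filter_trusted` helper that mirrors what passing `Owners=` to ec2:DescribeImages achieves server-side:

```python
# Hypothetical allow-list of publisher account IDs (illustrative value).
TRUSTED_OWNERS = {"099720109477"}

def filter_trusted(images, trusted=TRUSTED_OWNERS):
    """Keep only images published by explicitly trusted accounts."""
    return [img for img in images if img["OwnerId"] in trusted]

# Safe lookup (sketch): the Owners parameter restricts matching to the
# trusted publisher, so no community AMI can hijack the name pattern.
#
#   resp = ec2.describe_images(
#       Owners=["099720109477"],
#       Filters=[{"Name": "name",
#                 "Values": ["ubuntu/images/*22.04-amd64-server-*"]}],
#   )

images = [
    {"ImageId": "ami-legit", "OwnerId": "099720109477",
     "CreationDate": "2024-08-01T00:00:00.000Z"},
    {"ImageId": "ami-evil", "OwnerId": "111111111111",
     "CreationDate": "2024-09-01T00:00:00.000Z"},
]
print([img["ImageId"] for img in filter_trusted(images)])  # → ['ami-legit']
```

Even though the attacker's image is newer, it is excluded before any "most recent" selection takes place, which is exactly the property the owner filter provides.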