Commit
Merge pull request #209 from OWASP/feat/k8s-1.25
Feat(#199): restricted PSS/PSA with K8s 1.25
commjoen authored Mar 10, 2023
2 parents f2dbe50 + c4ba598 commit 7bbfe00
Showing 20 changed files with 331 additions and 142 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/minikube-k8s-test.yml
@@ -25,7 +25,7 @@ jobs:
with:
minikube-version: 1.29.0
driver: docker
-    kubernetes-version: v1.23.12
+    kubernetes-version: v1.25.6
- name: test script
run: |
eval $(minikube docker-env)
4 changes: 2 additions & 2 deletions .gitignore
@@ -1,9 +1,9 @@
 # Terraform
 kubeconfig_wrongsecrets-exercise-cluster
-.terraform
 .terraform.lock.hcl
+.terraform*
+.terraform
-terraform.tfstate*
 .terraform.tfstate*
 aws/terraform.tfstate.*
 aws/terraform.tfstate.backup
 aws/.terraform.tfstate.lock.info
145 changes: 145 additions & 0 deletions aws/.terraform.lock.hcl

Some generated files are not rendered by default.

20 changes: 12 additions & 8 deletions aws/README.md
@@ -135,18 +135,18 @@ The documentation below is auto-generated to give insight on what's created via

| Name | Version |
|------|---------|
-| <a name="provider_aws"></a> [aws](#provider\_aws) | ~> 4.1 |
-| <a name="provider_http"></a> [http](#provider\_http) | ~> 3.1 |
-| <a name="provider_random"></a> [random](#provider\_random) | ~> 3.0 |
+| <a name="provider_aws"></a> [aws](#provider\_aws) | 4.58.0 |
+| <a name="provider_http"></a> [http](#provider\_http) | 3.2.1 |
+| <a name="provider_random"></a> [random](#provider\_random) | 3.4.3 |

## Modules

| Name | Source | Version |
|------|--------|---------|
-| <a name="module_cluster_autoscaler_irsa_role"></a> [cluster\_autoscaler\_irsa\_role](#module\_cluster\_autoscaler\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
-| <a name="module_ebs_csi_irsa_role"></a> [ebs\_csi\_irsa\_role](#module\_ebs\_csi\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
-| <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 19.7.0 |
-| <a name="module_load_balancer_controller_irsa_role"></a> [load\_balancer\_controller\_irsa\_role](#module\_load\_balancer\_controller\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
+| <a name="module_cluster_autoscaler_irsa_role"></a> [cluster\_autoscaler\_irsa\_role](#module\_cluster\_autoscaler\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.11.2 |
+| <a name="module_ebs_csi_irsa_role"></a> [ebs\_csi\_irsa\_role](#module\_ebs\_csi\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.11.2 |
+| <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 19.10.0 |
+| <a name="module_load_balancer_controller_irsa_role"></a> [load\_balancer\_controller\_irsa\_role](#module\_load\_balancer\_controller\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.11.2 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 3.19.0 |

## Resources
@@ -190,7 +190,7 @@ The documentation below is auto-generated to give insight on what's created via
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | The EKS cluster name | `string` | `"wrongsecrets-exercise-cluster"` | no |
-| <a name="input_cluster_version"></a> [cluster\_version](#input\_cluster\_version) | The EKS cluster version to use | `string` | `"1.23"` | no |
+| <a name="input_cluster_version"></a> [cluster\_version](#input\_cluster\_version) | The EKS cluster version to use | `string` | `"1.25"` | no |
| <a name="input_extra_allowed_ip_ranges"></a> [extra\_allowed\_ip\_ranges](#input\_extra\_allowed\_ip\_ranges) | Allowed IP ranges in addition to creator IP | `list(string)` | `[]` | no |
| <a name="input_region"></a> [region](#input\_region) | The AWS region to use | `string` | `"eu-west-1"` | no |
| <a name="input_state_bucket_arn"></a> [state\_bucket\_arn](#input\_state\_bucket\_arn) | ARN of the state bucket to grant access to the s3 user | `string` | n/a | yes |
@@ -199,6 +199,8 @@ The documentation below is auto-generated to give insight on what's created via

| Name | Description |
|------|-------------|
+| <a name="output_cluster_autoscaler_role"></a> [cluster\_autoscaler\_role](#output\_cluster\_autoscaler\_role) | Cluster autoscaler role |
+| <a name="output_cluster_autoscaler_role_arn"></a> [cluster\_autoscaler\_role\_arn](#output\_cluster\_autoscaler\_role\_arn) | Cluster autoscaler role arn |
| <a name="output_cluster_endpoint"></a> [cluster\_endpoint](#output\_cluster\_endpoint) | Endpoint for EKS control plane. |
| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | The id of the cluster |
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The EKS cluster name |
@@ -207,6 +209,8 @@ The documentation below is auto-generated to give insight on what's created via
| <a name="output_ebs_role_arn"></a> [ebs\_role\_arn](#output\_ebs\_role\_arn) | EBS CSI driver role |
| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role name used in the IRSA setup |
| <a name="output_irsa_role_arn"></a> [irsa\_role\_arn](#output\_irsa\_role\_arn) | The role ARN used in the IRSA setup |
+| <a name="output_load_balancer_controller_role"></a> [load\_balancer\_controller\_role](#output\_load\_balancer\_controller\_role) | Load balancer controller role |
+| <a name="output_load_balancer_controller_role_arn"></a> [load\_balancer\_controller\_role\_arn](#output\_load\_balancer\_controller\_role\_arn) | Load balancer controller role arn |
| <a name="output_secrets_manager_secret_name"></a> [secrets\_manager\_secret\_name](#output\_secrets\_manager\_secret\_name) | The name of the secrets manager secret |
| <a name="output_state_bucket_name"></a> [state\_bucket\_name](#output\_state\_bucket\_name) | Terraform s3 state bucket name |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
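
The module table above keeps pessimistic (`~>`) version constraints, while the provider table now lists the exact versions resolved in the new lock file. As a minimal sketch of how such a constraint reads in Terraform (block trimmed; the module's required inputs are omitted and this exact block is not taken from the repo):

```hcl
module "cluster_autoscaler_irsa_role" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.11.2" # accepts any 5.11.x >= 5.11.2, rejects 5.12.0

  # Required inputs (role name, OIDC provider wiring, etc.) omitted for brevity.
}
```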
60 changes: 17 additions & 43 deletions aws/build-an-deploy-aws.sh
@@ -43,14 +43,20 @@ CLUSTERNAME="$(terraform output -raw cluster_name)"
STATE_BUCKET="$(terraform output -raw state_bucket_name)"
IRSA_ROLE_ARN="$(terraform output -raw irsa_role_arn)"
EBS_ROLE_ARN="$(terraform output -raw ebs_role_arn)"
+CLUSTER_AUTOSCALER_ROLE_ARN="$(terraform output -raw cluster_autoscaler_role_arn)"

echo "CLUSTERNAME=${CLUSTERNAME}"
echo "STATE_BUCKET=${STATE_BUCKET}"
echo "IRSA_ROLE_ARN=${IRSA_ROLE_ARN}"
echo "EBS_ROLE_ARN=${EBS_ROLE_ARN}"
+echo "CLUSTER_AUTOSCALER_ROLE_ARN=${CLUSTER_AUTOSCALER_ROLE_ARN}"

version="$(uuidgen)"

+aws eks update-kubeconfig --region $AWS_REGION --name $CLUSTERNAME --kubeconfig ~/.kube/wrongsecrets
+
+export KUBECONFIG=~/.kube/wrongsecrets

echo "If the below output is different than expected: please hard stop this script (running aws sts get-caller-identity first)"

aws sts get-caller-identity
@@ -59,24 +65,6 @@ echo "Giving you 4 seconds before we add autoscaling"

sleep 4

-# echo "Installing policies and service accounts"
-
-# aws iam create-policy \
-#   --policy-name AmazonEKSClusterAutoscalerPolicy \
-#   --policy-document file://cluster-autoscaler-policy.json
-
-# echo "Installing iamserviceaccount"
-
-# eksctl create iamserviceaccount \
-#   --cluster=$CLUSTERNAME \
-#   --region=$AWS_REGION \
-#   --namespace=kube-system \
-#   --name=cluster-autoscaler \
-#   --role-name=AmazonEKSClusterAutoscalerRole \
-#   --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
-#   --override-existing-serviceaccounts \
-#   --approve
-
echo "Deploying the k8s autoscaler for eks through kubectl"

curl -o cluster-autoscaler-autodiscover.yaml https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
@@ -86,8 +74,8 @@ kubectl apply -f cluster-autoscaler-autodiscover.yaml

echo "annotating service account for cluster-autoscaler"
kubectl annotate serviceaccount cluster-autoscaler \
-  -n kube-system \
-  eks.amazonaws.com/role-arn=${CLUSTER_AUTOSCALER}
+  -n kube-system --overwrite \
+  eks.amazonaws.com/role-arn=${CLUSTER_AUTOSCALER_ROLE_ARN}

kubectl patch deployment cluster-autoscaler \
-n kube-system \
@@ -105,6 +93,9 @@ else
helm upgrade --install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --set enableSecretRotation=true --set rotationPollInterval=60s
fi

+echo "Patching default namespace"
+kubectl apply -f k8s/workspace-psa.yml

echo "Install ACSP"
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
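
The `k8s/workspace-psa.yml` patch applied above is not rendered in this diff. Under Kubernetes 1.25, PodSecurityPolicy is removed and the restricted Pod Security Standard is enforced per namespace via Pod Security Admission labels. A minimal sketch of such a manifest (namespace name and label levels are assumptions, not taken from the repo):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    # Pod Security Admission labels (Kubernetes 1.25+):
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    pod-security.kubernetes.io/warn: restricted
```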

@@ -154,33 +145,16 @@ helm upgrade --install mj ../helm/wrongsecrets-ctf-party \
--set="balancer.env.REACT_APP_CREATE_TEAM_HMAC_KEY=${CREATE_TEAM_HMAC}" \
--set="balancer.cookie.cookieParserSecret=${COOKIE_PARSER_SECRET}"

-# echo "Installing EBS CSI driver"
-# eksctl create iamserviceaccount \
-#   --name ebs-csi-controller-sa \
-#   --namespace kube-system \
-#   --cluster $CLUSTERNAME \
-#   --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
-#   --approve \
-#   --role-only \
-#   --role-name AmazonEKS_EBS_CSI_DriverRole
-#   --region $AWS_REGION
-
-# echo "managing EBS CSI Driver as a separate eks addon"
-# eksctl create addon --name aws-ebs-csi-driver \
-#   --cluster $CLUSTERNAME \
-#   --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
-#   --force \
-#   --region $AWS_REGION

# Install CTFd

echo "Installing CTFd"

export HELM_EXPERIMENTAL_OCI=1
kubectl create namespace ctfd

+# Double base64 encoding to prevent weird character errors in ctfd
 helm upgrade --install ctfd -n ctfd oci://ghcr.io/bman46/ctfd/ctfd \
-  --set="redis.auth.password=$(openssl rand -base64 24)" \
-  --set="mariadb.auth.rootPassword=$(openssl rand -base64 24)" \
-  --set="mariadb.auth.password=$(openssl rand -base64 24)" \
-  --set="mariadb.auth.replicationPassword=$(openssl rand -base64 24)" \
+  --set="redis.auth.password=$(openssl rand -base64 24 | base64)" \
+  --set="mariadb.auth.rootPassword=$(openssl rand -base64 24 | base64)" \
+  --set="mariadb.auth.password=$(openssl rand -base64 24 | base64)" \
+  --set="mariadb.auth.replicationPassword=$(openssl rand -base64 24 | base64)" \
--set="env.open.SECRET_KEY=test" # this key isn't actually necessary in a setup with CTFd
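
The double base64 encoding applied to the generated passwords above can be sketched in isolation; the variable names here are illustrative, not part of the script:

```shell
#!/bin/sh
# Generate a random secret, then base64-encode it a second time so that
# characters like '+' and '/' survive the Helm -> CTFd value handling.
RAW_SECRET="$(openssl rand -base64 24)"
DOUBLE_ENCODED="$(printf '%s' "$RAW_SECRET" | base64)"

# A consumer decodes once to recover the original single-encoded value.
DECODED="$(printf '%s' "$DOUBLE_ENCODED" | base64 -d)"
```

Decoding `DOUBLE_ENCODED` once yields `RAW_SECRET` again, which is the value the chart's consumers ultimately use.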
25 changes: 1 addition & 24 deletions aws/cleanup-aws-autoscaling-and-helm.sh
@@ -24,7 +24,7 @@ fi
ACCOUNT_ID=$(aws sts get-caller-identity | jq '.Account' -r)
echo "ACCOUNT_ID=${ACCOUNT_ID}"

-kubectl delete -f k8s/wrongsecrets-balancer-ingress.yaml
+kubectl delete -f k8s/wrongsecrets-balancer-ingress.yml
kubectl delete -f k8s/ctfd-ingress.yaml

sleep 5 # Give the controller some time to catch the ingress change
@@ -36,26 +36,3 @@ helm uninstall csi-secrets-store \
echo "Cleanup helm chart projectcalico"
helm uninstall calico \
-n default

-echo "cleanup serviceaccont"
-echo "Cleanup iam serviceaccount and policy"
-eksctl delete iamserviceaccount \
-  --cluster $CLUSTERNAME \
-  --name cluster-autoscaler \
-  --namespace kube-system \
-  --region $AWS_REGION
-
-
-sleep 5 # Prevents race condition - command below may error out because it's still 'attached'
-
-aws iam delete-policy \
-  --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy
-
-
-echo "Cleanup CSI driver SA"
-
-eksctl delete iamserviceaccount \
-  --cluster $CLUSTERNAME \
-  --name ebs-csi-controller-sa \
-  --namespace kube-system \
-  --region $AWS_REGION
17 changes: 6 additions & 11 deletions aws/k8s-aws-alb-script-cleanup.sh
@@ -28,21 +28,16 @@ echo "cleanup k8s ingress and service. This may take a while"
kubectl delete service wrongsecrets-balancer
kubectl delete ingress wrongsecrets-balancer

+kubectl delete ingress ctfd -n ctfd

# Give some time for the controller to remove cleaned ingresses
sleep 5

echo "Cleanup helm chart"
helm uninstall aws-load-balancer-controller \
-n kube-system

echo "Cleanup k8s ALB"
kubectl delete -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

-echo "Cleanup iam serviceaccount and policy"
-eksctl delete iamserviceaccount \
-  --cluster $CLUSTERNAME \
-  --name aws-load-balancer-controller \
-  --namespace kube-system \
-  --region $AWS_REGION
-
-sleep 5 # Prevents race condition - command below may error out because it's still 'attached'
-
-aws iam delete-policy \
-  --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy
+kubectl delete serviceaccount -n kube-system aws-load-balancer-controller