A step-by-step tutorial for those who have no experience with Amazon EKS. After finishing the tutorial, you should be able to run general workloads on Amazon EKS. Hope you enjoy the journey.
Please note this tutorial is for demonstration purposes only. Please DO NOT blindly apply it to your production environments.
- eksctl - The official CLI for Amazon EKS
- kubectl - The Kubernetes command-line tool
- helm - The Kubernetes Package Manager
- Amazon EKS 1.23 or higher - to support `autoscaling/v2`; learn more at KEP-2702.
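If you want to confirm that the `autoscaling/v2` API is actually served by your cluster, one quick check is:

```shell
# Lists the API versions the cluster serves; expect to see "autoscaling/v2"
kubectl api-versions | grep autoscaling
```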
Click here 🔍
- Amazon EKS 1.31 - support was added in eksctl-0.191.0.
- Amazon EKS 1.30 - support was added in eksctl-0.179.0.
- Amazon EKS 1.29 - support was added in eksctl-0.169.0.
- Amazon EKS 1.28 - support was added in eksctl-0.160.0.
- Amazon EKS 1.27 - support was added in eksctl-0.143.0.
- Amazon EKS 1.26 - support was added in eksctl-0.138.0.
- Amazon EKS 1.25 - support was added in eksctl-0.132.0.
- Amazon EKS 1.24 - support was added in eksctl-0.120.0.
- Amazon EKS 1.23 - support was added in eksctl-0.109.0.
- Amazon EKS 1.22 - support was added in eksctl-0.92.0 and removed in eksctl-0.151.0.
- Your AWS profile has the proper permissions configured.
- All required tools are set up properly.
- All resources are created in `us-east-1`.
- The cluster name is `eks-demo`.
- Goal 1: Create an EKS cluster with `eksctl`
- Goal 2: Deploy nginx with an Application Load Balancer (ALB)
- Goal 3: Find out why the Application Load Balancer (ALB) is not working
- Goal 4: Find out why Horizontal Pod Autoscaling (HPA) is not working
- Goal 5: HPA is working. Now I want to set the nginx replica count with `kubectl scale ...`, but it fails. Why?
- Goal 6: Remove the HPA and try to scale to `20` manually
- Goal 7: Turn the ALB entry from HTTP into HTTPS
- Goal 8: How to switch to a Network Load Balancer (NLB)?
- Goal 9: Cleanup
Make sure you have the latest `eksctl` installed; you should then be able to create an EKS cluster with a minimal setup as follows.
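For reference, a minimal `cluster-minimal.yaml` might look like the following sketch. The values are reconstructed from the sample output (cluster name, region, Kubernetes version, CloudWatch log retention, a `karpenter` Fargate profile, and a 2-node managed nodegroup `mng-1`); treat it as an assumption, not the exact file:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-demo
  region: us-east-1
  version: "1.31"

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
    logRetentionInDays: 90

fargateProfiles:
  - name: karpenter
    selectors:
      - namespace: karpenter   # assumed selector for the Fargate profile

managedNodeGroups:
  - name: mng-1
    desiredCapacity: 2
```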
% eksctl create cluster -f ./cluster-config/cluster-minimal.yaml
Click here to show sample deployment output 🔍
2024-XX-XX XX:XX:XX [ℹ] eksctl version 0.194.0
2024-XX-XX XX:XX:XX [ℹ] using region us-east-1
2024-XX-XX XX:XX:XX [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2024-XX-XX XX:XX:XX [ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
2024-XX-XX XX:XX:XX [ℹ] nodegroup "mng-1" will use "" [AmazonLinux2023/1.31]
2024-XX-XX XX:XX:XX [ℹ] using Kubernetes version 1.31
2024-XX-XX XX:XX:XX [ℹ] creating EKS cluster "eks-demo" in "us-east-1" region with Fargate profile and managed nodes
2024-XX-XX XX:XX:XX [ℹ] 1 nodegroup (mng-1) was included (based on the include/exclude rules)
2024-XX-XX XX:XX:XX [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-XX-XX XX:XX:XX [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2024-XX-XX XX:XX:XX [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=eks-demo'
2024-XX-XX XX:XX:XX [ℹ] Kubernetes API endpoint access will use provided values {publicAccess=true, privateAccess=true} for cluster "eks-demo" in "us-east-1"
2024-XX-XX XX:XX:XX [ℹ] configuring CloudWatch logging for cluster "eks-demo" in "us-east-1" (enabled types: api, audit, authenticator, controllerManager, scheduler & no types disabled)
2024-XX-XX XX:XX:XX [ℹ]
2 sequential tasks: { create cluster control plane "eks-demo",
2 sequential sub-tasks: {
4 sequential sub-tasks: {
1 task: { create addons },
wait for control plane to become ready,
update CloudWatch log retention,
create fargate profiles,
},
create managed nodegroup "mng-1",
}
}
2024-XX-XX XX:XX:XX [ℹ] building cluster stack "eksctl-eks-demo-cluster"
2024-XX-XX XX:XX:XX [ℹ] deploying stack "eksctl-eks-demo-cluster"
2024-XX-XX XX:XX:XX [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-cluster"
2024-XX-XX XX:XX:XX [ℹ] creating addon
2024-XX-XX XX:XX:XX [ℹ] successfully created addon
2024-XX-XX XX:XX:XX [ℹ] creating addon
2024-XX-XX XX:XX:XX [ℹ] successfully created addon
2024-XX-XX XX:XX:XX [ℹ] "addonsConfig.autoApplyPodIdentityAssociations" is set to true; will lookup recommended pod identity configuration for "vpc-cni" addon
2024-XX-XX XX:XX:XX [ℹ] deploying stack "eksctl-eks-demo-addon-vpc-cni-podidentityrole-aws-node"
2024-XX-XX XX:XX:XX [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-addon-vpc-cni-podidentityrole-aws-node"
2024-XX-XX XX:XX:XX [ℹ] creating addon
2024-XX-XX XX:XX:XX [ℹ] successfully created addon
2024-XX-XX XX:XX:XX [ℹ] creating addon
2024-XX-XX XX:XX:XX [ℹ] successfully created addon
2024-XX-XX XX:XX:XX [ℹ] set log retention to 90 days for CloudWatch logging
2024-XX-XX XX:XX:XX [ℹ] creating Fargate profile "karpenter" on EKS cluster "eks-demo"
2024-XX-XX XX:XX:XX [ℹ] created Fargate profile "karpenter" on EKS cluster "eks-demo"
2024-XX-XX XX:XX:XX [ℹ] building managed nodegroup stack "eksctl-eks-demo-nodegroup-mng-1"
2024-XX-XX XX:XX:XX [ℹ] deploying stack "eksctl-eks-demo-nodegroup-mng-1"
2024-XX-XX XX:XX:XX [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-nodegroup-mng-1"
2024-XX-XX XX:XX:XX [ℹ] waiting for the control plane to become ready
2024-XX-XX XX:XX:XX [✔] saved kubeconfig as "/Users/demoUser/.kube/config"
2024-XX-XX XX:XX:XX [ℹ] no tasks
2024-XX-XX XX:XX:XX [✔] all EKS cluster resources for "eks-demo" have been created
2024-XX-XX XX:XX:XX [✔] created 0 nodegroup(s) in cluster "eks-demo"
2024-XX-XX XX:XX:XX [ℹ] nodegroup "mng-1" has 2 node(s)
2024-XX-XX XX:XX:XX [ℹ] node "ip-192-168-121-236.ec2.internal" is ready
2024-XX-XX XX:XX:XX [ℹ] node "ip-192-168-90-226.ec2.internal" is ready
2024-XX-XX XX:XX:XX [ℹ] waiting for at least 2 node(s) to become ready in "mng-1"
2024-XX-XX XX:XX:XX [ℹ] nodegroup "mng-1" has 2 node(s)
2024-XX-XX XX:XX:XX [ℹ] node "ip-192-168-121-236.ec2.internal" is ready
2024-XX-XX XX:XX:XX [ℹ] node "ip-192-168-90-226.ec2.internal" is ready
2024-XX-XX XX:XX:XX [✔] created 1 managed nodegroup(s) in cluster "eks-demo"
2024-XX-XX XX:XX:XX [ℹ] kubectl command should work with "/Users/demoUser/.kube/config", try 'kubectl get nodes'
2024-XX-XX XX:XX:XX [✔] EKS cluster "eks-demo" in "us-east-1" region is ready
Verify the EKS nodes are running.
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-121-236.ec2.internal Ready <none> 5m47s v1.30.2-eks-1552ad0
ip-192-168-90-226.ec2.internal Ready <none> 5m49s v1.30.2-eks-1552ad0
At this stage, you need to have `kubectl` installed. Then you should be able to create the `Deployment`, `HPA`, `Service`, and `Ingress` resources.
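For reference, the manifests under `./examples/simple/` might look roughly like the following sketch. The resource names and the 2-10 replica / 80% CPU HPA settings match the command output; the container image, labels, resource requests, and annotations are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m   # the HPA needs a CPU request to compute utilization
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: entry1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```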
% kubectl apply -f ./examples/simple/
deployment.apps/nginx-deployment created
horizontalpodautoscaler.autoscaling/nginx-hpa created
ingress.networking.k8s.io/nginx-ingress created
service/nginx-service created
Make sure everything runs as expected.
% kubectl get pods,deployments,hpa,service,ingress
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-598bb489bf-c55jl 1/1 Running 0 35s
pod/nginx-deployment-598bb489bf-x86dd 1/1 Running 0 35s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2/2 2 2 35s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/nginx-hpa Deployment/nginx-deployment <unknown>/80% 2 10 2 34s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 16m
service/nginx-service NodePort 10.100.94.158 <none> 80:31251/TCP 34s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/nginx-ingress alb entry1.example.com 80 35s
% kubectl get ingress nginx-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress alb entry1.example.com 80 50s # <-------- no address shown, why?
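A couple of debugging hints (assuming the AWS Load Balancer Controller is expected to run in `kube-system` under its default Helm release name):

```shell
# Events here show why no ALB was provisioned -- or show nothing at all
# if no controller is watching Ingress resources of class "alb"
kubectl describe ingress nginx-ingress

# Is the AWS Load Balancer Controller installed? EKS does not ship it by default.
kubectl get deployment -n kube-system aws-load-balancer-controller
```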
After fixing the issue, you should see command output as follows:
% kubectl get ingress nginx-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress alb entry1.example.com k8s-default-XXX.REGION.elb.amazonaws.com 80 60s
Once the Load Balancer is created, you should be able to visit the application via the load balancer endpoint using the default `HTTP` protocol.
% kubectl get hpa nginx-hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx-deployment <unknown>/80% 2 10 2 68s
Did you notice that the HPA is not working... why? 🤔
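A hint: `<unknown>` targets usually mean the resource metrics API has no backing provider (for example, metrics-server is not installed). You could check with:

```shell
# The APIService that metrics-server registers; NotFound or Available=False is a red flag
kubectl get apiservice v1beta1.metrics.k8s.io

# Should return per-pod CPU/memory once a metrics provider is running
kubectl top pods
```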
After you fix the HPA issue, it should show as follows.
% kubectl get hpa nginx-hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx-deployment 2%/80% 2 10 2 2m7s
% kubectl scale --replicas 12 deployment nginx-deployment
deployment.apps/nginx-deployment scaled
Why is the Pod count unable to reach the desired count of `12`, but instead quickly scales back down to `2`... why is that? 🤔
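A hint: the HPA controller continuously reconciles the Deployment's replica count toward its own desired value, so a manual `kubectl scale` is quickly reverted. The HPA's events tell the story:

```shell
# Look at the Events section for the controller's "New size: ..." scaling decisions
kubectl describe hpa nginx-hpa
```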
% kubectl delete hpa nginx-hpa
horizontalpodautoscaler.autoscaling "nginx-hpa" deleted
% kubectl scale --replicas 40 deployment nginx-deployment
deployment.apps/nginx-deployment scaled
% kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 25/40 40 25 11m <-------- stuck at "25/40" ...why?
% kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-598bb489bf-c55jl 1/1 Running 0 6m15s
nginx-deployment-598bb489bf-6tqb8 1/1 Running 0 13m
nginx-deployment-598bb489bf-d7pcg 0/1 Pending 0 6m15s # <-------- Pending
nginx-deployment-598bb489bf-fw52n 0/1 Pending 0 6m15s # <-------- Pending
nginx-deployment-598bb489bf-x86dd 0/1 Pending 0 6m15s # <-------- Pending
... (omitted)
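A hint: describe one of the `Pending` pods and look at its events. With only two nodes, the scheduler commonly reports `FailedScheduling` with reasons like `Insufficient cpu` or `Too many pods` (each EC2 instance type caps the number of pods per node):

```shell
# Replace the pod name with one of your Pending pods
kubectl describe pod nginx-deployment-598bb489bf-d7pcg | tail -n 20
```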
Serving with `HTTP` is clearly unsafe; how can we make it safe with `HTTPS`?
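One common approach (assuming the AWS Load Balancer Controller provisions the ALB) is to add HTTPS listener annotations and an ACM certificate to the Ingress metadata. The certificate ARN below is a placeholder:

```yaml
metadata:
  name: nginx-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Listen on HTTPS 443 (keeping HTTP 80 so it can be redirected)
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    # ACM certificate for your domain (placeholder ARN)
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE
```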
If you can provision an ALB, then you should be able to create an NLB as well. But how...? 🤔
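A sketch of what an NLB-backed Service might look like with the AWS Load Balancer Controller (the Service name and selector are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nlb
  annotations:
    # Ask the AWS Load Balancer Controller (not the legacy in-tree controller) for an NLB
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```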
Terminate all resources that we created earlier.
% kubectl delete -f ./examples/simple/ --ignore-not-found
Terminate the EKS cluster
% eksctl delete cluster -f ./cluster-config/cluster-minimal.yaml
OPTIONAL: Clean up the IAM User/Role/Policy and Identity Provider (IdP) with care.
There's another repository with commonly used addon installation scripts: