From 90a2b0eeccf86ba4a34a9f8f584ba25874238495 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Wed, 8 Nov 2023 13:47:32 +0530
Subject: [PATCH 01/10] Added considerations

---
 EKS101/what-is-eks.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 792745f6..236a7e40 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -52,6 +52,10 @@ kubectl get nodes
 
 and if you can see the 2 nodes, then you are all set.
 
+## Additional considerations
+
+Now that you have the entire cluster running on AWS, there are some things you may want to tweak to your liking. The first is the security group. While eksctl creates a default security group that has all the permissions needed to run your EKS cluster, it's best if you go back in and take another look at it. First, ensure that your inbound rules do not allow 0.0.0.0/0, which would allow all external IPs to connect to your EKS ports. Instead, only allow the IPs that you want to access your cluster from. You can do this by specifying the proper CIDR ranges and their associated ports. On the other hand, for outbound rules, allowing 0.0.0.0/0 is fine since this allows your cluster to communicate with any resource outside your network.
+
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From 53638aa8b498b79ed6bc7c14897fffd24d4b2876 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Thu, 9 Nov 2023 13:34:08 +0530
Subject: [PATCH 02/10] Adding considerations

---
 EKS101/what-is-eks.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 236a7e40..a9dce918 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -56,6 +56,8 @@ and if you can see the 2 nodes, then you are all set.
 
 Now that you have the entire cluster running on AWS, there are some things you may want to tweak to your liking. The first is the security group. While eksctl creates a default security group that has all the permissions needed to run your EKS cluster, it's best if you go back in and take another look at it. First, ensure that your inbound rules do not allow 0.0.0.0/0, which would allow all external IPs to connect to your EKS ports. Instead, only allow the IPs that you want to access your cluster from. You can do this by specifying the proper CIDR ranges and their associated ports. On the other hand, for outbound rules, allowing 0.0.0.0/0 is fine since this allows your cluster to communicate with any resource outside your network.
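If you prefer to make this change from the CLI instead of the console, it could look something like the sketch below. The security group ID, port, and CIDR range are placeholders, so substitute the values from your own cluster and network:

```sh
# Remove the wide-open inbound rule (if one exists) from the cluster security group.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0

# Allow inbound traffic on the same port only from a CIDR range you trust.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24
```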
+The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version.
+
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From a149a5b96156ffcb2bcc9c52c540e154581c771c Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Fri, 10 Nov 2023 11:54:34 +0530
Subject: [PATCH 03/10] Additional considerations

---
 EKS101/what-is-eks.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index a9dce918..cfcf2e5c 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -56,7 +56,7 @@ and if you can see the 2 nodes, then you are all set.
 
 Now that you have the entire cluster running on AWS, there are some things you may want to tweak to your liking. The first is the security group. While eksctl creates a default security group that has all the permissions needed to run your EKS cluster, it's best if you go back in and take another look at it. First, ensure that your inbound rules do not allow 0.0.0.0/0, which would allow all external IPs to connect to your EKS ports. Instead, only allow the IPs that you want to access your cluster from. You can do this by specifying the proper CIDR ranges and their associated ports. On the other hand, for outbound rules, allowing 0.0.0.0/0 is fine since this allows your cluster to communicate with any resource outside your network.
 
-The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version.
+The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version. On the topic of updating, you will also notice an AMI version listed for each node group. Since you created this cluster recently, you will have the latest AMI version. However, AMIs get updated around twice a month, and while there won't be any major issues if you don't keep your AMIs updated, it is good to update as frequently as possible. Unlike updating the Kubernetes version, AMI updates are relatively safe since they only update the OS to include the latest packages specified by the AWS team. The update can be performed either as a rolling update or a forced update. A rolling update creates a new node with the new AMI version and drains the pods from the old node onto the new one before the old node is deleted. A forced update immediately destroys the old node and starts up a new node. The advantage of this method is that it is much faster and will always complete, whereas a rolling update takes much longer and may fail to finish if any pods fail to drain.
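If you would rather script the AMI update than click through the console, a sketch of the equivalent CLI calls is shown below. The cluster and node group names are placeholders; without `--force` the command behaves like the rolling update described above, and with it the nodes are replaced even if some pods fail to drain:

```sh
# Rolling update: new nodes are created and pods are drained before old nodes are removed.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup

# Forced update: faster, but replaces nodes even if some pods refuse to drain.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --force
```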
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From fdcf2d7ea7ef36c317d6cc257b0e45fdab8483c7 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Sun, 12 Nov 2023 12:37:48 +0530
Subject: [PATCH 04/10] Cost tagging

---
 EKS101/what-is-eks.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index cfcf2e5c..7a8035c7 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -56,7 +56,11 @@ and if you can see the 2 nodes, then you are all set.
 
 Now that you have the entire cluster running on AWS, there are some things you may want to tweak to your liking. The first is the security group. While eksctl creates a default security group that has all the permissions needed to run your EKS cluster, it's best if you go back in and take another look at it. First, ensure that your inbound rules do not allow 0.0.0.0/0, which would allow all external IPs to connect to your EKS ports. Instead, only allow the IPs that you want to access your cluster from. You can do this by specifying the proper CIDR ranges and their associated ports. On the other hand, for outbound rules, allowing 0.0.0.0/0 is fine since this allows your cluster to communicate with any resource outside your network.
 
-The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version. On the topic of updating, you will also notice an AMI version listed for each node group. Since you created this cluster recently, you will have the latest AMI version. However, AMIs get updated around twice a month, and while there won't be any major issues if you don't keep your AMIs updated, it is good to update as frequently as possible. Unlike updating the Kubernetes version, AMI updates are relatively safe since they only update the OS to include the latest packages specified by the AWS team. The update can be performed either as a rolling update or a forced update. A rolling update creates a new node with the new AMI version and drains the pods from the old node onto the new one before the old node is deleted. A forced update immediately destroys the old node and starts up a new node. The advantage of this method is that it is much faster and will always complete, whereas a rolling update takes much longer and may fail to finish if any pods fail to drain.
+The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version.
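For reference, the same upgrade can also be done with eksctl, one minor version at a time. This is only a sketch with placeholder names, and you should check the release notes of every version you pass through before running it:

```sh
# Upgrade the control plane by one minor version (for example, 1.24 -> 1.25).
eksctl upgrade cluster --name my-cluster --version 1.25 --approve

# Then bring the node group up to the same Kubernetes version.
eksctl upgrade nodegroup --cluster my-cluster --name my-nodegroup --kubernetes-version 1.25
```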
+
+On the topic of updating, you will also notice an AMI version listed for each node group. Since you created this cluster recently, you will have the latest AMI version. However, AMIs get updated around twice a month, and while there won't be any major issues if you don't keep your AMIs updated, it is good to update as frequently as possible. Unlike updating the Kubernetes version, AMI updates are relatively safe since they only update the OS to include the latest packages specified by the AWS team. The update can be performed either as a rolling update or a forced update. A rolling update creates a new node with the new AMI version and drains the pods from the old node onto the new one before the old node is deleted. A forced update immediately destroys the old node and starts up a new node. The advantage of this method is that it is much faster and will always complete, whereas a rolling update takes much longer and may fail to finish if any pods fail to drain.
+
+Another thing to consider is cost tagging. In a large organization, you would have multiple AWS resources that contribute to a large bill at the end of the month. Usually, the teams involved in costing want to know exactly where those costs come from. If you were dealing with a resource such as an EC2 instance, you would not have to look deeply into this: you could just go into Cost Explorer, filter by service, and ask for the cost of the EC2 instances, which would give you an exact figure for how much you spend on those resources. However, this becomes much more complicated with an EKS cluster. Not only do you have EC2 instances running in EKS clusters, but you are also paying for the control plane. You also pay for EC2 resources such as load balancers and data transfer, along with a host of other things. To fully capture the total cost of your EKS cluster, you must use [cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html).
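As a minimal sketch of how such a tag could be attached from the CLI (the account ID, region, resource names, and the `team=platform` tag are all placeholders, and the tag still has to be activated as a cost allocation tag in the Billing console before it shows up in Cost Explorer):

```sh
# Tag the EKS cluster itself.
aws eks tag-resource \
  --resource-arn arn:aws:eks:us-east-1:111122223333:cluster/my-cluster \
  --tags team=platform

# Tag a managed node group the same way (use `aws eks describe-nodegroup` to find its ARN).
aws eks tag-resource \
  --resource-arn arn:aws:eks:us-east-1:111122223333:nodegroup/my-cluster/my-nodegroup/1a2b3c4d-5678-90ab-cdef-111122223333 \
  --tags team=platform
```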
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From 67a0ad7c2cdf0898db81a49d4935a174a259cc4c Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Mon, 13 Nov 2023 11:19:09 +0530
Subject: [PATCH 05/10] Additional considerations

---
 EKS101/what-is-eks.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 7a8035c7..8913b279 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -62,6 +62,8 @@ Another thing to consider is cost tagging. In a large organization, you would ha
 
+First, go to your EKS cluster on the AWS console and add a tag with a value. Next, head over to each of your node groups and add the same key-value pair to them. You can also use the same tags on any EC2 instances that have been spun up by the node group, but if your cluster scales down and comes back up at a later point, it will create brand-new EC2 instances that won't have the tag on them. Therefore, it is better to head over to the Auto Scaling groups section of your AWS console, select the group that corresponds to your EKS cluster, and add the tags there. Also, make sure you select the option to have the tags automatically added to any new EC2 instances that get spun up by the ASG.
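That console option corresponds to the `PropagateAtLaunch` setting on the Auto Scaling group's tags. A sketch of the CLI equivalent, with the ASG name and tag as placeholders, looks like this:

```sh
# Tag the Auto Scaling group behind the node group and propagate the tag
# to every new EC2 instance it launches.
aws autoscaling create-or-update-tags \
  --tags ResourceId=eks-my-nodegroup-asg,ResourceType=auto-scaling-group,Key=team,Value=platform,PropagateAtLaunch=true
```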
+
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From e0d357ed8bc435c03ffe8b6188fae0fca49c9e5e Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Tue, 14 Nov 2023 12:45:38 +0530
Subject: [PATCH 06/10] Additional considerations

---
 EKS101/what-is-eks.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 8913b279..6b4e0e52 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -64,6 +64,8 @@ Another thing to consider is cost tagging. In a large organization, you would ha
 First, go to your EKS cluster on the AWS console and add a tag with a value. Next, head over to each of your node groups and add the same key-value pair to them. You can also use the same tags on any EC2 instances that have been spun up by the node group, but if your cluster scales down and comes back up at a later point, it will create brand-new EC2 instances that won't have the tag on them. Therefore, it is better to head over to the Auto Scaling groups section of your AWS console, select the group that corresponds to your EKS cluster, and add the tags there. Also, make sure you select the option to have the tags automatically added to any new EC2 instances that get spun up by the ASG.
 
+Next, take a look at the IAM role used by the cluster in the overview section. eksctl will already have given the IAM role an appropriate level of permissions, so there is not much you would want to remove here. However, if you want to allow your cluster to access any additional resources, this is the place to add those permissions. The networking section shows you information about the network your cluster is in, including the IPv4 range, subnets, and security group. You can also manage access to the cluster endpoint from here.
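The cluster endpoint settings in particular can also be changed from the CLI. Here is a sketch with a placeholder cluster name and CIDR range; be careful not to lock yourself out of the cluster when restricting public access:

```sh
# Restrict the public API endpoint to a trusted CIDR range instead of 0.0.0.0/0.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config publicAccessCidrs=203.0.113.0/24

# Optionally, also enable the private endpoint so traffic from inside the VPC stays internal.
# Cluster updates run one at a time, so wait for the previous update to finish first.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPrivateAccess=true
```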
+
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From 6dde1b367a8d6814fa36920c0fd127176f0362dc Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Wed, 15 Nov 2023 13:03:16 +0530
Subject: [PATCH 07/10] Additional considerations

---
 EKS101/what-is-eks.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 6b4e0e52..2679c0a0 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -56,7 +56,7 @@ and if you can see the 2 nodes, then you are all set.
 
 Now that you have the entire cluster running on AWS, there are some things you may want to tweak to your liking. The first is the security group. While eksctl creates a default security group that has all the permissions needed to run your EKS cluster, it's best if you go back in and take another look at it. First, ensure that your inbound rules do not allow 0.0.0.0/0, which would allow all external IPs to connect to your EKS ports. Instead, only allow the IPs that you want to access your cluster from. You can do this by specifying the proper CIDR ranges and their associated ports. On the other hand, for outbound rules, allowing 0.0.0.0/0 is fine since this allows your cluster to communicate with any resource outside your network.
 
-The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version.
+The next thing you can look at is the node groups. Since you specified `t2.micro` in the above command, your node groups will be created with that machine type. You can use the AWS console to add node groups with specific taints so that only pods that tolerate those taints get scheduled on these nodes. You can read more about taints and tolerations in the [Scheduler101 section](../Scheduler101/Nodes_taints_and_tolerations.md). You can also check the Kubernetes version that is used in your cluster from here. If you followed the above tutorial, you will have a cluster with Kubernetes version 1.24. You can update this version from the console. However, note that a lot of things vary from version to version, and you might end up breaking something in your existing application if you blindly update your Kubernetes version. That said, updating the Kubernetes version is certainly important, as AWS ends standard support for older Kubernetes versions (after a generous grace period). After this, the version enters extended support for another year, during which support is subject to additional fees.
 
 On the topic of updating, you will also notice an AMI version listed for each node group. Since you created this cluster recently, you will have the latest AMI version. However, AMIs get updated around twice a month, and while there won't be any major issues if you don't keep your AMIs updated, it is good to update as frequently as possible. Unlike updating the Kubernetes version, AMI updates are relatively safe since they only update the OS to include the latest packages specified by the AWS team. The update can be performed either as a rolling update or a forced update. A rolling update creates a new node with the new AMI version and drains the pods from the old node onto the new one before the old node is deleted. A forced update immediately destroys the old node and starts up a new node. The advantage of this method is that it is much faster and will always complete, whereas a rolling update takes much longer and may fail to finish if any pods fail to drain.
 
@@ -66,6 +66,8 @@ First, go to your EKS cluster on the AWS console and add a tag with a value. Nex
 Next, take a look at the IAM role used by the cluster in the overview section. eksctl will already have given the IAM role an appropriate level of permissions, so there is not much you would want to remove here. However, if you want to allow your cluster to access any additional resources, this is the place to add those permissions. The networking section shows you information about the network your cluster is in, including the IPv4 range, subnets, and security group. You can also manage access to the cluster endpoint from here.
 
+The add-ons section allows you to get add-ons for your EKS cluster from the AWS Marketplace, and the observability section is where you would enable CloudWatch Container Insights to get metrics and reports on your containers. Of course, if you want to go beyond what AWS provides, you can go for tools such as Prometheus, which give you finer-grained control as well as better cross-platform integration.
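Add-ons can be managed from the CLI as well. The commands below are only a sketch: the cluster name is a placeholder, `amazon-cloudwatch-observability` is just one example of an add-on name, and `aws eks describe-addon-versions` will list what is actually available for your cluster version:

```sh
# List the add-ons currently installed on the cluster.
aws eks list-addons --cluster-name my-cluster

# Install an add-on, for example the CloudWatch observability agent.
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name amazon-cloudwatch-observability
```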
+
 ## Cleaning up
 
 Now, remember that all of the above things are AWS resources, and as such, you will be charged if you leave them running without deleting them after you are done. So this means you have a bunch of stuff (VPCs, cluster, EC2 instances) that you have to get rid of, which would have been a pain if you had to do it manually. However, since eksctl created all these resources for you, it can also get rid of all these resources for you, in the same manner, using a single command:

From 5c837b895cba54c08d758440dcf7cd4c21b83dbf Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Thu, 16 Nov 2023 13:30:03 +0530
Subject: [PATCH 08/10] Nodegroups

---
 EKS101/what-is-eks.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 2679c0a0..3f4fbe6d 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -66,7 +66,7 @@ First, go to your EKS cluster on the AWS console and add a tag with a value. Nex
 Next, take a look at the IAM role used by the cluster in the overview section. eksctl will already have given the IAM role an appropriate level of permissions, so there is not much you would want to remove here. However, if you want to allow your cluster to access any additional resources, this is the place to add those permissions. The networking section shows you information about the network your cluster is in, including the IPv4 range, subnets, and security group. You can also manage access to the cluster endpoint from here.
 
-The add-ons section allows you to get add-ons for your EKS cluster from the AWS Marketplace, and the observability section is where you would enable CloudWatch Container Insights to get metrics and reports on your containers. Of course, if you want to go beyond what AWS provides, you can go for tools such as Prometheus, which give you finer-grained control as well as better cross-platform integration.
+The add-ons section allows you to get add-ons for your EKS cluster from the AWS Marketplace, and the observability section is where you would enable CloudWatch Container Insights to get metrics and reports on your containers. Of course, if you want to go beyond what AWS provides, you can go for tools such as Prometheus, which give you finer-grained control as well as better cross-platform integration. With that, we have covered pretty much every additional thing you can do with your EKS cluster.
 
 ## Cleaning up
 
@@ -98,7 +98,11 @@ eksctl create cluster --fargate
 
 One thing to note is that running your containers on Fargate means that you will not have any control over the infrastructure they run on, since all of that is managed by AWS. So if you need the environment your containers run in to be specific, EC2 instances are still your best option, and you might want to start considering node groups.
 
-Your Kubernetes cluster consists of nodes, and node groups, as the name implies, group the nodes together. You can group several nodes into a single group in a way that makes logical sense, and have the node group automatically manage itself. So you will still be using EC2 instances, but the node group will create, provision, and delete the instances as needed. However, some features that Fargate offers, such as scaling, will no longer be available to you. So we can consider it a good middle ground between manageability and flexibility.
+
+## Node groups
+
+Your Kubernetes cluster consists of nodes, and node groups, as the name implies, group the nodes together. You can group several nodes into a single group in a way that makes logical sense and have the node group automatically manage itself. So you will still be using EC2 instances, but the node group will create, provision, and delete the instances as needed. In short, it handles scaling as required by the resources in your cluster. This is especially important if your cluster doesn't have a steady workload throughout the day. For instance, if the amount of resources used at the peak of the day is around 3 or 4 times the amount used during off-peak hours, you can create a node group with a minimum of 1 node and a maximum of 4 nodes, and EKS will automatically scale between those limits depending on load.
+
+However, some features that Fargate offers, such as scaling, will no longer be available to you. So we can consider it a good middle ground between manageability and flexibility.
 
 As one last thing before we finish, I would like to point out that another possibility is to have both Fargate and EC2 instances working for the same cluster. That is, you can create EC2 instances for the nodes that you need fine-grained control over, while allowing Fargate to handle any other infrastructure that just needs to run, no matter how or where.

From f46a083193018b7a5d32f7e6103db7a608b43e08 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Fri, 17 Nov 2023 12:27:03 +0530
Subject: [PATCH 09/10] Nodegroups

---
 EKS101/what-is-eks.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 3f4fbe6d..42c42aa0 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -100,7 +100,11 @@ One thing to note is that running your containers on Fargate means that you will
 ## Node groups
 
-Your Kubernetes cluster consists of nodes, and node groups, as the name implies, group the nodes together. You can group several nodes into a single group in a way that makes logical sense and have the node group automatically manage itself. So you will still be using EC2 instances, but the node group will create, provision, and delete the instances as needed. In short, it handles scaling as required by the resources in your cluster. This is especially important if your cluster doesn't have a steady workload throughout the day. For instance, if the amount of resources used at the peak of the day is around 3 or 4 times the amount used during off-peak hours, you can create a node group with a minimum of 1 node and a maximum of 4 nodes, and EKS will automatically scale between those limits depending on load.
+Your Kubernetes cluster consists of nodes, and node groups, as the name implies, group the nodes together. You can group several nodes into a single group in a way that makes logical sense and have the node group automatically manage itself. So you will still be using EC2 instances, but the node group will create, provision, and delete the instances as needed. In short, it handles scaling as required by the resources in your cluster. This is especially important if your cluster doesn't have a steady workload throughout the day. For instance, if the amount of resources used at the peak of the day is around 3 or 4 times the amount used during off-peak hours, you can create a node group with a minimum of 1 node and a maximum of 4 nodes, and EKS will automatically scale between those limits depending on load. This helps you save costs without sacrificing performance. However, you will notice that EKS already does all this. By default, you already have a node group up and running, so why would you want multiple groups?
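Before getting to that, here is a sketch of what creating such a node group could look like with eksctl. The cluster name, node group name, and instance type are placeholders:

```sh
# Create a managed node group that starts with 2 nodes and can scale between 1 and 4.
eksctl create nodegroup \
  --cluster my-cluster \
  --name scaling-nodegroup \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 4
```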
+
+This is where node taints and tolerations come in. You probably know what taints and tolerations are, and how nodes can be tainted so that only pods that tolerate those taints get scheduled on them. The same concept applies here, except now you get to apply taints to entire node groups. Once a node group has a taint applied, any nodes that are created from this node group will have that taint applied to them.
+
+Additionally, this is a vital part of more complex autoscaling (for example, if you were using an autoscaler like [KEDA](../Keda101/what-is-keda.md)).
+
 However, some features that Fargate offers, such as scaling, will no longer be available to you. So we can consider it a good middle ground between manageability and flexibility.

From 2da44efc6e37f6f6fc73ccea84d8b3c844dabdeb Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Mon, 20 Nov 2023 13:23:14 +0530
Subject: [PATCH 10/10] EKS extended finished

---
 EKS101/what-is-eks.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/EKS101/what-is-eks.md b/EKS101/what-is-eks.md
index 42c42aa0..b60e9370 100644
--- a/EKS101/what-is-eks.md
+++ b/EKS101/what-is-eks.md
@@ -102,9 +102,7 @@ One thing to note is that running your containers on Fargate means that you will
 Your Kubernetes cluster consists of nodes, and node groups, as the name implies, group the nodes together. You can group several nodes into a single group in a way that makes logical sense and have the node group automatically manage itself. So you will still be using EC2 instances, but the node group will create, provision, and delete the instances as needed. In short, it handles scaling as required by the resources in your cluster. This is especially important if your cluster doesn't have a steady workload throughout the day. For instance, if the amount of resources used at the peak of the day is around 3 or 4 times the amount used during off-peak hours, you can create a node group with a minimum of 1 node and a maximum of 4 nodes, and EKS will automatically scale between those limits depending on load. This helps you save costs without sacrificing performance. However, you will notice that EKS already does all this. By default, you already have a node group up and running, so why would you want multiple groups?
 
-This is where node taints and tolerations come in. You probably know what taints and tolerations are, and how nodes can be tainted so that only pods that tolerate those taints get scheduled on them. The same concept applies here, except now you get to apply taints to entire node groups. Once a node group has a taint applied, any nodes that are created from this node group will have that taint applied to them.
-
-Additionally, this is a vital part of more complex autoscaling (for example, if you were using an autoscaler like [KEDA](../Keda101/what-is-keda.md)).
+This is where node taints and tolerations come in. You probably know what taints and tolerations are, and how nodes can be tainted so that only pods that tolerate those taints get scheduled on them. The same concept applies here, except now you get to apply taints to entire node groups. Once a node group has a taint applied, any nodes that are created from this node group will have that taint applied to them. This is a vital part of more complex autoscaling (for example, if you were using an autoscaler like [KEDA](../Keda101/what-is-keda.md)). If you are running multiple KEDA-scaled jobs, you would not want to schedule all of the applications on the same node group, since that could lead to resource starvation for some jobs while others use more than their share. To counter this, you could create a node group per application and use taints and tolerations to make sure that the jobs each application starts only get allocated to their designated node group.
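Here is a sketch of how a taint could be applied to an existing managed node group from the CLI. The cluster name, node group name, and taint key and value are placeholders, and the pods for that application would then need a matching toleration in their spec:

```sh
# Taint every node in the node group so that only pods which tolerate
# dedicated=keda-jobs:NoSchedule can be scheduled onto it.
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name keda-jobs-nodegroup \
  --taints 'addOrUpdateTaints=[{key=dedicated,value=keda-jobs,effect=NO_SCHEDULE}]'
```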
 However, some features that Fargate offers, such as scaling, will no longer be available to you. So we can consider it a good middle ground between manageability and flexibility.