Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Tell us about your request
I would like EKS to accept an additional optional parameter during cluster creation that specifies the ARN of the IAM role of the worker nodes I will be joining to the cluster.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
It's a fairly ordinary problem: after standing up the control plane in EKS, I need to join worker nodes to the cluster. Adding a field to the API that allows me to specify the worker node role ARN would let me remove some boilerplate from my Terraform manifests (see below).
Are you currently working around this issue?
(NOTE: I am demoing via Terraform but this is not a Terraform-specific issue.)
My current workaround via Terraform is to run a series of local commands:
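A minimal sketch of that workaround, assuming a null_resource with a local-exec provisioner; the resource name, kubeconfig path, and aws-auth-cm.yaml manifest are placeholders:

```hcl
# Sketch only: names and paths are illustrative.
resource "null_resource" "join_worker_nodes" {
  # Run only once the control plane exists.
  depends_on = [aws_eks_cluster.cluster]

  provisioner "local-exec" {
    command = <<-EOT
      # Write a kubeconfig for the new cluster, then apply the aws-auth
      # ConfigMap that maps the worker node role into the cluster.
      aws eks update-kubeconfig --name ${aws_eks_cluster.cluster.name} --kubeconfig ./kubeconfig
      kubectl --kubeconfig ./kubeconfig apply -f ./aws-auth-cm.yaml
    EOT
  }
}
```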
This requires that aws and kubectl also be available in the place where I am running Terraform. Note that I can't manage this using Terraform's native Kubernetes resources, because the Kubernetes provider needs to be bootstrapped using an aws_eks_cluster_auth resource, which is not currently possible to do in a single apply cycle in Terraform (I would need at least two apply cycles: the first would write the kubeconfig out to disk, and the second would read that kubeconfig to bootstrap the Kubernetes provider).
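For reference, the bootstrap in question looks roughly like this (resource names are illustrative); the provider cannot be configured until the cluster and its auth token exist:

```hcl
# The data source depends on the cluster, and the provider depends on
# the data source, which is what forces the two-cycle workaround.
data "aws_eks_cluster_auth" "cluster" {
  name = aws_eks_cluster.cluster.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```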
If EKS provided a native way to specify what I am denoting as aws_iam_role.eks-worker-node-instance-role.arn in my Terraform manifest, I could drop all of this and materialize the control plane using a structure like:
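A minimal sketch of the shape this might take; worker_node_role_arn is hypothetical (no such argument exists today), and var.subnet_ids stands in for whatever networking the manifest already defines:

```hcl
resource "aws_eks_cluster" "cluster" {
  name     = "my-cluster"
  role_arn = aws_iam_role.eks-cluster-role.arn

  # Hypothetical argument illustrating the feature request: EKS would
  # register this role in aws-auth on our behalf at creation time.
  worker_node_role_arn = aws_iam_role.eks-worker-node-instance-role.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}
```

With something like this in place, the null_resource/local-exec machinery above could be deleted entirely.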
I would also like the ability to specify any roles/users to add to the mapRoles/mapUsers sections of the aws-auth ConfigMap during cluster creation.
This would make it much easier to bootstrap any scripts (Terraform, etc.) without making them heavily reliant on being run by the exact role/user that originally created the EKS cluster.
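For comparison, a sketch of how that section has to be managed from inside the cluster today via the Kubernetes provider; the user ARN is a placeholder:

```hcl
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([{
      rolearn  = aws_iam_role.eks-worker-node-instance-role.arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }])
    mapUsers = yamlencode([{
      userarn  = "arn:aws:iam::111122223333:user/admin" # placeholder account/user
      username = "admin"
      groups   = ["system:masters"]
    }])
  }
}
```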
This should be extended to moving auth map management in general to the AWS API, rather than requiring it to be created via Kubernetes. Something like the following:
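A sketch of what that might look like as a Terraform resource; aws_eks_auth_map is hypothetical and does not exist:

```hcl
# Hypothetical resource: nothing like aws_eks_auth_map exists today.
# It illustrates managing the auth map through the AWS API alone.
resource "aws_eks_auth_map" "auth" {
  cluster_name = aws_eks_cluster.cluster.name

  map_role {
    role_arn = aws_iam_role.eks-worker-node-instance-role.arn
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  }

  map_user {
    user_arn = "arn:aws:iam::111122223333:user/admin" # placeholder
    username = "admin"
    groups   = ["system:masters"]
  }
}
```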
Another use case is avoiding race conditions when creating resources via Terraform. If you create Fargate Profiles before creating the auth map, Terraform will be unable to manage the auth map: EKS creates and populates aws-auth itself when a Fargate Profile is added, so Terraform's subsequent attempt to create the ConfigMap fails. This extends a bit deeper and causes issues with the recommended pattern of managing EKS by keeping the cluster in a separate Terraform state.