aws_eks_cluster: support for adding additional roles into aws-auth configmap #12454
Comments
The lack of support or automation around this issue is pretty crazy to me. It seems like AWS expects you to either share the credentials of the user that creates EKS clusters, or force admins to manually update the configmap after every single cluster creation. We are using Terraform as a repeatable way to deploy a lot of EKS clusters, and this auth issue is definitely a pain point right now.
By the way, we tried to work around this by updating the aws-auth configmap with the Terraform code sketched below, but we get an error that the "aws-auth" configmap already exists. This is because it was indeed already created by the aws_eks_node_group resource (which added the worker node roles to the map).
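The exact snippet from this comment isn't preserved, but a minimal sketch of that style of workaround, assuming a Kubernetes provider already configured against the cluster and a placeholder admin role ARN, would look something like the following. Applying it fails with the "already exists" error described above because aws_eks_node_group has already created the configmap.

```hcl
# Minimal sketch (placeholder role/account values): managing aws-auth directly
# with the Kubernetes provider. This fails if aws-auth was already created,
# e.g. by aws_eks_node_group.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/eks-admins" # placeholder
        username = "eks-admins"
        groups   = ["system:masters"]
      },
    ])
  }
}
```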
@assafcoh I figured it's either created then or whenever someone first authenticates with
Can someone please respond to this thread? Support for this is greatly needed. Thank you.
It might not help much, but this is a good resource: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/aws_auth.tf. That's how the community handles it: basically, they create the aws-auth configmap themselves. Notice this line, https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf#L12, which makes the node group creation depend on the aws-auth configmap. Hope that helps somehow.
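The ordering is the important part of that pattern: create aws-auth yourself (including the worker node role and any extra roles) and make the node groups wait for it. A rough sketch of that idea, assuming a kubernetes_config_map.aws_auth resource like the one sketched above and placeholder cluster/role/subnet references:

```hcl
# Rough sketch of the ordering pattern (placeholder names throughout).
resource "aws_eks_node_group" "example" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "example"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # aws-auth must already exist (with the node role and any extra roles mapped)
  # before this node group is created, so nothing else creates the configmap first.
  depends_on = [kubernetes_config_map.aws_auth]
}
```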
Thanks @zot24, I think you're right here. I'm running only EKS Fargate (so no node groups), but I created the Fargate profile/policy first, and I think that causes a similar outcome :)
Just to let you know, I have opened a ticket which might be kind of related to this, depending on how you view it: it's for having a data source for managed node groups, so we can build that configmap from the roles assigned to the created managed node groups. #13442
Hi folks 👋 Thank you for suggesting this; it would certainly be helpful to a lot of folks.

The EKS API does not appear to support this type of functionality at this time in the CreateCluster or similar API calls. Since the AWS service API doesn't directly support it, and the Terraform AWS Provider's boundaries are typically the AWS Go SDK for those service APIs, this unfortunately leaves us in a position where this is not something we would add to this particular codebase at this time. Terraform modules and other Terraform providers, such as the Terraform Kubernetes Provider, are options in these cases. If the AWS service API does implement this type of functionality in the future, we would be happy to take a fresh look at implementing it! Until then, I'm going to close this issue as there is no action for us to take right now.

That being said, there are a few potential paths forward for everyone here in the meantime: there is the potential configuration solution mentioned above, and the EKS product team's roadmap includes some similar issues which can be subscribed to and upvoted.

Outside those public forums, submitting AWS Support feature requests is generally also a good signal to AWS service teams about missing functionality. Hope this helps!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Description
We have a Jenkins server which assumes an AWS role named "jenkins-role" and creates many EKS clusters in several AWS accounts and regions. By default, Amazon allows only the EKS cluster creator (our jenkins-role) to log in to the cluster and run kubectl commands.
Our teams use aws-iam-authenticator.exe and kubectl to manage the clusters.
We would like to be able to add "cluster specific" roles to the aws-auth configmap, so that we can allow specific users (QA, dev, dev-ops) to assume these "cluster specific" roles in order to log in to a specific cluster with kubectl.
Currently, aws_eks_node_group creates a Kubernetes configmap named "aws-auth" and adds only the worker node role specified in "node_role_arn" to this configmap.
It would be great if you could support one of the following suggestions:
- adding additional IAM roles to the aws-auth configmap under the system:masters group. See suggestion 1 below.
- adding additional role mappings to the aws-auth configmap, like suggestion 2 below.
New or Affected Resource(s)
aws_eks_cluster and aws_eks_node_group
Potential Terraform Configuration
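The configuration block from the original issue isn't preserved here. A purely hypothetical sketch of what the two suggestions might look like follows; the map_additional_iam_roles argument and the additional_auth_role block do not exist in the provider and only illustrate the request, and the ARNs and references are placeholders.

```hcl
# Suggestion 1 (hypothetical): aws_eks_cluster accepts extra IAM role ARNs that
# are mapped into aws-auth under system:masters. "map_additional_iam_roles"
# does not exist today; it only illustrates the request.
resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.cluster.arn # placeholder cluster role

  vpc_config {
    subnet_ids = var.subnet_ids # placeholder subnets
  }

  map_additional_iam_roles = [
    "arn:aws:iam::111122223333:role/qa-role",  # placeholder ARNs
    "arn:aws:iam::111122223333:role/dev-role",
  ]
}

# Suggestion 2 (hypothetical): aws_eks_node_group adds extra mappings to the
# aws-auth configmap it creates, alongside node_role_arn. "additional_auth_role"
# does not exist today either.
resource "aws_eks_node_group" "example" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "example"
  node_role_arn   = aws_iam_role.node.arn # placeholder node role
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  additional_auth_role {
    role_arn = "arn:aws:iam::111122223333:role/dev-ops-role" # placeholder
    username = "dev-ops"
    groups   = ["system:masters"]
  }
}
```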