
aws_eks_cluster: support for adding additional roles into aws-auth configmap #12454

Closed
assafcoh opened this issue Mar 18, 2020 · 9 comments
Labels
enhancement Requests to existing resources that expand the functionality or scope. service/eks Issues and PRs that pertain to the eks service. upstream Addresses functionality related to the cloud provider.


assafcoh commented Mar 18, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

We have a Jenkins server which assumes an AWS role named "jenkins-role" and creates many EKS clusters across several AWS accounts and regions. By default, Amazon allows only the cluster creator (our jenkins-role) to log in to the cluster and run kubectl commands.
Our teams use aws-iam-authenticator.exe and kubectl to manage the clusters.
We would like to be able to add cluster-specific roles to the aws-auth ConfigMap, so that we can allow specific users (QA, dev, DevOps) to assume those roles in order to log in to a specific cluster with kubectl.
Currently, aws_eks_node_group creates a Kubernetes ConfigMap named "aws-auth" and adds to it only the worker node role specified in node_role_arn.
It would be great if you could support one of the following suggestions:

  1. Simple suggestion - accept a single role ARN that will be added to the aws-auth ConfigMap as
    part of the system:masters group. See suggestion 1 below.
  2. Generic suggestion - accept a list of maps that will be added to the aws-auth ConfigMap.
    Something like suggestion 2 below.

New or Affected Resource(s)

aws_eks_cluster and aws_eks_node_group

Potential Terraform Configuration

resource "aws_eks_cluster" "eks_cluster" {
  
  name = "example"
  version = "1.14"
  role_arn = "eample-role"

  ...

  # suggestion 1 (simple)
  aws-auth-system-masters_role_arn = aws_iam_role.example.arn,

  # suggestion 2 (generic)
  aws-auth-roles = [
    {
        role_arn= aws_iam_role.example.arn,
        user-name=kubectl-user
        groups = ["system:masters"]
    },
    {
        role-arn= aws_iam_role.example2.arn,
        username=kubectl-qa
        groups = ["system:masters"]
    }
]
}
@assafcoh assafcoh added the enhancement Requests to existing resources that expand the functionality or scope. label Mar 18, 2020
@ghost ghost added the service/eks Issues and PRs that pertain to the eks service. label Mar 18, 2020
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Mar 18, 2020
@taylorturner

The lack of support or automation around this issue is pretty crazy to me. It seems AWS expects you to either share the credentials of the user creating EKS clusters or force admins to manually update the ConfigMap after every single cluster creation. We are using Terraform as a repeatable way to deploy a lot of EKS clusters, and this auth issue is definitely a pain point right now.

@assafcoh assafcoh changed the title aws_eks_node_group : support for adding additional roles into aws-auth configmap aws_eks_cluster: support for adding additional roles into aws-auth configmap Mar 24, 2020
@assafcoh (Author)

By the way, we tried to work around this by updating the aws-auth ConfigMap as shown in the Terraform code below, but we get an error that the "aws-auth" ConfigMap already exists. This is because it was indeed already created by the aws_eks_node_group resource (which added the worker node roles to the map).

resource "kubernetes_config_map" "aws_auth_configmap" {
  metadata {
    name = "aws-auth"
    namespace = "kube-system"
  }
  data = {
    mapRoles = <<YAML
- rolearn: ${aws_iam_role.eks_kubectl_role.arn}
  username: kubectl-access-user
  groups:
    - system:masters
YAML
  }

  depends_on = ["aws_eks_node_group.this"]
}

@taylorturner

@assafcoh I figured it's either created then or whenever someone first authenticates with aws-iam-authenticator. In any case, by the time Terraform gets into the cluster it would've already been created. What is interesting is that you can still do a kubectl apply -f on the ConfigMap. It would be nice if Terraform provided a null_resource-like type where you could just feed it a Kubernetes manifest.
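
A rough sketch of that idea with today's primitives, using null_resource plus a local-exec provisioner to shell out to kubectl. This is only a workaround sketch, not provider functionality: it assumes kubectl and a valid kubeconfig exist on the machine running Terraform, and the resource name and aws-auth.yaml path are hypothetical.

# Workaround sketch: `kubectl apply -f` succeeds even when the ConfigMap
# already exists, so apply a pre-written aws-auth manifest out-of-band.
resource "null_resource" "patch_aws_auth" {
  # Re-run the provisioner whenever the manifest file changes.
  triggers = {
    manifest_sha1 = sha1(file("${path.module}/aws-auth.yaml"))
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/aws-auth.yaml"
  }

  # Wait until the node group has created the initial ConfigMap.
  depends_on = [aws_eks_node_group.this]
}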

assafcoh (Author) commented May 4, 2020

Can someone please respond to this thread? Support for this is greatly needed. Thank you.

zot24 (Contributor) commented May 20, 2020

It might not help much, but this is a good resource: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/aws_auth.tf. That's how the community handles it: the module creates the aws-auth ConfigMap before the node groups do, so every time you run the plan it will (if needed) recreate the ConfigMap by gathering all the required data and rebuilding the mapRoles field.

Notice this line: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf#L12. It makes the node group creation depend on the aws-auth ConfigMap, which is how they avoid the error you're having with the ConfigMap already existing.
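
In essence, the ordering looks like this. This is a minimal sketch of the pattern, not the module's actual code; the IAM role names and the node group resource here are assumed:

# Create the aws-auth ConfigMap first, listing both the worker node role
# and any extra kubectl roles, so EKS never creates it implicitly.
# (Role names are placeholders for illustration.)
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<YAML
- rolearn: ${aws_iam_role.worker_nodes.arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
- rolearn: ${aws_iam_role.eks_kubectl_role.arn}
  username: kubectl-access-user
  groups:
    - system:masters
YAML
  }
}

resource "aws_eks_node_group" "this" {
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...

  # Force the ConfigMap to exist before the nodes try to register,
  # so the node group never creates aws-auth itself.
  depends_on = [kubernetes_config_map.aws_auth]
}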

Hope that helps somehow

@dalgibbard

Thanks @zot24, I think you're right here. I'm running only EKS Fargate (so no node groups), but I created the Fargate profile/policy first, and I think that causes a similar outcome :)

zot24 (Contributor) commented May 20, 2020

Just to let you know, I have opened a ticket which might be related to this, depending on how you view it: it asks for a data source for managed node groups, so we can build that ConfigMap from the roles assigned to the created managed node groups. #13442

@bflad bflad added upstream Addresses functionality related to the cloud provider. and removed needs-triage Waiting for first response or review from a maintainer. labels Jul 1, 2020
bflad (Contributor) commented Jul 1, 2020

Hi folks 👋 Thank you for suggesting this; it would certainly be helpful to a lot of folks.

The EKS API does not appear to support this type of functionality at this time in the CreateCluster or similar API calls. Since the AWS service API doesn't directly support it and the Terraform AWS Provider's boundaries are typically the AWS Go SDK for those service APIs, this unfortunately leaves us in a position where this is not something we would add to this particular codebase at this time. Terraform modules and other Terraform providers, such as the Terraform Kubernetes Provider, are options in these cases. If the AWS service API does implement this type of functionality in the future, we would be happy to take a fresh look into implementing it! Until then, I'm going to close this issue as there is no action for us to take right now.

That being said, there are a few potential paths forward for everyone here in the meantime. There is the potential configuration solution mentioned above, and the EKS product team's roadmap includes some similar issues which can be subscribed to and upvoted.

Outside those public forums, submitting AWS Support feature requests is generally also a good signal to AWS service teams about missing functionality. Hope this helps!

@bflad bflad closed this as completed Jul 1, 2020
ghost commented Aug 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Aug 2, 2020