[EKS] [request]: add parameter at cluster creation for worker IAM role #727

Closed
erhudy opened this issue Jan 27, 2020 · 3 comments

Labels: EKS (Amazon Elastic Kubernetes Service), Proposed (Community submitted issue)

Comments

erhudy commented Jan 27, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
I would like EKS to accept an additional, optional parameter during cluster creation that specifies the ARN of the IAM role of the worker nodes I will be joining to the cluster.

Which service(s) is this request for? EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
It's a fairly ordinary problem: after standing up the control plane in EKS, I need to join worker nodes to the cluster. Adding a field to the API that lets me specify the worker node role ARN would let me remove some boilerplate from my Terraform manifests (see below).

Are you currently working around this issue?
(NOTE: I am demoing via Terraform but this is not a Terraform-specific issue.)

My current workaround in Terraform is to render the aws-auth ConfigMap to a file and apply it through a series of local-exec provisioners:

# Render the aws-auth ConfigMap manifest, mapping the worker node instance
# role into the system:bootstrappers and system:nodes groups.
locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.eks-worker-node-instance-role.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}

resource "local_file" "write-aws-auth" {
  content  = local.config_map_aws_auth
  filename = var.aws_auth_cm_filename
}

resource "null_resource" "set-up-kubeconfig" {
  depends_on = [aws_eks_cluster.eks]
  triggers = {
    upstream_id = local_file.write-aws-auth.id
  }
  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --kubeconfig ${var.aws_kubeconfig_filename} --name ${aws_eks_cluster.eks.name}"
  }
}

resource "null_resource" "add-aws-auth" {
  depends_on = [local_file.write-aws-auth, null_resource.set-up-kubeconfig]
  triggers = {
    upstream_id = aws_eks_cluster.eks.id
  }
  provisioner "local-exec" {
    command = "kubectl --request-timeout=300s --kubeconfig ${var.aws_kubeconfig_filename} apply --timeout=300s -f ${local_file.write-aws-auth.filename}"
  }
}

This requires that aws and kubectl also be available wherever I am running Terraform. Note that I can't manage this with Terraform's native Kubernetes resources, because the Kubernetes provider needs to be bootstrapped using an aws_eks_cluster_auth data source, which is not currently possible in a single apply cycle in Terraform: I would need at least two apply cycles, the first writing the kubeconfig to disk and the second reading that kubeconfig to bootstrap the Kubernetes provider.
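
For reference, a sketch of the native-resource approach described above (the data source and provider arguments are the standard aws_eks_cluster_auth / kubernetes provider ones; resource names are illustrative). The sticking point is that the kubernetes provider is configured from attributes of a cluster created in the same configuration:

data "aws_eks_cluster_auth" "eks" {
  name = aws_eks_cluster.eks.name
}

# The provider is bootstrapped from the cluster created above, which is what
# forces the extra apply cycle described in the preceding paragraph.
provider "kubernetes" {
  host                   = aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
}

# The same aws-auth content as above, but managed as a native resource.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<-MAPROLES
      - rolearn: ${aws_iam_role.eks-worker-node-instance-role.arn}
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
    MAPROLES
  }
}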

If EKS provided a native way to specify what I am denoting as aws_iam_role.eks-worker-node-instance-role.arn in my Terraform manifest, I could drop all of this and materialize the control plane with a structure like:

resource "aws_eks_cluster" "eks" {
  name     = var.cluster-name
  role_arn = aws_iam_role.eks-cluster-iam-role.arn
  node_role_arn = aws_iam_role.eks-worker-node-instance-role.arn
  vpc_config {
    security_group_ids = [aws_security_group.nodegroup-sg.id]
    subnet_ids         = concat(aws_subnet.eks-private-subnets.*.id, aws_subnet.eks-public-subnets.*.id)
  }
}
erhudy added the Proposed (Community submitted issue) label on Jan 27, 2020
mikestef9 added the EKS (Amazon Elastic Kubernetes Service) label on Apr 9, 2020

ddollar commented May 5, 2020

I would also like the ability to specify any roles/users to add to the mapRoles/mapUsers section of aws-auth during cluster creation.

This would make it much easier to bootstrap any scripts (Terraform, etc.) without making them heavily reliant on being run by the exact role/user that originally created the EKS cluster.
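
For reference, the kind of aws-auth content being asked for at creation time, alongside the worker node entry: an extra administrative role under mapRoles plus a mapUsers section (the cluster-admins role and the user ARN below are placeholders):

locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.eks-worker-node-instance-role.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # Placeholder: an operator/CI role that should get cluster-admin access
    - rolearn: ${aws_iam_role.cluster-admins.arn}
      username: cluster-admin
      groups:
        - system:masters
  mapUsers: |
    # Placeholder: an individual IAM user
    - userarn: arn:aws:iam::111122223333:user/example-admin
      username: example-admin
      groups:
        - system:masters
CONFIGMAPAWSAUTH
}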


nwsparks commented Oct 6, 2021

This should be extended to moving aws-auth management in general into the AWS API, rather than requiring the ConfigMap to be created via Kubernetes. Something like the following:

resource "aws_eks_cluster" "eks" {
  name     = var.cluster-name
  role_arn = aws_iam_role.eks-cluster-iam-role.arn
  
  auth_map_role_arns = [aws_iam_role.eks-worker-node-instance-role.arn]
  auth_map_user_arns = [""]
  
  vpc_config {
    security_group_ids = [aws_security_group.nodegroup-sg.id]
    subnet_ids         = concat(aws_subnet.eks-private-subnets.*.id, aws_subnet.eks-public-subnets.*.id)
  }
}

Another use case is avoiding race conditions when creating resources via Terraform. If you create the Fargate profiles before creating the auth map, Terraform will be unable to manage the auth map. This goes a bit deeper and also causes issues with the recommended pattern of managing EKS by keeping it in a separate Terraform state.
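
A sketch of the ordering workaround for the Fargate race described above, assuming the aws-auth ConfigMap is managed in Terraform as kubernetes_config_map.aws_auth (as in the earlier sketch) and that a Fargate pod execution role exists; both names are illustrative:

resource "aws_eks_fargate_profile" "default" {
  cluster_name           = aws_eks_cluster.eks.name
  fargate_profile_name   = "default"
  pod_execution_role_arn = aws_iam_role.fargate-pod-execution-role.arn
  subnet_ids             = aws_subnet.eks-private-subnets.*.id

  selector {
    namespace = "default"
  }

  # Create the aws-auth ConfigMap first; if the profile is created before it,
  # EKS writes its own aws-auth entries and Terraform can no longer create
  # or manage the ConfigMap.
  depends_on = [kubernetes_config_map.aws_auth]
}

Managing the auth map through the AWS API, as proposed above, would remove the need for this cross-provider ordering entirely.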

mikestef9 (Contributor) commented

Addressed with #185

Project status: Shipped