
The specified access entry resource is already in use on this cluster #3293

Open

rishabhToshniwal opened this issue Jan 31, 2025 · 2 comments
rishabhToshniwal commented Jan 31, 2025

Description

I am getting the error below when reapplying the Terraform configuration. The role "ms360-SetupRoleRev2023" is the machine IAM role under which I am running `terraform apply`:

Error: creating EKS Access Entry (accessentry-dev-eks:arn:aws:iam::xxxxxxx:role/ms360-SetupRoleRev2023): operation error EKS: CreateAccessEntry, https response error StatusCode: 409, RequestID: dd7be975-9cd5-4166-b126-d92e9be6e395, ResourceInUseException: The specified access entry resource is already in use on this cluster.

The plan shows:

# module.eks.aws_eks_access_entry.this["cluster_creator"] must be replaced
+/- resource "aws_eks_access_entry" "this" {
      ~ access_entry_arn  = "arn:aws:eks:eu-west-1:xxxxxx:access-entry/accessentry-dev-eks/role/xxxxxx/ms360-SetupRoleRev2023/50ca5dee-ce49-6aab-2249-7e664f4276e8" -> (known after apply)
      ~ created_at        = "2025-01-31T10:17:30Z" -> (known after apply)
      ~ id                = "accessentry-dev-eks:arn:aws:iam::xxxxxx:role/ms360-SetupRoleRev2023" -> (known after apply)
      ~ kubernetes_groups = [] -> (known after apply)
      ~ modified_at       = "2025-01-31T10:17:30Z" -> (known after apply)
      ~ principal_arn     = "arn:aws:iam::xxxxxx:role/ms360-SetupRoleRev2023" -> (known after apply) # forces replacement
        tags              = {
            "Creator"          = "Rishabh Toshniwal"
            "Environment_id"   = "12345"
            "Environment_name" = "accessentry-dev"
            "Environment_type" = "dev"
            "Expiration"       = "20.03.2021"
            "Owner"            = "stratosphere"
            "Product"          = "MS360"
            "Version"          = "6.0.0"
        }
      ~ user_name         = "arn:aws:sts::xxxxxx:assumed-role/ms360-SetupRoleRev2023/{{SessionName}}" -> (known after apply)
        # (3 unchanged attributes hidden)
    }
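If I read the plan correctly, the `+/-` marker means Terraform creates the replacement entry before destroying the old one. Since the access entry's id is `<cluster_name>:<principal_arn>` and the principal is unchanged, both entries resolve to the same id:

# The entry being destroyed and its planned replacement share one id:
#   "accessentry-dev-eks:arn:aws:iam::xxxxxxx:role/ms360-SetupRoleRev2023"
# so the CreateAccessEntry call collides with the still-existing entry
# and EKS returns the 409 ResourceInUseException shown above.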
  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:

  • Terraform version:

  • Provider version(s):

Reproduction Code [Required]

locals {
  bootstrap_cluster_creator_admin_permissions = {
    cluster_creator = {
      principal_arn = try(data.aws_iam_session_context.current[0].issuer_arn, "")
      type          = "STANDARD"

      policy_associations = {
        admin = {
          policy_arn = "arn:${local.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }

  # Merge the bootstrap behavior with the entries that users provide
  merged_access_entries = merge(
    { for k, v in local.bootstrap_cluster_creator_admin_permissions : k => v if var.enable_cluster_creator_admin_permissions },
    var.access_entries,
  )

  # Flatten out entries and policy associations so users can specify the policy
  # associations within a single entry
  flattened_access_entries = flatten([
    for entry_key, entry_val in local.merged_access_entries : [
      for pol_key, pol_val in lookup(entry_val, "policy_associations", {}) :
      merge(
        {
          principal_arn = entry_val.principal_arn
          entry_key     = entry_key
          pol_key       = pol_key
        },
        { for k, v in {
          association_policy_arn              = pol_val.policy_arn
          association_access_scope_type       = pol_val.access_scope.type
          association_access_scope_namespaces = lookup(pol_val.access_scope, "namespaces", [])
        } : k => v if !contains(["EC2_LINUX", "EC2_WINDOWS", "FARGATE_LINUX", "HYBRID_LINUX"], lookup(entry_val, "type", "STANDARD")) },
      )
    ]
  ])
}
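For the bootstrap `cluster_creator` entry above (type `STANDARD`, so the policy-association keys pass the `!contains(...)` filter), `local.flattened_access_entries` evaluates to roughly the following. This is only an illustration, with the account ID redacted as elsewhere in this issue:

# Illustrative value of local.flattened_access_entries for the entry above
[
  {
    principal_arn                       = "arn:aws:iam::xxxxxx:role/ms360-SetupRoleRev2023"
    entry_key                           = "cluster_creator"
    pol_key                             = "admin"
    association_policy_arn              = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
    association_access_scope_type       = "cluster"
    association_access_scope_namespaces = []
  }
]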


resource "aws_eks_access_entry" "this" {
  for_each = { for k, v in local.merged_access_entries : k => v if var.create_eks }

  cluster_name      = aws_eks_cluster.this[0].id
  kubernetes_groups = try(each.value.kubernetes_groups, null)
  principal_arn     = each.value.principal_arn
  type              = try(each.value.type, "STANDARD")
  user_name         = try(each.value.user_name, null)

  tags = merge(var.tags, try(each.value.tags, {}))
}

resource "aws_eks_access_policy_association" "this" {
  for_each = { for k, v in local.flattened_access_entries : "${v.entry_key}_${v.pol_key}" => v if var.create_eks }

  access_scope {
    namespaces = try(each.value.association_access_scope_namespaces, [])
    type       = each.value.association_access_scope_type
  }

  cluster_name = aws_eks_cluster.this[0].id

  policy_arn    = each.value.association_policy_arn
  principal_arn = each.value.principal_arn

  depends_on = [
    aws_eks_access_entry.this
  ]
}


Steps to reproduce the behavior:

Expected behavior

Actual behavior

Terminal Output Screenshot(s)

Additional context


rishabhToshniwal commented Feb 25, 2025

Any update on this issue, @bryantbiggs? Since we are using a data source to get the IAM source role, as done here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/1bfc10a5589afadf9c633e0417dbf67316e3e32b/main.tf#L224C2-L239C1, its result is "known after apply" on every run, and hence we see the following in the plan:

+/- resource "aws_eks_access_entry" "this" {
      ~ access_entry_arn  = "arn:aws:eks:eu-west-1:xxx:access-entry/eksaccess-dev-eks/role/xxx/ms360-SetupRoleRev2023/9aca9e37-aaa3-7a72-ff48-2b310911c7f3" -> (known after apply)
      ~ created_at        = "2025-02-25T09:28:04Z" -> (known after apply)
      ~ id                = "eksaccess-dev-eks:arn:aws:iam::xxx:role/ms360-SetupRoleRev2023" -> (known after apply)
      ~ kubernetes_groups = [] -> (known after apply)
      ~ modified_at       = "2025-02-25T09:28:04Z" -> (known after apply)
      ~ principal_arn     = "arn:aws:iam::xxx:role/ms360-SetupRoleRev2023" -> (known after apply) # forces replacement
      ~ user_name         = "arn:aws:sts::xxx:assumed-role/ms360-SetupRoleRev2023/{{SessionName}}" -> (known after apply)
        # (3 unchanged attributes hidden)
    }
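For what it's worth, the workaround we are currently considering is to stop relying on the data source entirely. This is only a sketch built from the module inputs visible above (`enable_cluster_creator_admin_permissions`, `access_entries`), not an official recommendation, and the entry key `machine_role` is an arbitrary name:

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other inputs unchanged ...

  # Skip the bootstrap entry that is derived from
  # data.aws_iam_session_context at apply time
  enable_cluster_creator_admin_permissions = false

  # Declare the machine role with a literal ARN that is known at plan
  # time, so principal_arn never flips to (known after apply)
  access_entries = {
    machine_role = {
      principal_arn = "arn:aws:iam::xxx:role/ms360-SetupRoleRev2023"
      type          = "STANDARD"

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}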

bryantbiggs (Member) commented

I think the error message returned from the API is quite clear - I would re-read the EKS documentation about what happens when you enable access entries on an existing cluster
