
[EKS] [request]: Manage IAM identity cluster access with EKS API #185

Closed

ayosec opened this issue Mar 5, 2019 · 104 comments
Labels
EKS Amazon Elastic Kubernetes Service Proposed Community submitted issue

Comments

@ayosec

ayosec commented Mar 5, 2019

Tell us about your request

CloudFormation resources to register IAM roles in the aws-auth ConfigMap.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

A Kubernetes cluster managed by EKS is able to authenticate users with IAM roles. This is very useful to grant access to Lambda functions. However, as described in the documentation, every IAM role has to be registered manually in a ConfigMap with the name aws-auth.

For every IAM role we add to the CloudFormation stack, we have to add an entry like this:

mapRoles: |
  - rolearn: "arn:aws:iam::11223344:role/stack-FooBarFunction-AABBCCDD"
    username: lambdafoo
    groups:
      - system:masters
  - ...

This process is a bit tedious, and it is hard to automate.

It would be much better if those IAM roles could be registered directly in the CloudFormation template, for example with something like this:

LambdaKubeUser:
  Type: AWS::EKS::MapRoles::Entry
  Properties:
    Cluster: !Ref EKSCluster
    RoleArn: !GetAtt FunctionRole.Arn
    UserName: lambdafoo
    Groups:
      - system:masters

This way, CloudFormation would add and remove entries in the ConfigMap as necessary, with no extra manual steps.

A similar AWS::EKS::MapUsers::Entry resource could be used to register IAM users in mapUsers.

With this addition, we could also automate the extra step of registering the worker nodes' IAM role when a new EKS cluster is created:

NodeInstanceKubeUser:
  Type: AWS::EKS::MapRoles::Entry
  Properties:
    Cluster: !Ref EKSCluster
    RoleArn: !GetAtt NodeInstanceRole.Arn
    UserName: system:node:{{EC2PrivateDNSName}}
    Groups:
      - system:bootstrappers
      - system:nodes
@ayosec ayosec added the Proposed Community submitted issue label Mar 5, 2019
@tabern tabern added the EKS Amazon Elastic Kubernetes Service label Mar 29, 2019
@abelmokadem

abelmokadem commented Apr 29, 2019

@ayosec have you created something to automate this as of now? I'm running into this when setting up a cluster using CloudFormation. Do you mind sharing your current approach?

@ayosec
Author

ayosec commented Apr 29, 2019

have you created something to automate this as of now?

Unfortunately, no. I haven't found a reliable way to do it 100% automatically.

Do you mind sharing your current approach?

My current approach is to generate the ConfigMap using a template:

  1. All relevant ARNs are available in the outputs of the stack.
  2. A Ruby script reads those outputs, and fills a template.
  3. Finally, the generated YAML is applied with kubectl apply -f -.
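For illustration, a rough shell equivalent of that flow (the stack name, output key, and template placeholder below are hypothetical):

# Read a role ARN from the stack outputs
ROLE_ARN=$(aws cloudformation describe-stacks --stack-name my-stack \
   --query "Stacks[0].Outputs[?OutputKey=='FooBarFunctionRoleArn'].OutputValue" \
   --output text)

# Substitute the ARN into the template and apply the result
sed "s|{{ROLE_ARN}}|${ROLE_ARN}|" aws-auth.yaml.tpl | kubectl apply -f -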

@dardanbekteshi

Adding this feature to CloudFormation would allow the same feature to be added to the AWS CDK. This would greatly simplify the process of adding/removing new nodes, for example.

@willejs

willejs commented Jul 2, 2019

I also thought about this. An API to manage the config map for aws-iam-authenticator is interesting, but I think it would be a bit clunky. I am using Terraform to create an EKS cluster, and this approach is a lot nicer:
terraform-aws-modules/terraform-aws-eks#355

@inductor

I'd love this

@schlomo

schlomo commented Nov 28, 2019

Anybody from AWS care to comment on this feature request?

@mikestef9
Contributor

mikestef9 commented Nov 28, 2019

With the release of Managed Nodes with CloudFormation support, EKS now automatically handles updating aws-auth config map for joining nodes to a cluster.

Does this satisfy the initial use case here, or is there a separate ask to manage adding users to the aws-auth config map via CloudFormation?

@inductor

@mikestef9 I think #554 is a similar issue that shows why you would want this kind of option.

@ayosec
Author

ayosec commented Nov 28, 2019

@mikestef9

Does this satisfy the initial use case here, or is there a separate ask to manage adding users to the aws-auth config map via CloudFormation?

My main use case is with Lambda functions.

The managed nodes feature is pretty cool, and very useful for new EKS clusters, but most of our modifications to the aws-auth ConfigMap are to add or remove roles for Lambda functions.

@tnh

tnh commented Jan 8, 2020

@mikestef9 It would be useful to allow additional people/roles to run kubectl commands.

Right now we have a CI deploy role, but we want to allow other SAML-based users to use kubectl as well.

After cluster creation, we apply the following with kubectl:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: {{ ClusterAdminRoleArn }}
      username: system:node:{{ '{{EC2PrivateDNSName}}' }}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ AdminRoleName }}
      username: admin
      groups:
        - system:masters
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ CIRoleName }}
      username: ci
      groups:
        - system:masters
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ ViewRoleName }}
      username: view

But I'd much rather have this ConfigMap created for me during cluster creation.

@nckturner

nckturner commented Feb 19, 2020

@mikestef9 Some relevant issues related to EKS users debugging authentication problems (kubernetes-sigs/aws-iam-authenticator#174 and kubernetes-sigs/aws-iam-authenticator#275) that imo are data points in favor of API and Cloudformation management of auth mappings (and configurable admin role: #554).

@nemo83

nemo83 commented Feb 25, 2020

This ^^. How can we get this implemented? Can anyone from AWS tell us whether this is going to be supported at the CF template level, or whether a workaround is needed at the eksctl level?

@inductor

@nemo83 The AWS team has tagged this issue as "researching", so that's its current stage.

@nicorikken

nicorikken commented Aug 18, 2020

I'm also looking into automating the updates to this configmap from CloudFormation. Doing so via Lambda seems doable.

My main concern with automation is race conditions on the contents of the configmap when applying updates, since the content has to be parsed; a strategic merge is not possible. If the configuration were implemented in one or more CRDs (one per entry), it would be easier to apply a patch. In that case, existing efforts on Kubernetes support for CloudFormation, like kubernetes-resources-provider, could be reused.

Update: we gave up on writing a lambda to update the configmap. The code became too complex and fragile. We now template it separately.

Update 2: I was concerned that automatically updating the configmap could leave it corrupted and thereby prevent API access. With the current workings of AWS (1 Sept 2020) there is a way to recover from an aws-auth configmap corruption:

aws-auth configmap recovery (tested 1 sept 2020)

The prerequisite is to have a pod in the cluster running with a serviceaccount that can update the aws-auth configmap. Ideally something that you can interact with, like k8s-dashboard or in our case ArgoCD.

Then, if aws-auth becomes corrupt, you can hopefully still update the configmap that way.

If that is not possible because the nodes have lost their access, we can use an EKS-managed Node Group to restore node access to the Kubernetes API. You can create an EKS-managed Node Group of just one node with the role that is also used by your cluster nodes. (Note: this is not recommended by AWS, but we abuse AWS's power to update the configmap on our behalf.)

AWS will now add this role to the aws-auth configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ...
    # this entry is added by AWS automatically
    - rolearn: < NodeInstanceRole ARN >
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes

Deletion of that Node Group will remove that entry again (AWS warns you about this), so the serviceaccount access is required to ensure another method of cluster access, like the kubectl CLI. Update the aws-auth configmap to set up that method of access, then remove the Node Group, which in turn removes the aws-auth configmap entry that was automatically created earlier. Now the persistent access path (e.g. the kubectl CLI) can be used to permanently fix the configmap and ensure the nodes have access.
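As a rough sketch (cluster name, node group name, subnet, and role ARN below are placeholders), the recovery node group could be created with the CLI like this:

# Single-node managed node group reusing the existing node role, so EKS
# re-adds that role to the aws-auth configmap on our behalf
aws eks create-nodegroup \
   --cluster-name my-cluster \
   --nodegroup-name aws-auth-recovery \
   --node-role arn:aws:iam::012345678910:role/NodeInstanceRole \
   --subnets subnet-6782e71e \
   --scaling-config minSize=1,maxSize=1,desiredSize=1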

Note: if a service were automatically but incorrectly updating the configmap, it would be harder, if not impossible, to recover. ⚠

@hellupline

I would go an extra mile and ask AWS to create an API to manage aws-auth, with associated IAM actions.

In case I delete the IAM role/user associated with the cluster creation (detail: this user/role is not visible afterwards; you have to save this info outside the cluster, or tag the cluster with it) and I don't add another admin to the cluster, I am now locked out of the cluster.

For me, this is a major issue because I use federated auth: users (including my day-to-day account) are ephemeral, and my user can be recreated without warning with another name/ID.

The idea is: can AWS add an IAM action like ESHttpGet/ESHttpPost? (That example is from Elasticsearch, since it is third-party software.)

@mikestef9
Contributor

Hey @hellupline

We are actually working on exactly that right now, an EKS API to manage IAM users and their permissions to an EKS cluster. This will allow you to manage IAM users via IaC tools like CloudFormation

@yanivpaz

@mikestef9 How is it going to be different from https://github.com/aws-quickstart/quickstart-amazon-eks/blob/main/templates/amazon-eks-controlplane.template.yaml#L109 ?

@markussiebert

markussiebert commented Dec 1, 2020

I wonder why this isn't possible with EKS clusters (but it is with self-hosted k8s clusters on AWS):

https://github.com/kubernetes-sigs/aws-iam-authenticator#crd-alpha

Even looking at the CDK implementation of auth mapping, it would be simple to get rid of some limitations that exist right now (stack barrier, imported clusters, ...).

So if something like CF support for auth mapping is implemented (I support this), it would be good if it doesn't conflict with the CRDs that I hope are coming to EKS soon.
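For reference, an identity mapping with that CRD looks roughly like this (a sketch based on the aws-iam-authenticator alpha docs; the ARN and group are placeholders and the schema may change):

apiVersion: iamauthenticator.k8s.aws/v1alpha1
kind: IAMIdentityMapping
metadata:
  name: kubernetes-admin
spec:
  # IAM role to map into the cluster
  arn: arn:aws:iam::012345678910:role/KubernetesAdmin
  username: kubernetes-admin
  groups:
    - system:masters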

@gp42

gp42 commented Jan 1, 2021

Hey @hellupline

We are actually working on exactly that right now, an EKS API to manage IAM users and their permissions to an EKS cluster. This will allow you to manage IAM users via IaC tools like CloudFormation

Any news on this issue?

@bryantbiggs
Member

Yes!

@seifrajhi

great job team !!!

@mikestef9
Contributor

EKS now supports a simplified method for managing IAM authentication and authorization.

@mikestef9
Contributor

mikestef9 commented Dec 19, 2023

More details on the feature

Enable Access Management APIs

EKS access entry APIs require you to opt in before use. This can be done during cluster creation or updated on existing clusters. You must first ensure your cluster is on a supported platform version.

Supported platform versions

Kubernetes version | EKS Platform version
1.29+              | any
1.28               | eks.6
1.27               | eks.10
1.26               | eks.11
1.25               | eks.12
1.24               | eks.15
1.23               | eks.17

New cluster

aws eks create-cluster \
   --name my-cluster \
   --role-arn arn:aws:iam::012345678910:role/myClusterRole \
   --resources-vpc-config subnetIds=subnet-6782e71e,subnet-e7e761ac \
   --access-config authenticationMode=API

Existing cluster

aws eks update-cluster-config \
   --name my-cluster \
   --access-config authenticationMode=API

EKS supports the following three values for authenticationMode:

  • CONFIG_MAP: The cluster will source authenticated IAM principals only from the aws-auth ConfigMap (this is currently the default value for newly created clusters).
  • API_AND_CONFIG_MAP: The cluster will source authenticated IAM principals from both EKS access entry APIs and the aws-auth ConfigMap, with the EKS access entry APIs taking priority.
  • API: The cluster will source authenticated IAM principals only from EKS access entry APIs.

In a future (to be determined) Kubernetes version, EKS will stop supporting the aws-auth ConfigMap as an authentication source, and you will be blocked from upgrading if you haven't switched your cluster's authenticationMode to opt in to the access entry APIs. Users are strongly encouraged to migrate to the access entry APIs as soon as possible.

Note: The default selected option at launch for new clusters in the EKS Console is API_AND_CONFIG_MAP.

Switching authentication modes on an existing cluster is a one-way operation: you can switch from CONFIG_MAP to API_AND_CONFIG_MAP, and then from API_AND_CONFIG_MAP to API. You cannot revert these operations in the opposite direction, meaning you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API, and you cannot switch back to CONFIG_MAP from API_AND_CONFIG_MAP.

eks:authenticationMode is a supported IAM condition key, so you can write IAM policies that enforce clusters be created with a certain authentication mode.
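For example, a sketch of a policy that blocks creating clusters unless access entries are the sole authentication source (Resource is left broad here for brevity):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "eks:CreateCluster",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "eks:authenticationMode": "API"
        }
      }
    }
  ]
}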

Managing/Modifying Cluster Admin

You now have control over which identity has cluster admin permissions when creating a cluster. The following command will create a cluster without any IAM identity having Kubernetes cluster admin access:

aws eks create-cluster \
   --name my-cluster \
   --role-arn arn:aws:iam::012345678910:role/myClusterRole \
   --resources-vpc-config subnetIds=subnet-6782e71e,subnet-e7e761ac \
   --access-config authenticationMode=API \
   --no-bootstrap-cluster-creator-admin-permissions

Important : bootstrapClusterCreatorAdminPermissions can only be set to False if the cluster authenticationMode is set to either API or API_AND_CONFIG_MAP.

After cluster creation (or during, as you can create access entries while a cluster is in CREATING state), you can create a standard access entry with whatever desired IAM identity as a cluster admin.

For existing clusters with authenticationMode set to CONFIG_MAP, after being updated to an EKS platform version that supports access entry and changing authenticationMode to use access entry APIs, the original IAM identity that created the cluster will be returned as an existing access entry. You can choose to delete this access entry if desired.

aws eks list-access-entries --cluster-name my-existing-cluster

{
  "accessEntries": [
    "arn:aws:iam::012345678910:role/EKSClusterCreatorCICDRole"
  ]
}
aws eks delete-access-entry --cluster-name my-existing-cluster \
   --principal-arn arn:aws:iam::012345678910:role/EKSClusterCreatorCICDRole

GitHub issues resolved:

  • Add AdminRole option at cluster creation #554
  • Add an ability to view and update the IAM entity user or role that is automatically granted system:masters permissions in the cluster's RBAC configuration #922

If you don't set the admin bootstrap parameter (or explicitly set it to True), EKS will automatically create an access entry with Kubernetes cluster administrator permissions on your behalf.

Note: The value that you set for bootstrapClusterCreatorAdminPermissions on cluster creation is not returned in the response of subsequent EKS DescribeCluster API calls. This is because the value of that field post cluster creation may not be accurate. Further changes to access control post cluster creation will always be performed with access entry APIs. The ListAccessEntries API is the source of truth for cluster access post cluster creation.

Recover cluster admin access

Previously, if you made a typo in the aws-auth ConfigMap, you could lock yourself out of the cluster. Or you could lose access to the cluster if you accidentally deleted the cluster creator IAM identity and didn't remember which IAM identity it was in order to re-create it. Now, as long as an IAM identity has permission to create access entries, you can always recover Kubernetes API access to your cluster.

aws eks create-access-entry --cluster-name my-cluster  \
   --principal-arn arn:aws:iam::012345678910:role/Admin

 aws eks associate-access-policy --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/Admin \
   --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
   --access-scope '{"type": "cluster"}'

GitHub issues resolved:

  • Allow customer to rollback aws-auth configmap when update the configuration wrong (Solving the goal of the request) #1209

View Kubernetes resources from EKS Console

A common challenge is EKS Console users (who didn't create the cluster and so don't have Kubernetes cluster admin permissions by default) struggling to figure out why they can't view Kubernetes resources in the Console. Previously, a separate IAM principal with cluster access would have to add an additional aws-auth ConfigMap entry for that Console user's IAM principal. Now, a pop-up will appear for IAM principals using the EKS Console that takes them to the create access entry screen with their IAM principal pre-populated. As long as the Console user has IAM permissions to call the EKS access entry APIs, they can give themselves Kubernetes permissions from the Console itself, without needing to create aws-auth ConfigMap entries anymore.

Note: The Console user's IAM identity still first requires the permission eks:AccessKubernetesApi to view resources in the console, irrespective of whether they have Kubernetes-level permissions granted through access entry APIs.
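A sketch of the IAM permissions such a Console user might carry (in practice, scope Resource down to specific clusters):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:AccessKubernetesApi",
        "eks:DescribeCluster",
        "eks:ListAccessEntries",
        "eks:CreateAccessEntry",
        "eks:AssociateAccessPolicy"
      ],
      "Resource": "*"
    }
  ]
}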

GitHub issues resolved:

  • Manage aws-auth ConfigMap in AWS Console (access entries don't actually interact with aws-auth ConfigMap, but this solves the goal of the request) #1278

Authentication

The following examples show how to authenticate IAM principals to an EKS cluster, as the preferred alternative to the aws-auth ConfigMap. If no EKS access policies are attached to the principal (as shown in the authorization examples below), no Kubernetes permissions are granted, but the principal will be able to successfully authenticate to the cluster, and more fine-grained authorization can be implemented using RBAC.

Username is not a required input when creating an access entry. If not specified, the username value passed to Kubernetes for authentication decisions will map to the value returned from the IAM STS get-caller-identity API. For IAM users, this is the original IAM user ARN. For IAM roles, this is the assumed role ARN with session name, of the format arn:aws:sts::012345678910:assumed-role/<role-name-without-path>/{{SessionName}}. This value can change depending on the session name, so for use with RBAC, you will want to set Kubernetes groups on the access entry to reference in Kubernetes role bindings.

We recommend leaving the custom username blank, as the generated username includes the session name, which will appear in CloudTrail logs as an added security benefit. If using a custom username with IAM roles, it is strongly encouraged to set the {{SessionName}} macro. You can specify the following macros in custom usernames: {{AccessKeyID}}, {{AccountID}}, {{SessionName}}. Custom usernames can't start with system:, eks:, aws:, amazon:, or iam:.

Note: IAM principals referenced in an access entry must already exist in IAM.

Authenticate an IAM user to a cluster

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:user/MyUser

Note: The Kubernetes username used for further RBAC authorization in this case will be the same as the input principal ARN arn:aws:iam::012345678910:user/MyUser.

Note: If you change your IAM username, that user will no longer have access to the cluster. You will need to recreate an access entry with the updated IAM user ARN.

Note: IAM best practices recommend using roles with temporary credentials rather than users with long-term credentials.

Authenticate an IAM role to a cluster and specify Kubernetes groups

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --kubernetes-groups dev test

Note: The Kubernetes username generated in this case will be arn:aws:sts::012345678910:assumed-role/MyRole/<session-name>.
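To make those groups useful, you would reference them in Kubernetes RBAC, for example with a binding like this (the namespace and binding name are placeholders):

# Grants the "dev" group set on the access entry above edit rights in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit
  namespace: dev-apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: dev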

Authenticate an IAM role to a cluster with a custom username

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --username my-username

Note: No groups are specified here, because the static username of my-username can be referenced in Kubernetes RBAC policies. However, this is not recommended, as the assumed role session name will be lost from Kubernetes audit logs.

Authenticate an IAM role to a cluster with a custom username using macros

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --username my-username:{{AccountID}}:{{SessionName}} \
   --kubernetes-groups dev test

Note: The macros will be expanded when the username reaches Kubernetes for further RBAC authorization decisions.

Note: As illustrated in this example, when using the {{SessionName}} macro, a : must be part of the custom username at some point before the macro (it does not have to directly precede the macro as in the example above).

Update the username and/or groups of an existing access entry

aws eks update-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --username my-other-username \
   --kubernetes-groups my-other-group

Note: Be careful performing this operation if you already have existing RBAC resources that reference previous username/groups. RBAC will also have to be updated to support the new username/groups.

Note: Groups are declarative. Enter all desired groups with the update call.

Creating an access entry with a role ARN containing a path

Authentication decisions for EKS access entries are matched against the IAM principal ID of the IAM identity, rather than using a string match between the input value of the IAM identity and the assumed-role response returned from sts get-caller-identity (as is the case with entries in the aws-auth ConfigMap).

When creating an EKS access entry, enter the fully qualified IAM role ARN as shown below (do not strip off the path as you need to do in the aws-auth ConfigMap today):

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/teams/tooling/devToolingOperator

Note: The username in the returned access entry will not include the path: arn:aws:sts::012345678910:assumed-role/devToolingOperator/{{SessionName}}.

Authenticate an IAM principal from a separate account

From account 012345678910, run the following:

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::567890123456:role/myOtherAccountRole

An identity assuming the myOtherAccountRole role in account 567890123456 is now able to authenticate to the my-cluster cluster in account 012345678910.

Eventual Consistency

Similar to AWS IAM behavior, EKS access entries are eventually consistent, and may take several seconds to be effective after the initial API call returns successfully. You must design your applications to account for these potential delays. We recommend that you do not include access entry create/updates in the critical, high-availability code paths of your application. Instead, make changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them.
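For example, a setup script might poll before proceeding (a sketch; the cluster name and role ARN are placeholders, and a stricter check would exercise the Kubernetes API with the target role):

# Wait until the new access entry is visible to the EKS API
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  if aws eks describe-access-entry --cluster-name my-cluster \
       --principal-arn arn:aws:iam::012345678910:role/MyRole >/dev/null 2>&1; then
    echo "access entry visible"
    break
  fi
  sleep 5
done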

GitHub issues resolved:

  • Manage IAM identity cluster access with EKS API #185
  • Federated roles containing paths don't work properly with EKS #573
  • Privilege Escalation using aws-auth configmap #795
  • Add aws-iam-authenticator as a CRD (again, this feature does not interact with the aws-auth config map, but solves the goal of the request) #550

Authorization

The Kubernetes API server supports an ordered list of authorization modules, allowing modules to approve, deny, or pass on a particular action. EKS clusters now include an additional authorizer separate from RBAC, and you can optionally attach EKS-defined access policies to grant Kubernetes authorization permissions to the IAM principal authenticated with an access entry. Authorization can be scoped to a namespace or to the whole cluster. You can specify up to 25 namespace selectors, with optional wildcard suffix matching, to restrict that principal's access to specific namespaces.

In EKS clusters, RBAC policies are currently evaluated prior to EKS access policies, so any Kubernetes Role Based Access Control (RBAC) ClusterRoleBinding or RoleBinding that specifies an IAM principal will be evaluated for an allow authorization decision before EKS Access Policies. This may change in the future, although in practice the order today does not matter, because neither RBAC nor EKS access policies support deny actions.

EKS predefines several managed access policies that mirror the default Kubernetes user-facing roles, including cluster-admin, admin, edit, and view. To view the list of all policies, you can run the following command:

aws eks list-access-policies

The examples below assume you have already created the access entry to authenticate the IAM principal to the cluster.

Give read only permissions for an IAM principal to an entire cluster

aws eks associate-access-policy --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
   --access-scope '{"type": "cluster"}'

Give editor permissions for an IAM principal to specific namespaces

aws eks associate-access-policy --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole2 \
   --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
   --access-scope '{"type": "namespace", "namespaces": ["app-a", "app-b"]}'

Give admin permissions to an IAM identity to access multiple namespaces with a wildcard selector

aws eks associate-access-policy --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole3 \
   --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy \
   --access-scope '{"type": "namespace", "namespaces": ["dev-*"]}'

Update list of allowed namespaces for an existing access entry

aws eks associate-access-policy --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole2 \
   --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
   --access-scope '{"type": "namespace", "namespaces": ["app-a", "app-c"]}'

Node Authentication and Authorization

Access entries have an optional input type field for use with authorizing worker node roles to join a cluster. The following types are supported by the access entry API:

  • EC2_LINUX
  • EC2_WINDOWS
  • FARGATE_LINUX
  • STANDARD

STANDARD is the default type and will be used if no type is passed on access entry creation; it is the default experience for non-node-role access entries. As with the aws-auth ConfigMap today, there is no need to manually create access entries for IAM roles used with managed node groups and Fargate: MNG and Fargate will continue to manage node entries on your behalf, irrespective of authenticationMode. When you switch an existing cluster's authenticationMode from CONFIG_MAP to API_AND_CONFIG_MAP, entries in the aws-auth ConfigMap managed by MNG and Fargate will be automatically migrated to node-type access entries. Once your cluster is migrated to an authentication mode that supports access entries, MNG and Fargate will only manage entries using the access entry APIs, not the aws-auth ConfigMap.

Note: aws-auth ConfigMap entries not associated with MNG or Fargate are not automatically migrated when switching authentication modes. These entries must be migrated manually.

Note: In practice, you will never need to manually use the FARGATE_LINUX type when creating an access entry, because that entry would always be created on your behalf by the EKS/Fargate service. If you do pre-create the access entry, EKS/Fargate will simply skip trying to create the entry itself.

The examples below would be required if using self-managed node groups or Karpenter.

aws eks create-access-entry --cluster-name my-cluster  \
   --principal-arn arn:aws:iam::012345678910:role/MyLinuxNodeRole \
   --type EC2_LINUX

aws eks create-access-entry --cluster-name my-cluster  \
   --principal-arn arn:aws:iam::012345678910:role/MyWindowsNodeRole \
   --type EC2_WINDOWS

Note: It is required to use separate IAM roles for the EC2_LINUX, EC2_WINDOWS, and FARGATE_LINUX types.

Note: You cannot attach EKS access policies to access entries with type of EC2_LINUX, EC2_WINDOWS, or FARGATE_LINUX. EKS automatically handles the permissions required for worker nodes with those access entries to join the cluster.

Note: Cross account entries are not permitted for node type access entries.

Note: The IAM identity creating node type access entries of either EC2_LINUX or EC2_WINDOWS must have iam:PassRole permission. This permission will be validated by EKS, and a failure will be returned if not present.

GitHub issues resolved:

  • separate aws-auth for worker node and others #822
  • add parameter at cluster creation for worker IAM role #727
  • Managed Node Groups using role with extra path do not join cluster #926

Migration fully off the aws-auth ConfigMap

The aws-auth ConfigMap can still be used until a future, to-be-announced Kubernetes version where it will no longer be supported, but migration to the access entry API is strongly encouraged.

Migrate an entry in the aws-auth ConfigMap to access entry API

For an existing cluster, change your cluster authenticationMode from CONFIG_MAP to API_AND_CONFIG_MAP. Before deleting anything in the aws-auth ConfigMap, create equivalent access entries, specifying the same username and/or groups already used in the aws-auth ConfigMap if necessary. After creating the access entries, delete the corresponding entries from the ConfigMap and validate your environment. Once you have migrated everything to access entry APIs, you can switch your cluster authenticationMode to API to turn off the aws-auth ConfigMap as a source of authenticated IAM principals.
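A sketch of that sequence with the CLI (the role ARN, username, and group below are placeholders standing in for one of your existing mapRoles entries):

# 1. Opt the existing cluster in to access entries alongside the ConfigMap
aws eks update-cluster-config --name my-cluster \
   --access-config authenticationMode=API_AND_CONFIG_MAP

# 2. Recreate the ConfigMap entry as an access entry
aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/ci-deployer \
   --username ci \
   --kubernetes-groups deployers

# 3. Remove the now-redundant entry from the ConfigMap and validate
kubectl -n kube-system edit configmap aws-auth

# 4. Once everything is migrated, turn off the ConfigMap as a source
aws eks update-cluster-config --name my-cluster \
   --access-config authenticationMode=API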

AWS Service Integrations

The access entries feature simplifies how other AWS services can obtain Kubernetes permissions needed to perform actions against EKS clusters. In the future, AWS services with EKS integrations (including EMR, Batch, Resilience Hub) will update their Service Linked Roles to contain permissions to create access entries. When you take an action in that service, the service SLR will automatically create an access entry and associate a specific EKS access policy for that service. There is no need for users to manually create access entries and associated permissions. For example, when Resilience Hub migrates to access entry, all of the steps currently outlined in this doc page will no longer be required. You can think of these "service-linked access entries" as the Kubernetes permission equivalent of AWS service-linked-roles.

Example policy that a future service might add to its SLR to give itself permission to create a specific access entry with only specific associated access policies:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:CreateAccessEntry",
      "Resource": "arn:aws:eks:*:*:cluster/*",
      "Condition": {
        "ArnEquals": {
          "eks:principalArn": "${aws:principalArn}"
        },
        "StringEquals": {
          "aws:RequestTag/aws:eks:managed": "${aws:principalArn}"
        },
        "StringEqualsIfExists": {
          "eks:username": ""
        },
        "ForAllValues:StringEqualsIfExists": {
          "eks:kubernetesGroups": [ "" ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "eks:DeleteAccessEntry",
      "Resource": "arn:aws:eks:*:*:access-entry/*/role/*",
      "Condition": {
        "ArnEquals": {
          "eks:principalArn": "${aws:principalArn}"
        },
        "StringEquals": {
          "aws:ResourceTag/aws:eks:managed": "${aws:principalArn}"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "eks:AssociateAccessPolicy",
      "Resource": "arn:aws:eks:*:*:access-entry/*/role/*",
      "Condition": {
        "ArnEquals": {
          "eks:principalArn": "${aws:principalArn}"
        },
        "StringEquals": {
          "eks:policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSNodeOperatorClusterPolicy",
          "eks:accessScope": "cluster",
          "aws:ResourceTag/aws:eks:managed": "${aws:principalArn}"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "eks:AssociateAccessPolicy",
      "Resource": "arn:aws:eks:*:*:access-entry/*/role/*",
      "Condition": {
        "ArnEquals": {
          "eks:principalArn": "${aws:principalArn}"
        },
        "StringEquals": {
          "eks:policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSNodeOperatorPolicy",
          "eks:accessScope": "namespace",
          "aws:ResourceTag/aws:eks:managed": "${aws:principalArn}"
        },
        "ForAllValues:StringEquals": {
          "eks:namespaces": [
            "my-namespace"
          ]
        }
      }
    }
  ]
}

You can use this example as a reference to write your own IAM policies that scope down what type of access entries IAM principals can create.

IAM policy control for access entries
EKS access entry APIs support condition context keys to enable fine-grained control over the creation and management of access entries.

EKS supports the following condition keys associated with this feature:

  • eks:principalArn
  • eks:username
  • eks:kubernetesGroups
  • eks:accessScope
  • eks:namespaces
  • eks:policyArn
  • eks:authenticationMode
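For example, a sketch of a policy that lets a principal create access entries only with a restricted set of Kubernetes groups (the account ID, cluster name, and groups are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:CreateAccessEntry",
      "Resource": "arn:aws:eks:*:012345678910:cluster/my-cluster",
      "Condition": {
        "ForAllValues:StringEquals": {
          "eks:kubernetesGroups": ["dev", "test"]
        }
      }
    }
  ]
}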

Deletion

Deleting an access entry will remove any associated access policies. There is no need to first disassociate access policies.

aws eks delete-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole

Deleting a cluster will also automatically delete any access entries currently associated with that cluster.

SSO/AWS IAM Identity Center

There is no direct change to improve this user experience. You still need to add each auto-generated role as a separate access entry. But automating that process on your own should become easier, since everything is now an AWS API. We plan to improve this UX in the future by allowing users to directly associate IAM Identity Center permission sets to EKS clusters. Follow this GitHub issue for details: EKS authentication rolearn wildcard support #474.
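A rough sketch of that automation, assuming the Identity Center auto-generated roles live under the /aws-reserved/sso.amazonaws.com/ path prefix (adjust the prefix and cluster name to your environment):

# Create an access entry for every Identity Center auto-generated role
for arn in $(aws iam list-roles \
    --path-prefix /aws-reserved/sso.amazonaws.com/ \
    --query 'Roles[].Arn' --output text); do
  aws eks create-access-entry --cluster-name my-cluster --principal-arn "$arn"
done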

@ab77

ab77 commented Dec 26, 2023

There may be a bug in the CloudFormation implementation of this feature: it seems that Access Policies are lost on STACK_UPDATE (they are present after the initial create, though). In the template, resources are defined as follows:

  AccessEntryAdmin:
    Type: AWS::EKS::AccessEntry
    Properties:
      ClusterName:
        Ref: Cluster
      PrincipalArn:
        Fn::Sub: arn:${AWS::Partition}:iam::${AWS::AccountId}:role/admin
      KubernetesGroups:
      - cluster-admin
      AccessPolicies:
      - PolicyArn:
          Fn::Sub: arn:${AWS::Partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
        AccessScope:
          Type: cluster

.. after the update, the only access policy that isn't wiped is the one attached to the creator IAM role.

@lynnnnnnluo

Hello, thanks for your feedback! I attempted to reproduce the issue in the following way, but I wasn't able to: I created an access entry with the template you provided, then triggered a stack update by updating the stack tags. Checking the EKS console, the access policies are still there.

Could you share more detail about the CFN template before/after the stack update so I can try reproducing?

@ab77

ab77 commented Dec 27, 2023

Could you describe more on the CFN template before/after the stack update so I can try reproducing ?

Hello, I'll try to reproduce over here again (this is my cluster definition for now):

  Cluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Sub "${AWS::StackName}"
      RoleArn: !GetAtt Role.Arn
      AccessConfig:
        AuthenticationMode: API
        BootstrapClusterCreatorAdminPermissions: true
      ResourcesVpcConfig:
        EndpointPrivateAccess: true
        EndpointPublicAccess: true
        SecurityGroupIds:
          - Fn::ImportValue: !Sub "${ParentClientStack}-ClientSecurityGroup"
        SubnetIds:
          - Fn::ImportValue: !Sub "${ParentVPCStack}-SubnetAPrivate"
          - Fn::ImportValue: !Sub "${ParentVPCStack}-SubnetBPrivate"
      Version: !Ref Version
      EncryptionConfig:
        - Provider:
            KeyArn:
              Fn::ImportValue: !Sub "${ParentKmsKeyStack}-KeyArn"
          Resources:
            - secrets
      KubernetesNetworkConfig:
        IpFamily: !Ref IpFamily

.. also, I looked through CloudTrail logs and there are DisassociateAccessPolicy events from CFN, so it's definitely triggering disassociations for some reason on updates:

    "eventTime": "2023-12-26T20:36:16Z",
    "eventSource": "eks.amazonaws.com",
    "eventName": "DisassociateAccessPolicy",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "cloudformation.amazonaws.com",
    "userAgent": "cloudformation.amazonaws.com",
    "requestParameters": {
        "policyArn": "arn%3Aaws%3Aeks%3A%3Aaws%3Acluster-access-policy%2FAmazonEKSClusterAdminPolicy",
        "name": "foo-bar-eks-1",
        "principalArn": "arn%3Aaws%3Aiam%3A%3A1234567890%3Arole%2Fadmin"
    },

@lynnnnnnluo

Thanks for your reply! It turns out there is a code bug in the CFN update path; we will work on a fix as soon as possible.

@lynnnnnnluo

Hello, the fix for the above issue has shipped. Thanks again for the feedback.

@joebowbeer
Contributor

@mikestef9 Do both the EKS User Guide and EKS Best Practices Guide need updating now that the Cluster Access Manager API has been added and is the preferred way to manage access of AWS IAM principals to Amazon EKS clusters?

aws/aws-eks-best-practices#463

@Nuru

Nuru commented Feb 22, 2024

@mikestef9 Thank you very much for the detailed documentation on this set of features.

There seems to be a feature missing from the CLI, and I don't know where to report it. As you said, you can associate an IAM principal with Kubernetes groups without associating an access policy:

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --kubernetes-groups dev test

However, I cannot find a way to retrieve the list of Kubernetes groups associated with a principal via the CLI. I would expect this to be either get-access-entry or an optional additional output of list-access-entries (or both), but no such luck.

Update

🤦 Thank you @bryantbiggs for telling me the command I want is there, I just didn't see it:

aws eks describe-access-entry \
   --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole

@bryantbiggs
Member

describe-access-entry contains the kubernetesGroups property
