Introduction
Since the initial Amazon Elastic Kubernetes Service (Amazon EKS) launch, it has supported AWS Identity and Access Management (AWS IAM) principals as entities that can authenticate against a cluster. This removes the burden on administrators of maintaining a separate identity provider. Using AWS IAM also allows AWS customers to apply their existing AWS IAM knowledge and experience, and enables administrators to use AWS IAM security features, such as AWS CloudTrail audit logging and multi-factor authentication.
Until now, administrators used Amazon EKS APIs to create clusters, then switched to the Kubernetes API to manage mappings of AWS IAM principals and their Kubernetes permissions. This manual, multi-step process complicated the way users were granted access to Amazon EKS clusters. It also prevented administrators from revoking cluster-admin (root-like) permissions from the principal that was used to create the cluster. The need to call different APIs (AWS and Kubernetes) to manage access further increased the likelihood of misconfiguration.
Feature Overview
The Amazon EKS team has improved the cluster authentication (AuthN) and authorization (AuthZ) user experience with improved cluster access management controls. As of the date of this post, cluster administrators can now grant AWS IAM principals access to all supported versions (v1.23 and beyond) of Amazon EKS clusters and Kubernetes objects directly through Amazon EKS APIs. This new functionality relies on two new concepts: access entries and access policies. An access entry is a cluster identity—directly linked to an AWS IAM principal user or role—that is used to authenticate to an Amazon EKS cluster. An Amazon EKS access policy authorizes an access entry to perform specific cluster actions.
Cluster access management API
The new cluster access management API objects and commands allow administrators to define access management configurations—including during cluster creation—using familiar infrastructure as code (IaC) tools such as AWS CloudFormation, Terraform, or the AWS Cloud Development Kit (CDK).
The improved cluster access management controls enable administrators to completely remove or refine the permissions automatically granted to the AWS IAM principal used to create the cluster. If a misconfiguration occurs, then cluster access can be restored simply by calling an Amazon EKS API, as long as the caller has the necessary permissions. The aim of these new controls is to reduce the overhead associated with granting users and applications access to clusters and objects within those clusters.
Note: We have always recommended that AWS IAM roles be used as principals to create Amazon EKS clusters. Roles provide a layer of indirection that decouples users from permissions. Users can be removed from roles without having to adjust the AWS IAM policies that provide permissions to the cluster creator roles.
Kubernetes authorizers
Access policies are Amazon EKS-specific policies that assign Kubernetes permissions to access entries. At launch, Amazon EKS supports only predefined and AWS managed policies. Access policies are not AWS IAM entities and are defined and managed by Amazon EKS.
In Kubernetes, different AuthZ services—known as authorizers—are chained together in a sequence to make AuthZ decisions about inbound API server requests. This allows custom AuthZ services to be used with the Kubernetes API server. The new feature allows you to use upstream role-based access control (RBAC) in combination with access policies. Both the upstream RBAC authorizer and the Amazon EKS authorizer support allow and pass (but not deny) AuthZ decisions. When an access entry is created with Kubernetes usernames or groups, the upstream RBAC authorizer evaluates the request and immediately returns an AuthZ decision on an allow outcome. If the RBAC authorizer can't determine the outcome, then it passes the decision to the Amazon EKS authorizer. If both authorizers pass, then a deny decision is returned.
Walkthrough
Getting started
Cluster access management using the access entry API is an opt-in feature for new and existing Amazon EKS clusters running v1.23 or later. By default, Amazon EKS uses the latest Amazon EKS platform version when you create a new cluster, and it automatically upgrades all existing clusters to the latest platform version for their corresponding Kubernetes minor version. You can use the new cluster access management controls once automatic platform version upgrades have rolled out to your existing clusters. Or, you can update your cluster to the next supported Kubernetes minor version to take advantage of this feature.
To get started with this feature, cluster administrators create Amazon EKS access entries for the desired AWS IAM principals. Please see IAM policy control for access entries to configure AWS IAM permissions for administrators. After these access entries are created, administrators can grant access to those entries by assigning access policies. Amazon EKS access policies include permission sets that support common use cases of administration, editing, or read-only access to Kubernetes resources.
The following command and output provide an up-to-date list of supported access policies for managing cluster access:
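A sketch of that listing, assuming the AWS CLI is configured with sufficient Amazon EKS permissions (the `--query` expression assumes the response field is `accessPolicies`):

```shell
# List the Amazon EKS managed access policies that can be
# associated with access entries, trimmed to the policy ARNs.
aws eks list-access-policies \
  --query 'accessPolicies[].arn' \
  --output text
```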
The following Amazon EKS access policies are based on these user-facing roles published in the Kubernetes documentation:
- AmazonEKSClusterAdminPolicy – cluster-admin
- AmazonEKSAdminPolicy – admin
- AmazonEKSEditPolicy – edit
- AmazonEKSViewPolicy – view
With the cluster access management controls, only AWS IAM principals with the appropriate permissions can authorize other AWS IAM principals to access Amazon EKS clusters. Permission is granted by creating access entries and associating access policies with those access entries. Be aware that access granted to AWS IAM principals by the Amazon EKS access policies is separate from permissions defined by any AWS IAM policy associated with the AWS IAM principal.
In short, only the AWS IAM principal and the applied Amazon EKS access entry policies are used by the cluster access management authorizer. The following diagram illustrates the workflow:
In the next sections, we’ll explore several use cases that are now possible via the new Amazon EKS cluster access management APIs.
Create or update a cluster to use access management API
With the introduction of this feature, Amazon EKS supports three modes of authentication: CONFIG_MAP, API_AND_CONFIG_MAP, and API. You can enable a cluster to use the access entry APIs by setting authenticationMode to API or API_AND_CONFIG_MAP. Use authenticationMode CONFIG_MAP to continue using the aws-auth configMap exclusively. When API_AND_CONFIG_MAP is enabled, the cluster sources authenticated AWS IAM principals from both the Amazon EKS access entry APIs and the aws-auth configMap, with priority given to the access entry API.
Amazon EKS cluster access management is now the preferred means to manage access of AWS IAM principals to Amazon EKS clusters. While we made access management easier and more secure, we did so without disrupting cluster operations or current configurations. With this approach you can explore cluster access management for your needs, and plan subsequent migrations to cluster access management when it best fits your schedule.
Amazon EKS suggests updating existing clusters to use authenticationMode API_AND_CONFIG_MAP and creating equivalent access entries by specifying the same identities and/or groups used in the aws-auth configMap. In this mode, an access entry, along with any associated username and groups, is evaluated before the configuration map during authentication. When no access entry exists for the principal, the ConfigMap is inspected for the presence of the principal and its associated username and groups.
You can update an existing cluster's configuration to enable the API authenticationMode. Make sure the platform version is updated before you run the update-cluster-config command. For existing clusters using CONFIG_MAP, you'll have to first update the authenticationMode to API_AND_CONFIG_MAP and then to API.
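The two-step update might look like the following sketch, assuming a hypothetical cluster named my-cluster:

```shell
# Step 1: move from CONFIG_MAP to API_AND_CONFIG_MAP.
aws eks update-cluster-config \
  --name my-cluster \
  --access-config authenticationMode=API_AND_CONFIG_MAP

# Step 2 (after the first update completes and access entries
# are in place): move to API only.
aws eks update-cluster-config \
  --name my-cluster \
  --access-config authenticationMode=API
```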
Switching authentication modes on an existing cluster is a one-way operation: you can switch from CONFIG_MAP to API_AND_CONFIG_MAP, and then from API_AND_CONFIG_MAP to API, but you cannot revert these operations in the opposite direction. That is, you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API, and you cannot switch back to CONFIG_MAP from API_AND_CONFIG_MAP.
Removing the default cluster administrator
Until now, when an Amazon EKS cluster was created, the principal used to provision the cluster was permanently granted Kubernetes cluster-admin privileges. From this scenario emerged the best practice of using an AWS IAM role to create Amazon EKS clusters. Using an AWS IAM role provided a layer of indirection to control who could assume the role using AWS IAM. By removing the ability to assume the role, or by removing the role altogether, you could revoke a user's access to the cluster.
As of the date of this post, clusters can be created with the AWS IAM principal of your choosing or with no permissions at all. The example below uses the bootstrapClusterCreatorAdminPermissions=false flag for access-config to prevent the principal—used to create the cluster—from being granted cluster administrator access.
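A minimal sketch of such a cluster creation, with a placeholder cluster role ARN and subnet IDs:

```shell
# Create a cluster whose creator is NOT granted cluster-admin.
# The role ARN and subnet IDs below are hypothetical.
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222 \
  --access-config authenticationMode=API,bootstrapClusterCreatorAdminPermissions=false
```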
To verify that no access entries exist for the cluster, the following AWS CLI command can be used to list existing cluster access entries:
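That verification might look like this, assuming the my-cluster name from the creation step; with bootstrapClusterCreatorAdminPermissions=false and no other entries added, the returned list should be empty:

```shell
# List all access entries on the cluster.
aws eks list-access-entries --cluster-name my-cluster
```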
If we try to use the AWS IAM principal with the kubectl auth can-i --list command, we see that the principal—even with a properly configured kube config file—is not authenticated to the cluster:
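A sketch of that check, assuming the placeholder my-cluster name:

```shell
# Configure kubectl for the cluster, then ask what the current
# principal can do. With no access entry for the principal, the
# request is rejected.
aws eks update-kubeconfig --name my-cluster
kubectl auth can-i --list
```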
To remove the cluster creator administrator role from an existing cluster, execute the following command on the associated access entry, which will appear once you've updated your cluster to an authentication mode that supports access entries.
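The removal might look like the following sketch; the principal ARN is a placeholder for the role that was used to create the cluster:

```shell
# Delete the access entry for the cluster-creator principal,
# revoking its cluster-admin access.
aws eks delete-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ClusterCreatorRole
```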
Adding cluster administrators to existing clusters
Now that we’ve seen how to handle a cluster administrator during cluster creation, we’ll explore how to add cluster administrators to existing clusters. The following AWS CLI commands can be used to perform the following tasks:
- Create a cluster access entry to be granted cluster administrator access
- Associate the cluster administrator access policy to the aforementioned cluster access entry
With the access entry created and tied to an AWS IAM principal, the AmazonEKSClusterAdminPolicy is assigned by running the following AWS CLI command. Since we are creating a cluster administrator entry, we set the --access-scope type=cluster argument in the command:
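Both steps might look like the following sketch, with a hypothetical ClusterAdminRole principal:

```shell
# Create the access entry for the admin principal.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ClusterAdminRole

# Associate the cluster-admin access policy, scoped to the
# entire cluster.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ClusterAdminRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```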
Adding namespace administrators
Namespace administrators have administrator permissions that are scoped to specific namespaces. They aren’t able to create cluster-scoped resources, like namespaces. To illustrate this use case, we’ll create an access entry based on a read-only AWS IAM role. This read-only role has read-only access to the underlying AWS account. While this example may seem contrived, it illustrates the difference between AWS IAM policies and Amazon EKS cluster access policies. For reference, the ReadOnly role has one attached AWS IAM policy—arn:aws:iam::aws:policy/ReadOnlyAccess—that gives the role read-only access to the underlying AWS account.
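The create step referenced below might look like this sketch, with a placeholder account ID for the ReadOnly role:

```shell
# Create an access entry underpinned by the ReadOnly AWS IAM role.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ReadOnly
```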
The command above created a cluster access entry underpinned by the aforementioned ReadOnly AWS IAM role. Next, we’ll associate the AmazonEKSAdminPolicy to the newly-created access entry.
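The association might look like the following sketch; the namespace scope grants admin permissions only over namespaces matching test*:

```shell
# Grant namespace-scoped admin permissions over test* namespaces.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ReadOnly \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy \
  --access-scope type=namespace,namespaces=test*
```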
After executing this command, the AWS IAM ReadOnly role—that only has read-only access to the underlying AWS account—now has namespace-administrator access to the test* namespaces.
Adding read-only access users
To get started with this use case, we use the existing AWS IAM read-only role that we used in the preceding use case. To do that, we need to disassociate the existing access policy. The following commands remove the access policy from the namespace admin access entry, and then list any policies associated with the access entry.
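These two steps might look like the following sketch, continuing with the placeholder ReadOnly role from the previous use case:

```shell
# Remove the namespace-admin policy from the ReadOnly role's
# access entry.
aws eks disassociate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ReadOnly \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy

# List any remaining policies associated with the access entry.
aws eks list-associated-access-policies \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ReadOnly
```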
As you can see, with the preceding commands we disassociated the access policy that granted our AWS IAM read-only role namespace admin access to the cluster. With the following commands we'll associate the AmazonEKSViewPolicy to the access entry to provide cluster-wide read-only access to the AWS IAM role principal, and then verify that the access entry has read-only access across the cluster.
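A sketch of the association and verification, still using the placeholder ReadOnly role:

```shell
# Grant cluster-wide read-only access.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ReadOnly \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster

# Verify, from the role's perspective, what the entry can do.
kubectl auth can-i --list
```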
Using cluster access entries with Kubernetes Role-Based Access Control (RBAC)
As previously mentioned, the cluster access management controls and associated APIs don’t replace the existing RBAC authorizer in Amazon EKS. Rather, Amazon EKS access entries can be combined with the RBAC authorizer to grant cluster access to an AWS IAM principal while relying on Kubernetes RBAC to apply desired permissions.
For example, the following Amazon EKS API command creates a cluster access entry and subsequently adds a Kubernetes group to that entry. The kubectl apply command applies a cluster role binding resource which binds the Kubernetes group to the cluster-admin cluster role resource. The result is a cluster access entry with permissions granted using Kubernetes RBAC.
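The two steps might look like the following sketch; the principal ARN, group name, and binding name are placeholders:

```shell
# Create an access entry that maps the principal to a
# Kubernetes group, with no access policy attached.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ekstest \
  --kubernetes-groups eks-admins

# Bind that group to the built-in cluster-admin ClusterRole
# using Kubernetes RBAC.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-admins-binding
subjects:
- kind: Group
  name: eks-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
```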
You can use the kubectl auth can-i --list command to verify that the cluster access entry has cluster administrator permissions and can perform all actions on all Kubernetes resources.
Deleting the AWS IAM principal from under the access entry
The reference of a cluster access entry to its underlying AWS IAM principal is unique, as seen in the accessEntryArn in the following create-access-entry output snippet:
Once an access entry is created, the underlying AWS IAM principal cannot be changed while keeping cluster access; the access entry and associated access policies must be recreated. In the following scenario, these setup steps were completed:
- An AWS IAM role called ekstest was created
- A cluster access entry was created using the ekstest role
- The cluster access AmazonEKSViewPolicy was associated with the access entry underpinned by the ekstest AWS IAM role
After setup, the access was verified:
The kubectl whoami plugin indicates the currently authenticated Kubernetes cluster principal.
Next, the ekstest AWS IAM role was deleted, recreated, and reused to authenticate to the Amazon EKS cluster. The following commands show that while the ekstest AWS IAM role successfully authenticated to the Amazon EKS cluster, the access entry no longer authorizes the new ekstest role instance:
The new ekstest role may look the same, with the same ARN, but the RoleId—returned by the following aws iam get-role command—is different. This RoleId—UserId in the case of a user principal—is used by the cluster access entry datastore to link the access entry to the AWS IAM role or user principal.
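The RoleId lookup might look like the following sketch:

```shell
# The stable RoleId (not the ARN) is what the access entry
# datastore links to; recreating the role yields a new RoleId.
aws iam get-role \
  --role-name ekstest \
  --query 'Role.RoleId' \
  --output text
```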
Note: Due to the separation of Amazon EKS and AWS IAM permissions, the Amazon EKS API doesn't expose the AWS IAM principal identifiers (RoleId, or UserId in the case of a user principal) that are used to reference AWS IAM principals.
To prevent non-deterministic behavior and avoid incorrect security settings, the best practice for changing or recreating the underlying AWS IAM principal is to first delete the access entry from the specific Amazon EKS cluster via the delete-access-entry command. Then when the AWS IAM principal is deleted and recreated, the access entry can be recreated, and the required access policies can be associated.
Conclusion
Amazon EKS cluster access management is now the preferred means to manage access of AWS IAM principals to Amazon EKS clusters. With cluster access management you can continue to leverage principals maintained by AWS IAM, as Amazon EKS access entries, and apply Kubernetes permissions with cluster access policies. Cluster access management uses standard API approaches to extend the Kubernetes AuthZ model with Amazon EKS authorizers. Together, this rich feature set provides AWS IAM integration without disrupting the existing Kubernetes security schemes currently used in Amazon EKS. Your Kubernetes RBAC schemes will still work, but you no longer have to edit the aws-auth configMap.
With cluster access management you can also remove the cluster creator from newly-created clusters without losing access to the cluster. This feature enables better DevSecOps practices through automation, least-privileged access, and time-based access.
While using cluster access management does allow for cleaner integration to AWS IAM principals for AuthN, AuthZ permissions are separate from AWS IAM and are modeled after well-known Kubernetes permissions. This means that while you can use AWS IAM to manage your AuthN principals, your Amazon EKS permissions are separate from your AWS IAM permissions. The result is a more flexible AuthZ model where AWS IAM permissions do not impact Amazon EKS cluster permissions.
Finally, cluster access management allows Amazon EKS administrators to use the Amazon EKS API for cluster access management without having to switch to the local Kubernetes API to perform the last-mile AuthZ settings for cluster user permissions. This is also a better approach for automated processes—DevSecOps pipelines—that build and update Amazon EKS clusters.
Try cluster access management!
If you are looking for a way to move away from the aws-auth configMap while using standard Kubernetes AuthZ approaches, then you should try cluster access management. You can run both models in tandem, with a cutover based on your needs and schedule, for the least disruption to your Amazon EKS operations.
In a future, to-be-determined Kubernetes version of Amazon EKS, the aws-auth configMap will be removed as a supported authentication source, so migrating to access entries is strongly encouraged.
Check out our Containers Roadmap!
If you have ideas about how we can improve Amazon EKS and our other container services, then please use our Containers Roadmap and give us feedback and review our existing roadmap items.