A deep dive into Amazon EKS Hybrid Nodes


This blog was authored by Chris Splinter, Principal Product Manager, AWS Kubernetes; Elamaran Shanmugam, Sr. Container Specialist Solutions Architect, AWS; and Re Alvarez Parmar, Containers Specialist Solutions Architect, AWS.

We are excited to announce the general availability of Amazon EKS Hybrid Nodes, a new feature for Amazon Elastic Kubernetes Service (Amazon EKS) that we launched at re:Invent 2024. With EKS Hybrid Nodes, users can use their existing on-premises and edge infrastructure as nodes in Amazon EKS clusters, creating a unified Kubernetes management experience across cloud, on-premises, and edge environments. This supports a variety of use cases, such as modernization, machine learning (ML), media streaming, and manufacturing workloads.

Users running Kubernetes in the AWS Cloud for new and modernized applications frequently want to extend these capabilities to manage applications running in on-premises and edge environments for low latency, data dependency, data sovereignty, regulatory, or policy reasons. Historically, users looking to run Kubernetes in on-premises data centers or edge environments were forced to run and operate open source Kubernetes or similar self-managed Kubernetes solutions. Self-managing Kubernetes on-premises is complex and adds operational overhead, ultimately slowing down innovation and modernization plans.

EKS Hybrid Nodes removes that complexity and overhead by allowing users to connect their existing on-premises and edge capacity as nodes to a managed Amazon EKS control plane in the cloud. This streamlines running Kubernetes on-premises and enables a consistent on-premises operational experience using the same EKS clusters, features, integrations, and tools that users are used to for running workloads in the cloud.

Overview

To use EKS Hybrid Nodes, you need connectivity between your on-premises network and the Amazon Virtual Private Cloud (Amazon VPC) you are using for your EKS cluster. You can use AWS Direct Connect, AWS Site-to-Site VPN, or your own VPN solution to create a private connection between your EKS cluster and hybrid nodes. EKS Hybrid Nodes reuses the existing mechanism in Amazon EKS for control plane to worker node communication. Therefore, you can have nodes running on Amazon Elastic Compute Cloud (Amazon EC2) instances in the AWS Region and hybrid nodes running in your on-premises environment in the same EKS cluster. EKS Hybrid Nodes uses a “bring your own infrastructure” approach where you are responsible for provisioning and managing the infrastructure and operating systems that you use for hybrid nodes. You can use your existing bare metal servers or virtualized infrastructure as the compute for hybrid nodes, and today Amazon Linux 2023, Ubuntu, and Red Hat Enterprise Linux (RHEL) are the operating systems that are supported by AWS for compatibility with hybrid nodes.

EKS Hybrid Nodes can be installed and connected to your EKS cluster with the EKS Hybrid Nodes CLI (nodeadm), which you run on each on-premises host. Alternatively, you can include nodeadm and the hybrid node dependencies in your golden operating system images to automate hybrid node bootstrap, similar to the mechanism used for the Amazon EKS-optimized Amazon Machine Images (AMIs) for EC2 instances in the cloud. When hybrid nodes are connected to your EKS cluster, they use temporary AWS Identity and Access Management (IAM) credentials provisioned through AWS Systems Manager hybrid activations or IAM Roles Anywhere to securely authenticate with your EKS control plane.

EKS Hybrid Nodes also supports several Amazon EKS add-ons and features for cluster networking, observability, and pod credentials, including CoreDNS, kube-proxy, Amazon Managed Service for Prometheus agent-less scrapers, AWS Distro for OpenTelemetry, the CloudWatch Observability Agent, IAM Roles for Service Accounts (IRSA), and EKS Pod Identities. For pod networking, the Cilium and Calico Container Networking Interfaces (CNIs) are supported for use with hybrid nodes.

For detailed information on how EKS Hybrid Nodes works, see the EKS Hybrid Nodes user guide.

Architecture

Before using EKS Hybrid Nodes, you must understand the networking flows between the cloud-hosted Amazon EKS control plane and the hybrid nodes running in your environment. The node and pod networks that you use for hybrid nodes and the resources running on them must use IPv4 RFC-1918 Classless Inter-Domain Routing (CIDR) blocks. You pass the CIDRs for these on-premises node and pod networks when you create your hybrid nodes-enabled EKS cluster. Your VPC and on-premises routing tables must be configured with these networks for the end-to-end hybrid nodes traffic flow. For more information on the networking requirements for hybrid nodes, see Prepare networking for hybrid nodes in the Amazon EKS user guide.
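
For illustration, the following is a minimal sketch of creating such a cluster with the AWS CLI, where the remote network configuration carries the on-premises node and pod CIDRs. The account ID, role, and subnet values are placeholders, and the walkthrough later in this post uses CloudFormation instead.

aws eks create-cluster \
  --name my-hybrid-cluster \
  --role-arn arn:aws:iam::111122223333:role/EKSHybridClusterRole \
  --resources-vpc-config subnetIds=subnet-0b65cdc4812345678,subnet-02f526cd012345678 \
  --access-config authenticationMode=API_AND_CONFIG_MAP \
  --remote-network-config '{"remoteNodeNetworks":[{"cidrs":["10.80.150.0/24"]}],"remotePodNetworks":[{"cidrs":["10.80.2.0/23"]}]}'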

Figure 1: Hybrid networking architecture for EKS Hybrid Nodes

The following table summarizes the key parts of the networking architecture for hybrid nodes.

Environment | Component | Description
----------- | --------- | -----------
AWS Region | EKS cluster configuration | The RemoteNodeNetwork of the EKS cluster configuration is necessary for EKS control plane to kubelet communication for Kubernetes operations, such as logs, exec, and port-forward.
AWS Region | EKS cluster configuration | The RemotePodNetwork of the EKS cluster configuration is necessary for EKS control plane to webhook communication. Configuring your RemotePodNetwork is recommended, but if you are not running webhooks on hybrid nodes, it is not strictly necessary.
AWS Region | EKS cluster VPC | Your VPC routing table must have routes for your RemoteNodeNetwork and RemotePodNetwork to the gateway you are using for traffic exiting the VPC, commonly an AWS Transit Gateway or a Virtual Private Gateway (VGW).
AWS Region | EKS cluster security group | You must have inbound and outbound rules that allow traffic for the RemoteNodeNetwork and RemotePodNetwork.
On-premises | On-premises firewall | You must allow inbound access from the EKS control plane and outbound access for the RemoteNodeNetwork and RemotePodNetwork.
On-premises | On-premises router | Your on-premises router must be able to route traffic to your RemoteNodeNetwork and RemotePodNetwork.
On-premises | Container Networking Interface (CNI) | The overlay network CIDR that you configure in your CNI must match the RemotePodNetwork. If you are using host networking, your node CIDR must match the RemoteNodeNetwork.
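
To illustrate the VPC routing row in the preceding table, the following sketch adds routes for the remote networks through a Transit Gateway using the AWS CLI; the route table and transit gateway IDs are hypothetical placeholders.

# Route the on-premises node and pod CIDRs through the transit gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.80.150.0/24 \
  --transit-gateway-id tgw-0123456789abcdef0

aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.80.2.0/23 \
  --transit-gateway-id tgw-0123456789abcdef0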

Walkthrough

In this walkthrough, we set up IAM credentials for hybrid nodes using Systems Manager hybrid activations, create a hybrid nodes-enabled EKS cluster, connect hybrid nodes to the EKS cluster, and install the Cilium CNI to make hybrid nodes ready to run applications. This walkthrough uses the AWS Command Line Interface (AWS CLI) and AWS CloudFormation to create the EKS cluster, but you can alternatively use other interfaces, including the AWS Management Console, the eksctl CLI, or Terraform.

Prerequisites

The following prerequisites are necessary to complete this solution:

  • Hybrid network connectivity between your on-premises environment and AWS
  • Infrastructure in the form of physical or virtual machines
  • Operating system that is compatible with hybrid nodes
  • AWS CLI version 2.22.8 or later (or version 1.36.13 or later of AWS CLI v1) with appropriate credentials
  • eksctl CLI
  • The IAM user running the steps in this walkthrough must have IAM permissions for the following actions: iam:CreatePolicy, iam:CreateRole, iam:AttachRolePolicy, ssm:CreateActivation, and eks:CreateCluster

Prepare credentials for hybrid nodes

Like EKS nodes running on EC2 instances in the cloud, hybrid nodes need an IAM role to connect to the EKS control plane. Then, the IAM role for hybrid nodes is used with Systems Manager hybrid activations or IAM Roles Anywhere to provision temporary IAM credentials. Generally, Systems Manager hybrid activations are recommended if you do not have existing Public Key Infrastructure (PKI) and certificates for your on-premises environment. If you do have existing PKI and certificates, then you can use these with IAM Roles Anywhere.

The IAM role you use for hybrid nodes must have the following permissions.

  • Permissions for the hybrid nodes CLI (nodeadm) to use the eks:DescribeCluster action to gather information about the cluster when connecting hybrid nodes to it. If you do not allow the eks:DescribeCluster action, then you must pass your Kubernetes API endpoint, cluster CA bundle, and service IPv4 CIDR in the node configuration that you pass to nodeadm when you run nodeadm init.
  • Permissions for the kubelet to use container images from Amazon Elastic Container Registry (Amazon ECR), as defined in the AmazonEC2ContainerRegistryPullOnly policy.
  • If using Systems Manager, then permissions for nodeadm init to use Systems Manager hybrid activations, as defined in the AmazonSSMManagedInstanceCore policy, and permissions to use the ssm:DeregisterManagedInstance and ssm:DescribeInstanceInformation actions for nodeadm uninstall to deregister instances. A sketch of a matching trust policy follows this list.
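
For reference, the following is a minimal sketch of the trust policy for the hybrid nodes IAM role when using Systems Manager hybrid activations; the account ID is a placeholder, and the condition narrows which sources can assume the role. The CloudFormation template used below creates an equivalent policy for you.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "111122223333" }
      }
    }
  ]
}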

In these steps, we use AWS CLI and CloudFormation to create the IAM role for hybrid nodes with the permissions outlined previously. Then, we use the AWS CLI to create a Systems Manager hybrid activation with the hybrid node’s IAM role.

First, download the CloudFormation template to the machine where you run the AWS CLI.

curl -OL 'https://raw.githubusercontent.com/aws/eks-hybrid/refs/heads/main/example/hybrid-ssm-cfn.yaml'

By default, the CloudFormation template scopes down the permissions for the ssm:DeregisterManagedInstance action such that the hybrid node’s IAM role can only deregister instances that are associated with the hybrid activation that you create for the cluster. The SSMDeregisterConditionTagKey and SSMDeregisterConditionTagValue used in the permissions for the hybrid node’s IAM role must correspond to tags that you apply when you create your Systems Manager hybrid activation, which is shown in a subsequent step.
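
The resulting statement in the role's permissions policy looks roughly like the following sketch, where the tag key and value match the parameters below and the account ID is a placeholder.

{
  "Effect": "Allow",
  "Action": "ssm:DeregisterManagedInstance",
  "Resource": "arn:aws:ssm:*:111122223333:managed-instance/*",
  "Condition": {
    "StringEquals": {
      "ssm:resourceTag/EKSClusterARN": "arn:aws:eks:us-west-2:111122223333:cluster/my-hybrid-cluster"
    }
  }
}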

# Define environment variables
EKS_CLUSTER_NAME=my-hybrid-cluster
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
AWS_REGION=${AWS_REGION:=us-west-2}
EKS_CLUSTER_ARN=arn:aws:eks:${AWS_REGION}:${AWS_ACCOUNT_ID}:cluster/${EKS_CLUSTER_NAME}
ROLE_NAME=AmazonEKSHybridNodesRole

# Create cfn-ssm-parameters.json
cat << EOF > cfn-ssm-parameters.json
{
  "Parameters": {
    "RoleName": "$ROLE_NAME",
    "SSMDeregisterConditionTagKey": "EKSClusterARN",
    "SSMDeregisterConditionTagValue": "$EKS_CLUSTER_ARN"
  }
}
EOF

Deploy the CloudFormation stack. Replace AWS_REGION with your desired AWS Region where the hybrid activation is created. The Region for the hybrid activation must be the same as the Region for your EKS cluster.

aws cloudformation deploy \
  --stack-name EKSHybridRoleSSM \
  --region ${AWS_REGION} \
  --template-file hybrid-ssm-cfn.yaml \
  --parameter-overrides file://cfn-ssm-parameters.json \
  --capabilities CAPABILITY_NAMED_IAM

After creating the hybrid nodes IAM role, the next step is to create a Systems Manager hybrid activation with the role. By default, Systems Manager hybrid activations are active for 24 hours, and the maximum expiration is 30 days. You can specify an --expiration-date when you create your hybrid activation in timestamp format, such as 2024-08-01T00:00:00. When you use Systems Manager as your credential provider, the node name for your hybrid nodes is not configurable and is auto-generated by Systems Manager in the format mi-012345678abcdefgh. You can view and manage the Systems Manager managed instances in the Systems Manager console under Fleet Manager.

Use the following command to create the Systems Manager hybrid activation, passing the IAM role created in the previous step in the --iam-role flag. Note the tags that we apply when we create the hybrid activation, which correspond to the deregistration condition configured for the hybrid node’s IAM role created in the previous step. Make sure to save the output of the Systems Manager create-activation command, which contains the activation code and activation ID that you use in a subsequent step when connecting hybrid nodes to your EKS cluster.

# Define environment variables
EKS_CLUSTER_NAME=my-hybrid-cluster
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
AWS_REGION=${AWS_REGION:=us-west-2}
EKS_CLUSTER_ARN=arn:aws:eks:${AWS_REGION}:${AWS_ACCOUNT_ID}:cluster/${EKS_CLUSTER_NAME}
ROLE_NAME=AmazonEKSHybridNodesRole

# Create SSM hybrid activation
aws ssm create-activation \
  --region ${AWS_REGION} \
  --default-instance-name eks-hybrid-nodes \
  --description "Activation for EKS hybrid nodes" \
  --iam-role ${ROLE_NAME} \
  --tags Key=EKSClusterARN,Value=${EKS_CLUSTER_ARN} \
  --registration-limit 5

Create the EKS cluster for hybrid nodes

In these steps, we use the AWS CLI and CloudFormation to create the EKS cluster IAM role and the hybrid nodes-enabled EKS cluster.

First, download the CloudFormation template to the machine where you run the AWS CLI.

curl -OL 'https://raw.githubusercontent.com/aws/eks-hybrid/refs/heads/main/example/hybrid-eks-cfn.yaml'

By default, the CloudFormation template creates the EKS cluster with private endpoint connectivity, which means the Kubernetes API endpoint can only be accessed through your VPC.

If you want to have public endpoint connectivity, then you can set ClusterEndpointConnectivity to Public in your CloudFormation parameters file.
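
For example, the parameters file created in the next step could carry this additional entry (shown here as a fragment; the surrounding keys appear in the full file below).

"ClusterEndpointConnectivity": "Public"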

In the following example CloudFormation parameters file, we use existing subnets that meet the hybrid nodes requirements. We are using subnets in a VPC that have an attachment with a Transit Gateway that is connected to the on-premises environment through Direct Connect. Amazon EKS attaches Elastic Network Interfaces (ENIs) to the provided subnets for the EKS control plane to VPC connectivity. The CloudFormation template also creates the security group that allows traffic between the EKS control plane and the RemoteNodeCIDR and RemotePodCIDR ranges.

Replace the values in the cfn-eks-parameters.json file with the values for your own environment.

cat << EOF > cfn-eks-parameters.json
{
  "Parameters": {
    "ClusterName": "my-hybrid-cluster",
    "ClusterRoleName": "EKSHybridClusterRole",
    "SubnetId1": "subnet-0b65cdc4812345678",
    "SubnetId2": "subnet-02f526cd012345678",
    "VpcId": "vpc-0a5f3bee960d6ec71",
    "RemoteNodeCIDR": "10.80.150.0/24",
    "RemotePodCIDR": "10.80.2.0/23",
    "K8sVersion": "1.31"
  }
}
EOF

Deploy the CloudFormation stack. Replace AWS_REGION with your desired AWS Region where the cluster is created.

aws cloudformation deploy \
  --stack-name EKSHybridCluster \
  --region ${AWS_REGION} \
  --template-file hybrid-eks-cfn.yaml \
  --parameter-overrides file://cfn-eks-parameters.json \
  --capabilities CAPABILITY_NAMED_IAM

Cluster provisioning takes several minutes. You can check the status of your stack with the following command.

aws cloudformation describe-stacks \
  --stack-name EKSHybridCluster \
  --region ${AWS_REGION} \
  --query 'Stacks[].StackStatus'

When the EKS cluster is created, create an Amazon EKS access entry with the IAM role for your hybrid nodes to enable your nodes to join the cluster. For more information, see Prepare cluster access for hybrid nodes in the Amazon EKS user guide.

# Define environment variables
EKS_CLUSTER_NAME=my-hybrid-cluster
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
ROLE_NAME=AmazonEKSHybridNodesRole

# Create access entry with type HYBRID_LINUX
# Note: --principal-arn expects the full IAM role ARN, not the bare role name
aws eks create-access-entry \
  --cluster-name ${EKS_CLUSTER_NAME} \
  --principal-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ROLE_NAME} \
  --type HYBRID_LINUX

Install and connect hybrid nodes to the EKS cluster

After creating the IAM role for hybrid nodes, the Systems Manager hybrid activation, and the hybrid nodes-enabled EKS cluster, you’re ready to create and attach hybrid nodes to your cluster. You can use any x86_64 or ARM physical or virtual machine (VM) as long as it satisfies the previous prerequisites. The hybrid nodes CLI, called nodeadm, is designed to streamline the lifecycle management of hybrid nodes, including installation, configuration, and registration. You may already be familiar with nodeadm if you’ve built custom AMIs for Amazon EKS based on the AL2023 Amazon EKS-optimized AMIs. Note that the cloud version of nodeadm used in the AL2023 Amazon EKS-optimized AMIs is different from the hybrid nodes nodeadm version, so use the appropriate version based on your deployment target.
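
Before installing, fetch the hybrid nodes build of nodeadm onto each host. As a sketch, the following assumes an x86_64 host and the download location documented in the EKS Hybrid Nodes user guide at the time of writing; verify the URL and checksum against the current guide.

# Download the hybrid nodes nodeadm binary and put it on the PATH
curl -OL 'https://hybrid-assets.eks.amazonaws.com/releases/latest/bin/linux/amd64/nodeadm'
chmod +x nodeadm
sudo mv nodeadm /usr/local/bin/nodeadm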

The hybrid nodes CLI performs two functions of the bootstrap process. First, it installs the necessary dependencies on the host (kubelet, containerd, Systems Manager agent/IAM Roles Anywhere tool, etc.). Second, it configures and starts the dependencies so the node can join the EKS cluster. Amazon EKS provides Packer templates to create Ubuntu and RHEL images for hybrid nodes. If you’re going to create hybrid nodes repeatedly or want to automate the bootstrap process, then using prebuilt images can save time and remove the need to pull the dependencies as separate processes on each individual host.

To install the hybrid nodes dependencies, run the nodeadm install command. In the following example, we use Kubernetes version 1.31 and ssm as the credentials provider. EKS Hybrid Nodes support the same Kubernetes versions as Amazon EKS, including Kubernetes versions under standard and extended support.

Note that nodeadm must be run with a user that has root/sudo privileges on the host.

sudo nodeadm install 1.31 --credential-provider ssm

When your node has the necessary dependencies, create a nodeConfig.yaml with your configuration. The node configuration file includes two key details: the cluster information and the mechanism used for credentials (Systems Manager hybrid activations or IAM Roles Anywhere).

The following is an example of a nodeConfig.yaml file for hybrid nodes that use Systems Manager hybrid activations. Replace SSM_ACTIVATION_CODE and SSM_ACTIVATION_ID with the values from the output of the previous Systems Manager create activation step.

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-hybrid-cluster
    region: us-west-2
  hybrid:
    ssm:
      activationCode: SSM_ACTIVATION_CODE
      activationId: SSM_ACTIVATION_ID

To connect your hybrid nodes to your EKS cluster, run the nodeadm init command with your nodeConfig.yaml.

sudo nodeadm init -c file://nodeConfig.yaml

If the preceding command completes successfully and there are no errors in the kubelet logs, then your hybrid node has joined your EKS cluster. You can verify this in the EKS console by navigating to the Compute tab for your cluster (make sure your IAM principal has permissions to view the cluster) or with kubectl get nodes.

NAME                   STATUS     ROLES    AGE   VERSION
mi-036ecab1709d75ee1   NotReady   <none>   1h    v1.31.2-eks-94953ac

If you haven’t already installed a compatible CNI in the cluster, then the nodes you connect remain in the NotReady state until a CNI is installed and running.

Install a CNI for hybrid nodes

Cilium and Calico are supported as the CNIs for hybrid nodes. You can manage these CNIs with your choice of tooling such as Helm. The Amazon VPC CNI is not compatible with hybrid nodes, and the VPC CNI is configured with anti-affinity for the eks.amazonaws.com/compute-type: hybrid label by default. For more information on operating Cilium and Calico with hybrid nodes, see Configure a CNI for hybrid nodes in the Amazon EKS user guide.

To make sure that the CNI DaemonSet only gets scheduled on hybrid nodes, you can configure affinity for the eks.amazonaws.com/compute-type=hybrid label, which is automatically applied by nodeadm when hybrid nodes join the cluster. This label enables control over workload placement, allowing you to determine which components, including the CNI, should or should not run on hybrid nodes.

The following cilium-values.yaml shows Helm values for installing Cilium. Note the affinity for the hybrid nodes label and the IP Address Management (IPAM) settings. In the example, we use the cluster-pool IPAM mode, where Cilium allocates per-node pod CIDRs from the clusterPoolIPv4PodCIDRList; this list should correspond to the RemotePodNetwork CIDR that you specified during EKS cluster creation. Replace 10.80.2.0/23 in the example with the value for your RemotePodNetwork. In this example, clusterPoolIPv4MaskSize is set to 25, which allows for 128 IP addresses per node; carving /25 pools out of a /23 pod CIDR therefore supports at most four per-node pools.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: In
              values:
                - hybrid
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4MaskSize: 25
    clusterPoolIPv4PodCIDRList:
      - 10.80.2.0/23
operator:
  unmanagedPodWatcher:
    restart: false

After creating the cilium-values.yaml file with your settings, you can install Cilium using Helm.

CILIUM_VERSION=1.16.4

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
  --version ${CILIUM_VERSION} \
  --namespace kube-system \
  --values cilium-values.yaml

After deploying the CNI, run kubectl get nodes again and verify that the node is in a Ready state.

NAME                   STATUS   ROLES    AGE   VERSION
mi-036ecab1709d75ee1   Ready    <none>   1h    v1.31.2-eks-94953ac

Ingress and load balancing for workloads running on hybrid nodes

For many use cases, workloads running on the Kubernetes cluster need to be exposed outside of the cluster for external resources to access them. In Kubernetes this is commonly accomplished by exposing Services through ingress and load balancers. With hybrid nodes, there are two general paths for application traffic. The first is for application traffic that originates in a Region and contacts workloads running on-premises on hybrid nodes. The second is for application traffic that originates from the on-premises environment and stays local to the on-premises environment.

For the first category of AWS Region-originating application traffic, you can use the AWS Load Balancer Controller and Application Load Balancer (ALB) or Network Load Balancer (NLB) with the target type ip for workloads on hybrid nodes connected with Direct Connect or Site-to-Site VPN. As the AWS Load Balancer Controller uses webhooks, you must configure your RemotePodNetwork when creating your EKS cluster if you run the AWS Load Balancer Controller on your hybrid nodes.
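
As a sketch of what this looks like in practice, the following Service asks the AWS Load Balancer Controller for an internal NLB with IP targets; the app name, selector, and ports are hypothetical.

apiVersion: v1
kind: Service
metadata:
  name: demo-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080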

For the second category of local application traffic, there are a variety of partner and Kubernetes community options available for use with hybrid nodes. When navigating the options, consider existing technologies in your on-premises environment and your application requirements. Common options for on-premises environments include Cilium (BGP or L2-aware load balancing), Calico (BGP load balancing), MetalLB, NGINX, HAProxy, Apache APISIX, Emissary Ingress, and Citrix Ingress. There are also service mesh technologies such as Istio that provide similar capabilities among other functionality. Generally, Amazon EKS and hybrid nodes are 100% upstream Kubernetes compatible, and most Kubernetes options for ingress and load balancing can be used for your applications running on hybrid nodes.
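
For example, if you use Cilium’s own load balancer IPAM with L2 announcements for local traffic, a minimal sketch might look like the following. The pool CIDR is a hypothetical on-premises range, the resource shapes assume the Cilium 1.16 CRDs, and L2 announcements must also be enabled in the Cilium Helm values (l2announcements.enabled=true).

apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: on-prem-pool
spec:
  blocks:
    - cidr: 10.80.160.0/28
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: on-prem-l2
spec:
  loadBalancerIPs: true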

Cleaning up

You can remove the resources created in the preceding steps to avoid incurring charges with the following commands. If you used different CloudFormation stack names, then replace EKSHybridRoleSSM and EKSHybridCluster with your stack names in the following commands.

aws cloudformation delete-stack --stack-name EKSHybridCluster
aws cloudformation delete-stack --stack-name EKSHybridRoleSSM

# To remove the hybrid nodes components from your hosts
sudo nodeadm uninstall --skip node-validation,pod-validation

Launch partners

A range of partners, including Independent Software Vendors (ISVs), Independent Hardware Vendors (IHVs), and operating system vendors (OSVs), participated in the hybrid nodes launch. We are excited to work with them and within the Kubernetes community. The following is the list of partners who participated in this launch. Several of the ISVs listed validated their software solutions through Conformitron, a framework to validate third-party software with Amazon EKS and EKS Anywhere, extending their GitOps-driven integrations to EKS Hybrid Nodes. Users can deploy the validated solutions that these partners provide to operate their hybrid nodes, addressing common production readiness areas such as secrets management, storage, and maintenance of third-party components across a distributed fleet of devices.

  • AccuKnox (ISV) is a cybersecurity company focused on providing zero-trust security solutions for cloud-native and Kubernetes environments. Their platform offers advanced runtime security, network segmentation, and compliance automation for Kubernetes deployments.
  • AMD (IHV) is a leading semiconductor company that has been making significant strides in the data center and cloud computing space with its EPYC processors. These high-performance CPUs are designed to deliver excellent price-performance ratios for compute-intensive workloads, making them an attractive option for Kubernetes deployments.
  • Aqua (ISV) is a leading provider of cloud-native security solutions, offering comprehensive protection for containerized and serverless environments. Their platform provides advanced security features for Kubernetes deployments, including runtime protection, vulnerability scanning, and compliance enforcement.
  • CIQ (Ctrl IQ) (OSV) is a company specializing in high-performance computing (HPC) solutions and enterprise support for Rocky Linux. They provide expertise in containerization and Kubernetes orchestration, particularly for scientific and technical computing workloads. CIQ’s solutions can help organizations optimize their Kubernetes deployments on AWS, especially for compute-intensive applications and HPC environments.
  • Continent 8 Technologies (IHV) is a global IT managed services provider specializing in secure hosting and connectivity solutions. Although not primarily a hardware vendor, they offer cloud services and infrastructure that can support Kubernetes deployments. Continent 8’s expertise in regulated markets, combined with their global network, can complement AWS services to provide robust and compliant hosting environments for Kubernetes clusters, particularly for industries with strict regulatory requirements.
  • Dell Technologies (IHV), a leading global technology company, offers a wide range of hardware solutions that can support Kubernetes deployments, including servers, storage, and networking equipment. Their PowerEdge servers and VxRail hyperconverged infrastructure provide robust platforms for running containerized workloads. Dell’s hardware solutions can be integrated with AWS services to create powerful hybrid cloud environments, enabling seamless Kubernetes deployments across on-premises and cloud infrastructure.
  • Dynatrace (ISV) is a leading software intelligence platform that provides application performance monitoring (APM) and observability solutions for cloud environments, including Kubernetes. Their artificial intelligence (AI)-powered platform offers deep visibility into containerized applications, microservices, and Kubernetes clusters running on AWS.
  • HashiCorp (ISV) offers a suite of powerful open source tools that enhance Kubernetes deployments on AWS. Their products, including Terraform for infrastructure as code, Vault for secrets management, and Consul for service networking, integrate seamlessly with Amazon EKS and other AWS services.
  • Kong (ISV) is a leading API gateway and service connectivity platform that offers robust solutions for Kubernetes environments. Their Kubernetes Ingress Controller and API management tools integrate seamlessly with Amazon EKS and other AWS services, providing advanced traffic control, security, and observability for microservices architectures.
  • Kubecost (ISV) is a software solution designed to provide real-time cost visibility and optimization for Kubernetes environments. Their platform offers detailed cost allocation, monitoring, and forecasting for containerized workloads running on Amazon EKS and other Kubernetes clusters.
  • NetApp (ISV), a leader in cloud data services and storage solutions, offers powerful tools for managing persistent storage in Kubernetes environments. Their Astra product line provides data management capabilities for containerized applications, including snapshots, backups, and migration features for Kubernetes workloads running on AWS.
  • New Relic (ISV) is a leading observability platform that provides comprehensive monitoring and performance management solutions for Kubernetes environments. Their platform offers deep visibility into containerized applications, microservices, and Kubernetes clusters running on Amazon EKS and other AWS services.
  • Nirmata (ISV) is a Kubernetes management platform that streamlines the deployment, operation, and governance of Kubernetes clusters across multiple environments. Their solution provides policy-based automation for Kubernetes, enabling organizations to enforce security and compliance standards consistently across Amazon EKS and other Kubernetes deployments.
  • PerfectScale (ISV) is an AI-powered optimization platform designed to enhance resource usage and cost efficiency in Kubernetes environments. Their solution provides intelligent recommendations for right-sizing containers and optimizing cluster resources on Amazon EKS and other Kubernetes deployments.
  • Pulumi (ISV) is a modern infrastructure as code platform that enables developers to define and manage cloud resources using familiar programming languages. Their solution provides powerful tools for deploying and managing Kubernetes clusters on AWS, including support for Amazon EKS and other managed Kubernetes services.
  • Solo.io (ISV) is a leading provider of API infrastructure solutions, specializing in service mesh and API gateway technologies for cloud-native environments. Their Gloo Platform offers advanced traffic management, security, and observability features for Kubernetes deployments on Amazon EKS and other AWS services.
  • Spectro Cloud (ISV) offers an innovative Kubernetes management platform that enables organizations to deploy and operate Kubernetes clusters across diverse environments, including AWS. Their solution provides a unique approach to cluster management, allowing teams to create customized Kubernetes stacks that combine the best of both worlds: the flexibility of open source and the manageability of enterprise products.
  • Sysdig (ISV) is a powerful container intelligence platform that provides deep visibility and security for Kubernetes environments, enabling DevOps teams to monitor, troubleshoot, and secure their containerized applications with ease.
  • Tetrate (ISV) is a leading provider of service mesh solutions, offering enterprise-grade infrastructure for modern, microservices-based applications. Their flagship product, Tetrate Service Bridge, extends Istio’s capabilities to provide comprehensive application connectivity, security, and observability for Kubernetes environments on Amazon EKS and across multi-cluster, multi-cloud deployments.

Conclusion

Running workloads with Kubernetes on-premises or at the edge typically takes time, effort, and maintenance to define and integrate tooling and processes with open source Kubernetes. This adds operational burden on teams and creates silos between the on-premises and cloud environments. EKS Hybrid Nodes reduces this toil and brings your on-premises deployments more in line with how you run workloads in the cloud.

Whether you’re looking to modernize your on-premises applications, use existing on-premises hardware, or meet data residency requirements by keeping data in a particular country, you can use EKS Hybrid Nodes to efficiently run your on-premises workloads without having to deal with the operational overhead of managing Kubernetes control planes.

To learn more and get started with EKS Hybrid Nodes, visit the EKS Hybrid Nodes User Guide and check out the re:Invent 2024 session (KUB205) where we cover how hybrid nodes works, its features, and best practices.
