Amazon EKS Anywhere
EKS Anywhere provides a means of managing Kubernetes clusters using the same operational excellence and practices that Amazon Web Services uses for its Amazon Elastic Kubernetes Service (Amazon EKS). Based on
EKS Distro, EKS Anywhere adds methods for deploying, using, and managing Kubernetes clusters that run in your own data centers. Its goal is to include full lifecycle management of multiple Kubernetes clusters that are capable of operating completely independently of any AWS services.
The tenets of the EKS Anywhere project are:
- Simple: Make using a Kubernetes distribution simple and boring (reliable and secure).
- Opinionated Modularity: Provide opinionated defaults about the best components to include with Kubernetes, but give customers the ability to swap them out.
- Open: Provide open source tooling backed, validated, and maintained by Amazon.
- Ubiquitous: Enable customers and partners to integrate a Kubernetes distribution with the most common tooling.
- Stand Alone: Provided for use anywhere without AWS dependencies.
- Better with AWS: Enable AWS customers to easily adopt additional AWS services.
1 - Overview
Provides an overview of EKS Anywhere
EKS Anywhere creates a Kubernetes cluster on premises for a chosen provider.
Supported providers include Bare Metal (via Tinkerbell), CloudStack, and vSphere.
To manage that cluster, you can run cluster create and delete commands from an Ubuntu or Mac Administrative machine.
Creating a cluster involves downloading EKS Anywhere tools to an Administrative machine, then running the eksctl anywhere create cluster
command to deploy that cluster to the provider.
A temporary bootstrap cluster runs on the Administrative machine to direct the target cluster creation.
For a detailed description, see Cluster creation workflow
.
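In its simplest form, the flow on the Administrative machine looks like the following (vSphere is used here only as an illustrative provider; the provider-specific steps are covered in Getting started):
eksctl anywhere generate clusterconfig mgmt --provider vsphere > mgmt.yaml
# edit mgmt.yaml for your environment, then:
eksctl anywhere create cluster -f mgmt.yaml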
Here’s a diagram that explains the process visually.
Figure: EKS Anywhere Create Cluster
2 - Getting started
The Getting started section includes information on starting to set up your own EKS Anywhere local or production environment.
EKS Anywhere can be deployed as a simple, unsupported local environment or as a production-quality environment that can become a supported on-premises Kubernetes platform.
This section lists the different ways to set up and run EKS Anywhere.
When you install EKS Anywhere, choose an installation type based on: ease of maintenance, security, control, available resources, and expertise required to operate and manage a cluster.
Install EKS Anywhere
To create an EKS Anywhere cluster, you’ll need to download the command line tool that is used to create and manage a cluster.
You can install it using the installation guide.
Local environment
If you just want to try out EKS Anywhere, there is a single-system method for installing and running EKS Anywhere using Docker.
See EKS Anywhere local environment
.
Production environment
When evaluating a solution for a production environment, consider deploying EKS Anywhere on one of the providers listed on the Create production cluster page.
2.1 - Install EKS Anywhere
EKS Anywhere will create and manage Kubernetes clusters on multiple providers.
Currently we support creating development clusters locally using Docker and production clusters from providers listed on the Create production cluster
page.
Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries.
From there, you create the cluster for your chosen provider.
See Create cluster workflow
for an overview of the cluster creation process.
To create an EKS Anywhere cluster you will need eksctl
and the eksctl-anywhere
plugin.
This will let you create a cluster in multiple providers for local development or production workloads.
NOTE: For the Snow provider, the Snow devices come with a pre-configured Admin AMI, which can be used to create an Admin instance with all the necessary binaries, dependencies, and artifacts to create an EKS Anywhere cluster. Skip the steps below and see Create Snow production cluster
to get started with EKS Anywhere on Snow.
Administrative machine prerequisites
- Docker 20.x.x
- Mac OS 10.15 / Ubuntu 20.04.2 LTS (See Note on newer Ubuntu versions)
- 4 CPU cores
- 16GB memory
- 30GB free disk space
- Administrative machine must be on the same Layer 2 network as the cluster machines (Bare Metal provider only).
If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation, as described here.
If you are using Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1. For details, see Troubleshooting Guide.
If you are using Docker Desktop, note that:
- For EKS Anywhere Bare Metal, Docker Desktop is not supported.
- For EKS Anywhere vSphere, if you are using Mac OS Docker Desktop 4.4.2 or newer, "deprecatedCgroupv1": true must be set in ~/Library/Group\ Containers/group.com.docker/settings.json.
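For reference, the relevant entry in settings.json would look something like the sketch below (the other fields in the file are left unchanged; Docker Desktop typically needs to be restarted for the change to take effect):
{
  ...
  "deprecatedCgroupv1": true,
  ...
}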
Via Homebrew (macOS and Linux)
Warning
EKS Anywhere only works on computers with x86-64 (amd64) processor architecture.
It currently will not work on computers with Apple Silicon or other Arm-based processors.
You can install eksctl
and eksctl-anywhere
with homebrew
.
This package will also install kubectl
and the aws-iam-authenticator
which will be helpful to test EKS Anywhere clusters.
brew install aws/tap/eks-anywhere
Manually (macOS and Linux)
Install the latest release of eksctl
.
The EKS Anywhere plugin requires eksctl
version 0.66.0 or newer.
curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
--silent --location \
| tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
Install the eksctl-anywhere
plugin.
RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL \
--silent --location \
| tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/
Install the kubectl
Kubernetes command line tool.
This can be done by following the instructions here
.
Or you can install the latest kubectl directly with the following.
export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo mv ./kubectl /usr/local/bin
sudo chmod +x /usr/local/bin/kubectl
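You can confirm that kubectl is installed and on your PATH with:
kubectl version --client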
Upgrade eksctl-anywhere
If you installed eksctl-anywhere
via homebrew you can upgrade the binary with
brew update
brew upgrade aws/tap/eks-anywhere
If you installed eksctl-anywhere
manually you should follow the installation steps to download the latest release.
You can verify your installed version with:
eksctl anywhere version
Deploy a cluster
Once you have the tools installed you can deploy a local cluster or production cluster in the next steps.
2.2 - Create local cluster
EKS Anywhere docker provider deployments
EKS Anywhere supports a Docker provider for development and testing use cases only.
This allows you to try EKS Anywhere on your local system before deploying to a supported provider to create either:
- A single, standalone cluster or
- Multiple management/workload clusters on the same provider, as described in Cluster topologies
.
The management/workload topology is recommended for production clusters and can be tried out here using both
eksctl
and GitOps
tools.
Create a standalone cluster
Prerequisite Checklist
To install the EKS Anywhere binaries and see system requirements please follow the installation guide
.
Steps
-
Generate a cluster config
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider docker > $CLUSTER_NAME.yaml
The command above creates a file named $CLUSTER_NAME.yaml (mgmt.yaml in this example) with the contents below in the path where it is executed.
The configuration specification is divided into two sections:
- Cluster
- DockerDatacenterConfig
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.25"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
  - count: 1
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
spec: {}
- Apart from the base configuration, you can add additional optional configuration to enable supported features:
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create Cluster:
Note: The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. Because of this, the CLI may show some warnings if proper authentication is not set up.
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
Example command output
Performing setup and validations
✅ validation succeeded {"validation": "docker Provider setup is valid"}
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Deleting bootstrap cluster
🎉 Cluster created!
----------------------------------------------------------------------------------
The Amazon EKS Anywhere Curated Packages are only available to customers with the
Amazon EKS Anywhere Enterprise Subscription
----------------------------------------------------------------------------------
Installing curated packages controller on management cluster
secret/aws-secret created
job.batch/eksa-auth-refresher created
Note: to install curated packages during cluster creation, use the --install-packages packages.yaml flag.
-
Use the cluster
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl get ns
Example command output
NAME STATUS AGE
capd-system Active 21m
capi-kubeadm-bootstrap-system Active 21m
capi-kubeadm-control-plane-system Active 21m
capi-system Active 21m
capi-webhook-system Active 21m
cert-manager Active 22m
default Active 23m
eksa-packages Active 23m
eksa-system Active 20m
kube-node-lease Active 23m
kube-public Active 23m
kube-system Active 23m
You can now use the cluster like you would any Kubernetes cluster.
Deploy the test application with:
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section
.
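As a quick check (assuming the manifest labels its pods app=hello-eks-a, as described in the verify workload documentation), you can confirm the pod is running with:
kubectl get pods -l app=hello-eks-a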
Create management/workload clusters
To try the recommended EKS Anywhere topology
, you can create a management cluster and one or more workload clusters on the same Docker provider.
Prerequisite Checklist
To install the EKS Anywhere binaries and see system requirements please follow the installation guide
.
Create a management cluster
-
Generate a management cluster config (named mgmt
for this example):
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider docker > eksa-mgmt-cluster.yaml
-
Modify the management cluster config (eksa-mgmt-cluster.yaml). You can use the same configuration described earlier or modify it to use GitOps, as shown below:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
  namespace: default
spec:
  bundlesRef:
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    name: bundles-1
    namespace: eksa-system
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  gitOpsRef:
    kind: FluxConfig
    name: mgmt
  kubernetesVersion: "1.25"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
  - count: 1
    name: md-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
  namespace: default
spec: {}
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: mgmt
  namespace: default
spec:
  branch: main
  clusterConfigPath: clusters/mgmt
  github:
    owner: <your github account, such as example for https://github.com/example>
    personal: true
    repository: <your github repo, such as test for https://github.com/example/test>
  systemNamespace: flux-system
---
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
-
Create cluster
eksctl anywhere create cluster \
  -f eksa-mgmt-cluster.yaml
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
  kubernetesVersion: "1.25"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
...
Create separate workload clusters
Follow these steps to have your management cluster create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider docker > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
NOTE: Ensure workload cluster object names (Cluster
, DockerDatacenterConfig
, etc.) are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster.
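For example, the management cluster mgmt would be referenced in eksa-w01-cluster.yaml the same way it is in the provider sections later in this document:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  managementCluster:
    name: mgmt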
-
Create a workload cluster in one of the following ways:
-
GitOps: See Manage separate workload clusters with GitOps
-
Terraform: See Manage separate workload clusters with Terraform
-
eksctl CLI: Useful for temporary cluster configurations. To create a workload cluster with eksctl
, run:
eksctl anywhere create cluster \
  -f eksa-w01-cluster.yaml \
  --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
To check the workload cluster, get the workload cluster credentials and run a test workload:
-
If your workload cluster was created with eksctl
,
change your credentials to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
If your workload cluster was created with GitOps or Terraform, you can get credentials and run the test application as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
NOTE: For Docker, you must modify the server field of the kubeconfig file by replacing the IP with 127.0.0.1 and using the port exposed by the workload cluster’s load balancer.
The port’s value can be found by running docker ps and checking the workload cluster’s load balancer.
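For example, on a Docker (CAPD-based) cluster the load balancer typically runs in a container named <cluster-name>-lb (confirm the exact name with docker ps). You could look up its host port mapping and then point the kubeconfig at it, roughly like this:
docker ps --filter name=w01-lb --format '{{.Names}} {{.Ports}}'
# example output: w01-lb 0.0.0.0:42103->6443/tcp
# then set the server field in w01.kubeconfig to https://127.0.0.1:42103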
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
2.3 - Create production cluster
EKS Anywhere allows you to provision and manage Amazon EKS on your own infrastructure.
To get started with different production-quality EKS Anywhere providers, choose from the providers below:
2.3.1 - Create Bare Metal production cluster
Create a production-quality cluster on Bare Metal
EKS Anywhere supports a Bare Metal provider for production grade EKS Anywhere deployments.
EKS Anywhere allows you to provision and manage Kubernetes clusters based on Amazon EKS software on your own infrastructure.
This document walks you through setting up EKS Anywhere on Bare Metal as a standalone, self-managed cluster or combined set of management/workload clusters.
See Cluster topologies
for details.
Prerequisite checklist
EKS Anywhere needs:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or self-managed cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).
-
Set an environment variable for your cluster name.
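For example, to match the mgmt cluster name used later in these steps:
export CLUSTER_NAME=mgmt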
-
Generate a cluster config file for your Bare Metal provider (using tinkerbell as the provider type).
eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider tinkerbell > eksa-mgmt-cluster.yaml
-
Modify the cluster config (eksa-mgmt-cluster.yaml
) by referring to the Bare Metal configuration
reference documentation.
-
Set License Environment Variable
If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create the cluster, using the hardware.csv
file you made in Bare Metal preparation
:
eksctl anywhere create cluster \
  --hardware-csv hardware.csv \
  -f eksa-mgmt-cluster.yaml
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane and worker nodes:
kubectl get machines -A
Example command output:
NAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
eksa-system mgmt-47zj8 mgmt eksa-node01 tinkerbell://eksa-system/eksa-node01 Running 1h v1.23.7-eks-1-23-4
eksa-system mgmt-md-0-7f79df46f-wlp7w mgmt eksa-node02 tinkerbell://eksa-system/eksa-node02 Running 1h v1.23.7-eks-1-23-4
...
-
Check the cluster:
You can now use the cluster as you would any Kubernetes cluster.
To try it out, run the test application with:
export CLUSTER_NAME=mgmt
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in Deploy test workload
.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider tinkerbell > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
Ensure workload cluster object names (Cluster
, TinkerbellDatacenterConfig
, TinkerbellMachineConfig
, etc.) are distinct from management cluster object names. Keep the tinkerbellIP of the workload cluster the same as the tinkerbellIP of the management cluster.
-
Be sure to set the managementCluster
field to identify the name of the management cluster.
For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  managementCluster:
    name: mgmt
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster
To create a new workload cluster from your management cluster run this command, identifying:
- The workload cluster YAML file
- The initial cluster’s credentials (this causes the workload cluster to be managed from the management cluster)
With hardware CSV
eksctl anywhere create cluster \
  -f eksa-w01-cluster.yaml \
  --hardware-csv <hardware.csv> \
  --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
Without hardware CSV
eksctl anywhere create cluster \
  -f eksa-w01-cluster.yaml \
  --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
Change your credentials to point to the new workload cluster (for example, mgmt-w01
), then run the test application with:
export CLUSTER_NAME=mgmt-w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
2.3.2 - Create CloudStack production cluster
Create a production-quality cluster on CloudStack
EKS Anywhere supports a CloudStack provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on CloudStack in a way that:
- Deploys an initial cluster on your CloudStack environment. That cluster can be used as a standalone cluster (to run workloads) or a management cluster (to create and manage other clusters)
- Deploys zero or more workload clusters from the management cluster
If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters.
Using a management cluster makes it faster to provision and delete workload clusters.
It also lets you keep CloudStack credentials for a set of clusters in one place: on the management cluster.
The alternative is to simply use your initial cluster to run workloads.
See Cluster topologies
for details.
Important
Creating an EKS Anywhere management cluster is the recommended model.
Separating management features into a separate, persistent management cluster
provides a cleaner model for managing the lifecycle of workload clusters (to create, upgrade, and delete clusters), while workload clusters run user applications.
This approach also reduces provider permissions for workload clusters.
Prerequisite Checklist
EKS Anywhere needs to:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or standalone cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).
-
Generate an initial cluster config (named mgmt
for this example):
export CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider cloudstack > eksa-mgmt-cluster.yaml
-
Create credential file
Create a credential file (for example, cloud-config
) and add the credentials needed to access your CloudStack environment. The file should include:
- api-key: Obtained from CloudStack
- secret-key: Obtained from CloudStack
- api-url: The URL to your CloudStack API endpoint
For example:
[Global]
api-key = -Dk5uB0DE3aWng
secret-key = -0DQLunsaJKxCEEHn44XxP80tv6v_RB0DiDtdgwJ
api-url = http://172.16.0.1:8080/client/api
You can have multiple credential entries.
To match this example, you would enter global
as the credentialsRef in the cluster config file for your CloudStack availability zone. You can configure multiple credentials for multiple availability zones.
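As a rough sketch only (the credentialsRef field and the availability zone concept come from this page; the surrounding field names follow the CloudStack configuration reference and may differ by release), the availability zone in your CloudStackDatacenterConfig would reference that credential entry like this:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
  name: mgmt
spec:
  availabilityZones:
  - name: az-1
    credentialsRef: global
    # zone, domain, account, and managementApiEndpoint as described in the CloudStack configuration reference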
-
Modify the initial cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to Cloudstack configuration
for information on configuring this cluster config for a CloudStack provider.
- Add Optional
configuration settings as needed.
- Create at least two control plane nodes, three worker nodes, and three etcd nodes for a production cluster, to provide high availability and rolling upgrades.
-
Set Environment Variables
Convert the credential file into base64 and set the following environment variable to that value:
export EKSA_CLOUDSTACK_B64ENCODED_SECRET=$(base64 -i cloud-config)
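The -i flag above follows the macOS/BSD base64 syntax. On a Linux Admin machine, GNU base64 wraps its output at 76 characters by default, which can break the value stored in the environment variable, so disable wrapping instead:
export EKSA_CLOUDSTACK_B64ENCODED_SECRET=$(base64 -w 0 cloud-config)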
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Disable Kubevip load balancer
Skip this step if you want to use the Kubevip load balancer with your cluster. If you want to use a different load balancer, you can disable Kubevip as follows:
export CLOUDSTACK_KUBE_VIP_DISABLED=true
-
Create cluster
eksctl anywhere create cluster \
  -f eksa-mgmt-cluster.yaml
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:
kubectl get machines -A
Example command output
NAMESPACE NAME PROVIDERID PHASE VERSION
eksa-system mgmt-b2xyz cloudstack:/xxxxx Running v1.23.1-eks-1-21-5
eksa-system mgmt-etcd-r9b42 cloudstack:/xxxxx Running
eksa-system mgmt-md-8-6xr-rnr cloudstack:/xxxxx Running v1.23.1-eks-1-21-5
...
The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
  kubernetesVersion: "1.23"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
...
Note
The initial cluster is now ready to deploy workload clusters.
However, if you just want to use it to run workloads, you can deploy pod workloads directly on the initial cluster without deploying a separate workload cluster and skip the section on running separate workload clusters.
To make sure the cluster is ready to run workloads, run the test application in the
Deploy test workload section.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider cloudstack > eksa-w01-cluster.yaml
-
Modify the workload cluster config (eksa-w01-cluster.yaml
) as follows.
Refer to the initial config described earlier for the required and optional settings. In particular:
- Ensure workload cluster object names (
Cluster
, CloudStackDatacenterConfig
, CloudStackMachineConfig
, etc.) are distinct from management cluster object names.
-
Be sure to set the managementCluster
field to identify the name of the management cluster.
For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  managementCluster:
    name: mgmt
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster
To create a new workload cluster from your management cluster run this command, identifying:
- The workload cluster YAML file
- The initial cluster’s credentials (this causes the workload cluster to be managed from the management cluster)
eksctl anywhere create cluster \
  -f eksa-w01-cluster.yaml \
  --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
Change your credentials to point to the new workload cluster (for example, mgmt-w01
), then run the test application with:
export CLUSTER_NAME=mgmt-w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
2.3.3 - Create Nutanix production cluster
Create a production-quality cluster on Nutanix Cloud Infrastructure with AHV
EKS Anywhere supports a Nutanix Cloud Infrastructure (NCI) provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on Nutanix Cloud Infrastructure with AHV in a way that:
- Deploys an initial cluster in your Nutanix environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
- Deploys zero or more workload clusters from the management cluster
If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters.
Using a management cluster makes it faster to provision and delete workload clusters.
It also lets you keep NCI credentials for a set of clusters in one place: on the management cluster.
The alternative is to simply use your initial cluster to run workloads.
See Cluster topologies
for details.
Important
Creating an EKS Anywhere management cluster is the recommended model.
Separating management features into a separate, persistent management cluster
provides a cleaner model for managing the lifecycle of workload clusters (to create, upgrade, and delete clusters), while workload clusters run user applications.
This approach also reduces provider permissions for workload clusters.
Prerequisite Checklist
EKS Anywhere needs to:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or self-managed cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).
-
Generate an initial cluster config (named mgmt
for this example):
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider nutanix > eksa-mgmt-cluster.yaml
-
Modify the initial cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to Nutanix configuration
for information on configuring this cluster config for a Nutanix provider.
- Add Optional
configuration settings as needed.
- Create at least three control plane nodes and three worker nodes for a production cluster, to provide high availability and rolling upgrades.
-
Set Credential Environment Variables
Before you create the initial cluster, you will need to set and export these environment variables for your Nutanix Prism Central user name and password.
Make sure you use single quotes around the values so that your shell does not interpret the values:
export EKSA_NUTANIX_USERNAME='billy'
export EKSA_NUTANIX_PASSWORD='t0p$ecret'
Note
If you have a username in the form of domain_name/user_name
, you must specify it as user_name@domain_name
to
avoid errors in cluster creation. For example, nutanix.local/admin
should be specified as admin@nutanix.local
.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create cluster
eksctl anywhere create cluster \
  -f eksa-mgmt-cluster.yaml
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
-
Once the cluster is created, you can access it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster is ready, list the machines to see the control plane and worker nodes:
kubectl get machines -n eksa-system
Example command output
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
mgmt-4gtt2 mgmt mgmt-control-plane-1670343878900-2m4ln nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-d42xn mgmt mgmt-control-plane-1670343878900-jbfxt nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-md-0-9868m mgmt mgmt-md-0-1670343878901-lkmxw nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-md-0-njpk2 mgmt mgmt-md-0-1670343878901-9clbz nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-md-0-p4gp2 mgmt mgmt-md-0-1670343878901-mbktx nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-zkwrr mgmt mgmt-control-plane-1670343878900-jrdkk nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the cluster CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
  kubernetesVersion: "1.25"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
...
Note
The initial cluster is now ready to deploy workload clusters.
However, if you just want to use it to run workloads, you can deploy pod workloads directly on the initial cluster without deploying a separate workload cluster and skip the section on running separate workload clusters.
To make sure the cluster is ready to run workloads, run the test application in the
Deploy test workload section.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider nutanix > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
Ensure workload cluster object names (Cluster
, NutanixDatacenterConfig
, NutanixMachineConfig
, etc.) are distinct from management cluster object names.
-
Be sure to set the managementCluster
field to identify the name of the management cluster.
For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  managementCluster:
    name: mgmt
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster
To create a new workload cluster from your management cluster run this command, identifying:
- The workload cluster YAML file
- The initial cluster’s kubeconfig (this causes the workload cluster to be managed from the management cluster)
eksctl anywhere create cluster \
  -f eksa-w01-cluster.yaml \
  --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
Change your kubeconfig to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
2.3.4 - Create Snow production cluster
Create a production-quality cluster on AWS Snow
EKS Anywhere supports an AWS Snow provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on Snow as a standalone, self-managed cluster or combined set of management/workload clusters.
See Cluster topologies
for details.
Prerequisite checklist
EKS Anywhere on Snow needs:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or standalone cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).
-
Set an environment variable for your cluster name.
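For example, to match the mgmt cluster name used later in these steps:
export CLUSTER_NAME=mgmt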
-
Generate a cluster config file for your Snow provider
eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider snow > eksa-mgmt-cluster.yaml
-
Optionally import images to private registry
This optional step imports the EKS Anywhere artifacts and release bundle into a local registry, which is required for air-gapped installation.
eksctl anywhere import images \
--input /usr/lib/eks-a/artifacts/artifacts.tar.gz \
--bundles /usr/lib/eks-a/manifests/bundle-release.yaml \
--registry $PRIVATE_REGISTRY_ENDPOINT \
--insecure=true
-
Modify the cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to the Snow configuration
for information on configuring this cluster config for a Snow provider.
- Add Optional
configuration settings as needed.
-
Set License Environment Variable
If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
Curated packages are not yet supported for air-gapped installations.
-
Set Credential Environment Variables
Before you create the initial cluster, you will need to use the credentials
and ca-bundles
files that are in the Admin instance, and export these environment variables for your AWS Snowball device credentials.
Make sure you use single quotes around the values so that your shell does not interpret the values:
export EKSA_AWS_CREDENTIALS_FILE='/PATH/TO/CREDENTIALS/FILE'
export EKSA_AWS_CA_BUNDLES_FILE='/PATH/TO/CABUNDLES/FILE'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Create cluster
a. For a non-air-gapped environment
eksctl anywhere create cluster \
-f eksa-mgmt-cluster.yaml
b. For an air-gapped environment
eksctl anywhere create cluster \
-f eksa-mgmt-cluster.yaml \
--bundles-override /usr/lib/eks-a/manifests/bundle-release.yaml
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane and worker nodes:
kubectl get machines -A
Example command output:
NAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
eksa-system mgmt-etcd-dsxb5 mgmt aws-snow:///192.168.1.231/s.i-8b0b0631da3b8d9e4 Running 4m59s
eksa-system mgmt-md-0-7b7c69cf94-99sll mgmt mgmt-md-0-1-58nng aws-snow:///192.168.1.231/s.i-8ebf6b58a58e47531 Running 4m58s v1.24.9-eks-1-24-7
eksa-system mgmt-srrt8 mgmt mgmt-control-plane-1-xs4t9 aws-snow:///192.168.1.231/s.i-8414c7fcabcf3d7c1 Running 4m58s v1.24.9-eks-1-24-7
...
-
Check the cluster:
You can now use the cluster as you would any Kubernetes cluster.
To try it out, run the test application with:
export CLUSTER_NAME=mgmt
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in Deploy test workload
.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider snow > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
NOTE: Ensure workload cluster object names (Cluster
, SnowDatacenterConfig
, SnowMachineConfig
, etc.) are distinct from management cluster object names.
-
Be sure to set the managementCluster
field to identify the name of the management cluster.
For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  managementCluster:
    name: mgmt
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster in one of the following ways:
-
GitOps: See Manage separate workload clusters with GitOps
-
Terraform: See Manage separate workload clusters with Terraform
NOTE: snowDatacenterConfig.spec.identityRef
and a Snow bootstrap credentials secret need to be specified when provisioning a cluster through GitOps
or Terraform
, as the EKS Anywhere Cluster Controller will not create a Snow bootstrap credentials secret like the eksctl CLI
does when the field is empty.
snowMachineConfig.spec.sshKeyName
must be specified to SSH into your nodes when provisioning a cluster through GitOps
or Terraform
, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI
does when the field is empty.
-
eksctl CLI: To create a workload cluster with eksctl
, run:
eksctl anywhere create cluster \
-f eksa-w01-cluster.yaml \
--kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
-
If your workload cluster was created with eksctl
,
change your credentials to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster.
You can get credentials and run the test application as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
2.3.5 - Create vSphere production cluster
Create a production-quality cluster on VMware vSphere
EKS Anywhere supports a VMware vSphere provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on vSphere in a way that:
- Deploys an initial cluster on your vSphere environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
- Deploys zero or more workload clusters from the management cluster
If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters.
Using a management cluster makes it faster to provision and delete workload clusters.
It also lets you keep vSphere credentials for a set of clusters in one place: on the management cluster.
The alternative is to simply use your initial cluster to run workloads.
See Cluster topologies
for details.
Important
Creating an EKS Anywhere management cluster is the recommended model.
Separating management features into a separate, persistent management cluster
provides a cleaner model for managing the lifecycle of workload clusters (to create, upgrade, and delete clusters), while workload clusters run user applications.
This approach also reduces provider permissions for workload clusters.
Prerequisite Checklist
EKS Anywhere needs to:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or self-managed cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).
-
Generate an initial cluster config (named mgmt
for this example):
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider vsphere > eksa-mgmt-cluster.yaml
-
Modify the initial cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to vsphere configuration
for information on configuring this cluster config for a vSphere provider.
- Add Optional
configuration settings as needed.
See Github provider
for how to identify your Git information.
- Create at least two control plane nodes, three worker nodes, and three etcd nodes for a production cluster, to provide high availability and rolling upgrades.
-
Set Credential Environment Variables
Before you create the initial cluster, you will need to set and export these environment variables for your vSphere user name and password.
Make sure you use single quotes around the values so that your shell does not interpret the values:
export EKSA_VSPHERE_USERNAME='billy'
export EKSA_VSPHERE_PASSWORD='t0p$ecret'
Note
If you have a username in the form of domain_name/user_name
, you must specify it as user_name@domain_name
to
avoid errors in cluster creation. For example, vsphere.local/admin
should be specified as admin@vsphere.local
.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create cluster
eksctl anywhere create cluster \
  -f eksa-mgmt-cluster.yaml
  # Add --install-packages packages.yaml to install curated packages at cluster creation.
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:
kubectl get machines -A
Example command output
NAMESPACE NAME PROVIDERID PHASE VERSION
eksa-system mgmt-b2xyz vsphere:/xxxxx Running v1.24.2-eks-1-24-5
eksa-system mgmt-etcd-r9b42 vsphere:/xxxxx Running
eksa-system mgmt-md-8-6xr-rnr vsphere:/xxxxx Running v1.24.2-eks-1-24-5
...
The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
  kubernetesVersion: "1.25"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
...
Note
The initial cluster is now ready to deploy workload clusters.
However, if you just want to use it to run workloads, you can deploy pod workloads directly on the initial cluster without deploying a separate workload cluster and skip the section on running separate workload clusters.
To make sure the cluster is ready to run workloads, run the test application in the
Deploy test workload section.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider vsphere > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
NOTE: Ensure workload cluster object names (Cluster
, vSphereDatacenterConfig
, vSphereMachineConfig
, etc.) are distinct from management cluster object names.
-
Be sure to set the managementCluster
field to identify the name of the management cluster.
For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: w01
spec:
managementCluster:
name: mgmt
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster in one of the following ways:
-
GitOps: See Manage separate workload clusters with GitOps
-
Terraform: See Manage separate workload clusters with Terraform
NOTE: spec.users[0].sshAuthorizedKeys
must be specified to SSH into your nodes when provisioning a cluster through GitOps
or Terraform
, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI
does when the field is empty.
-
eksctl CLI: To create a workload cluster with eksctl
, run:
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
   # add --install-packages packages.yaml to the command above to install curated packages at cluster creation
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
To check the workload cluster, get the workload cluster credentials and run a test workload:
-
If your workload cluster was created with eksctl
,
change your credentials to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster.
You can get credentials and run the test application as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
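As a minimal sketch of that flow, assuming a hypothetical second cluster named w02 based on the w01 config from earlier:
cp eksa-w01-cluster.yaml eksa-w02-cluster.yaml
# edit eksa-w02-cluster.yaml: rename the Cluster, VSphereDatacenterConfig, and VSphereMachineConfig objects for w02
eksctl anywhere create cluster \
   -f eksa-w02-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig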
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
3 - Concepts
The Concepts section will describe the components and overall architecture of EKS Anywhere.
Most of the content of this section will cover how EKS Anywhere deploys, upgrades and otherwise manages Kubernetes clusters. It will point to Kubernetes documentation for specifics on how Kubernetes itself works.
3.1 - Compare EKS Anywhere and EKS
Comparing Amazon EKS Anywhere features to Amazon EKS
Amazon EKS Anywhere is a new deployment option for Amazon EKS
that enables you to easily create and operate Kubernetes clusters on-premises.
EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises
and automation tooling for cluster lifecycle support.
To learn more, see EKS Anywhere
.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on the AWS cloud.
Amazon EKS is certified Kubernetes conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS.
To learn more about Amazon EKS, see Amazon Elastic Kubernetes Service
.
Comparing Amazon EKS Anywhere to Amazon EKS
Feature | Amazon EKS Anywhere | Amazon EKS
Control plane | |
K8s control plane management | Managed by customer | Managed by AWS
K8s control plane location | Customer’s datacenter | AWS cloud
Cluster updates | Manual CLI updates for control plane. Flux supported rolling updates for data plane | Managed in-place updates for control plane and managed rolling updates for data plane.
Compute | |
Compute options | CloudStack, VMware vSphere, Bare Metal servers | Amazon EC2, AWS Fargate
Supported node operating systems | Bottlerocket, Ubuntu, and RHEL | Amazon Linux 2, Windows Server, Bottlerocket, Ubuntu
Physical hardware (servers, network equipment, storage, etc.) | Managed by customer | Managed by AWS
Serverless | Not supported | Amazon EKS on AWS Fargate
Management | |
Command line interface (CLI) | eksctl (OSS command line tool) | eksctl (OSS command line tool)
Console view for Kubernetes objects | Optional EKS console connection using EKS Connector (public preview) | Native EKS console connection
Infrastructure-as-code | Cluster manifest, Kubernetes controllers, 3rd-party solutions | AWS CloudFormation, 3rd-party solutions
Logging and monitoring | 3rd-party solutions | CloudWatch, CloudTrail, 3rd-party solutions
GitOps | Flux controller | Flux controller
Functions and tooling | |
Networking and Security | Cilium CNI and network policy supported | Amazon VPC CNI supported. Calico supported for network policy. Other compatible 3rd-party CNI plugins available.
Load balancer | Metallb | Elastic Load Balancing including Application Load Balancer (ALB), and Network Load Balancer (NLB)
Service mesh | Community or 3rd-party solutions | AWS App Mesh, community, or 3rd-party solutions
Community tools and Helm | Works with compatible community tooling and helm charts. | Works with compatible community tooling and helm charts.
Pricing and support | |
Control plane pricing | Free to download, paid support subscription option | Hourly pricing per cluster
AWS Support | Additional annual subscription (per cluster) for AWS support | Basic support included. Included in paid AWS support plans (developer, business, and enterprise)
Comparing Amazon EKS Anywhere to Amazon EKS on Outposts
Like EKS Anywhere, Amazon EKS on Outposts provides a means of running Kubernetes clusters using EKS software on-premises.
The main differences are that:
- Amazon provides the hardware with Outposts, while most EKS Anywhere providers leverage the customer’s own hardware.
- With Amazon EKS on Outposts, the Kubernetes control plane is fully managed by AWS. With EKS Anywhere, customers are responsible for managing the lifecycle of the Kubernetes control plane with EKS Anywhere automation tooling.
- Customers can use Amazon EKS on Outposts with the same console, APIs, and tools they use to run Amazon EKS clusters in AWS Cloud. With EKS Anywhere, customers can use the eksctl CLI to manage their clusters, optionally connect their clusters to the EKS console for observability, and optionally use infrastructure as code tools such as Terraform and GitOps to manage their clusters. However, the primary interfaces for EKS Anywhere are the EKS Anywhere Custom Resources. Amazon EKS does not have a CRD-based interface today.
- Amazon EKS on Outposts is a regional AWS service that requires a consistent, reliable connection from the Outpost to the AWS Region.
EKS Anywhere is a standalone software offering that can run entirely disconnected from AWS Cloud, including air-gapped environments.
Outposts have two deployment methods available:
-
Extended clusters: With extended clusters, the Kubernetes control plane runs in an AWS Region, while Kubernetes nodes run on Outpost hardware.
-
Local clusters: With local clusters, both the Kubernetes control plane and nodes run on Outpost hardware.
For more information, see Amazon EKS on AWS Outposts
.
3.2 - Cluster creation workflow
Explanation of the process of creating an EKS Anywhere cluster
Each EKS Anywhere cluster is built from a cluster specification file, with the structure of the configuration file based on the target provider for the cluster.
Currently, Bare Metal, CloudStack, and VMware vSphere are the recommended providers for supported EKS Anywhere clusters.
We step through the cluster creation workflow for those providers here.
Management and workload clusters
EKS Anywhere offers two cluster deployment topology options:
-
Standalone cluster: If you want only a single EKS Anywhere cluster, you can deploy a self-managed, standalone cluster.
This type of cluster contains all Cluster API (CAPI) management components needed to manage itself, including managing its own upgrades.
It can also run workloads.
-
Management cluster with workload clusters: If you plan to deploy multiple clusters, the project recommends you first deploy a management cluster.
The management cluster can then be used to deploy, upgrade, delete, and otherwise manage a fleet of workload clusters.
For further details about the different cluster topologies, see Cluster topologies
Before cluster creation
Some assets need to be in place before you can create an EKS Anywhere cluster.
You need to have an Administrative machine that includes the tools required to create the cluster.
Next, you need to get the software tools and artifacts used to build the cluster.
Then you also need to prepare the provider, such as a vCenter environment or a set of Bare Metal machines, on which to create the resulting cluster.
Administrative machine
The Administrative machine is needed to provide:
- A place to run the commands to create and manage the target cluster.
- A Docker container runtime to run a temporary, local bootstrap cluster that creates the resulting target cluster on the vSphere provider.
- A place to hold the
kubeconfig
file needed to perform administrative actions using kubectl
.
(The kubeconfig
file is stored in the root of the folder created during cluster creation.)
See the Install EKS Anywhere
guide for Administrative machine requirements.
EKS Anywhere software
To obtain EKS Anywhere software, you need Internet access to the repositories holding that software.
EKS Anywhere software is divided into two types of components:
the CLI for managing clusters, and the cluster components and controllers used to run workloads and configure clusters.
The software you need to obtain includes:
- Command line tools: Binaries to install on the administrative machine
, including
eksctl
, eksctl-anywhere
, kubectl
, and aws-iam-authenticator
.
- Cluster components and controllers: These components are listed on the artifacts
page for each provider.
If you are operating behind a firewall that limits access to the Internet, you can configure EKS Anywhere to identify the location of the proxy service
you choose to connect to the Internet.
For more information on the software used in EKS Distro, which includes the Kubernetes release and related software in EKS Anywhere, see the EKS Distro Releases
GitHub page.
Providers
EKS Anywhere uses an infrastructure provider model for creating, upgrading, and managing Kubernetes clusters that leverages the Kubernetes Cluster API
project.
Like Cluster API, EKS Anywhere runs a kind
cluster on the local Administrative machine to act as a bootstrap cluster.
However, instead of using CAPI directly with the clusterctl
command to manage the workload cluster, you use the eksctl anywhere
command which abstracts that process for you, including calling clusterctl
under the covers.
With your Administrative machine in place, you need to prepare your Bare Metal
, vSphere
, CloudStack
, or Snow
provider for EKS Anywhere.
The following sections describe how to create a Bare Metal or vSphere cluster.
The following diagram illustrates what happens when you start the cluster creation process for a Bare Metal provider, as described in the Bare Metal Getting started
guide.

1. Generate a cluster config file
Identify the provider (--provider tinkerbell) and the cluster name to the eksctl anywhere generate clusterconfig command and direct the output into a cluster config .yaml file.
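For example, using the cluster name mgmt that appears later in this section:
eksctl anywhere generate clusterconfig mgmt \
   --provider tinkerbell > eksa-mgmt-cluster.yaml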
2. Modify the config file and hardware CSV file
Modify the generated cluster config file to suit your situation.
Details about this config file are contained on the Bare Metal Config
page.
Create a hardware configuration file (hardware.csv
) as described in Prepare hardware inventory
.
3. Launch the cluster creation
Run the eksctl anywhere create cluster
command, providing the cluster config and hardware CSV files.
To see details on the cluster creation process, increase verbosity (-v=9
provides maximum verbosity).
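Putting those pieces together, using the file names from this guide, the invocation looks like this:
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml \
   --hardware-csv hardware.csv \
   -v=9   # optional: maximum verbosity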
4. Create bootstrap cluster and provision hardware
The cluster creation process starts by creating a temporary Kubernetes bootstrap cluster on the Administrative machine.
Containerized components of the Tinkerbell provisioner run either as pods on the bootstrap cluster (Hegel, Rufio, and Tink) or directly as containers on Docker (Boots).
Those Tinkerbell components drive the provisioning of the operating systems and Kubernetes components on each of the physical computers.
With the information gathered from the cluster specification and the hardware CSV file, three types of custom resources are created.
These include:
- Hardware custom resources: Which store hardware information for each machine
- Template custom resources: Which store the tasks and actions
- Workflow custom resources: Which put together the complete hardware and template information for each machine. There are different workflows for control plane and worker nodes.
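If you want to watch these resources while provisioning proceeds, you can list them on the bootstrap cluster with kubectl (a sketch; the resource names follow the Tinkerbell CRDs and the eksa-system namespace used elsewhere in this documentation):
kubectl get hardware -n eksa-system    # one Hardware object per machine from the CSV
kubectl get templates -n eksa-system   # provisioning tasks and actions
kubectl get workflows -n eksa-system   # per-machine workflows for control plane and worker nodes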
As the bootstrap cluster comes up and Tinkerbell components are started, you should see messages like the following:
$ eksctl anywhere create cluster --hardware-csv hardware.csv -f eksa-mgmt-cluster.yaml
Performing setup and validations
Tinkerbell Provider setup is valid
Validate certificate for registry mirror
Create preflight validations pass
Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup
Creating new workload cluster
At this point, Tinkerbell will try to boot up the machines in the target cluster.
Continuing cluster creation
Tinkerbell takes over the activities for provisioning the Bare Metal machines that become the new target cluster.
See Overview of Tinkerbell in EKS Anywhere
for examples of commands you can run to watch over this process.

- Rufio uses BMC information to set the power state for the first control plane node it wants to provision.
- When the node boots from its NIC, it talks to the Boots DHCP server, which fetches the kernel and initramfs (HookOS) needed to network boot the machine.
- With HookOS running on the node, the operating system identified by
IMG_URL
in the cluster specification is copied to the identified DEST_DISK
on the machine.
- The Hegel component provides data stores that contain information used by services such as cloud-init to configure each system.
- Next, the workflow is run on the first control plane node, followed by network booting and running the workflow for each subsequent control plane node.
- Once the control plane is up, worker nodes are network booted and workflows are run to deploy each node.
2. Tinkerbell components move to the target cluster
Once all the defined nodes are added to the cluster, the Tinkerbell components and associated data are moved to run as pods on worker nodes in the new workload cluster.
Deleting Tinkerbell from Admin machine
All Tinkerbell-related pods and containers are then deleted from the Admin machine.
Further management of Tinkerbell and related information can be done from the new cluster, using tools such as kubectl
.

Creating a vSphere cluster
The following diagram illustrates what happens when you start the cluster creation process, as described in the vSphere Getting started
guide.
Start creating a vSphere cluster

1. Generate a config file for vSphere
To generate the config file, you pass the provider name (-p vsphere) and a cluster name to the eksctl anywhere generate clusterconfig command and redirect the output to a file.
The result is a config file template that you need to modify for the specific instance of your provider.
2. Modify the config file
Using the generated cluster config file, make modifications to suit your situation.
Details about this config file are contained on the vSphere Config
page.
3. Launch the cluster creation
Once you have modified the cluster configuration file, run eksctl anywhere create cluster -f $CLUSTER_NAME.yaml to start the cluster creation process.
To see details on the cluster creation process, increase verbosity (-v=9
provides maximum verbosity).
4. Authenticate and create bootstrap cluster
After authenticating to vSphere and validating the assets there, the cluster creation process starts off creating a temporary Kubernetes bootstrap cluster on the Administrative machine.
To begin, the cluster creation process runs a series of govc
commands to check on the vSphere environment:
-
Checks that the vSphere environment is available.
-
Using the URL and credentials provided in the cluster spec files, authenticates to the vSphere provider.
-
Validates that the datacenter and the datacenter network exist:
-
Validates that the identified datastore (to store your EKS Anywhere cluster) exists, that the folder holding your EKS Anywhere cluster VMs exists, and that the resource pools containing compute resources exist.
If you have multiple VSphereMachineConfig
objects in your config file, you will see these validations repeated:
-
Validates the virtual machine templates to be used for the control plane and worker nodes (such as ubuntu-2004-kube-v1.20.7
):
If all validations pass, you will see this message:
✅ Vsphere Provider setup is valid
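If one of these validations fails, you can troubleshoot with the same govc tool the CLI uses. A minimal sketch, assuming the standard GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables are exported and using placeholder object names:
govc about                          # confirm vCenter is reachable and credentials are valid
govc datastore.info MyDatastore     # "MyDatastore" is a placeholder for the datastore in your cluster spec
govc ls /MyDatacenter/network       # list networks under a placeholder datacenter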
Next, the process runs the kind
command to build a single-node Kubernetes bootstrap cluster on the Administrative machine.
This includes pulling the kind node image, preparing the node, writing the configuration, starting the control-plane, installing CNI, and installing the StorageClass. You will see:
After this point the bootstrap cluster is installed, but not yet fully configured.
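While this step runs, you can optionally watch the temporary bootstrap cluster appear on the Administrative machine (the exact cluster name is managed by EKS Anywhere, so treat this as a quick sanity check rather than part of the workflow):
docker ps           # shows the kind node container backing the bootstrap cluster
kind get clusters   # lists kind clusters, including the temporary bootstrap cluster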
Continuing cluster creation
The following diagram illustrates the activities that occur next:

1. Add CAPI management
Cluster API (CAPI) management is added to the bootstrap cluster to direct the creation of the target cluster.
2. Set up cluster
Configure the control plane and worker nodes.
3. Add Cilium networking
Add Cilium as the CNI plugin to use for networking between the cluster services and pods.
4. Add storage
Add the default storage class to the cluster
5. Add CAPI to target cluster
Add the CAPI service to the target cluster in preparation for it to take over management of the cluster after the cluster creation is completed and the bootstrap cluster is deleted.
The bootstrap cluster can then begin moving the CAPI objects over to the target cluster, so it can take over the management of itself.
With the bootstrap cluster running and configured on the Administrative machine, the creation of the target cluster begins.
It uses kubectl
to apply a target cluster configuration as follows:
-
Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the target cluster.
-
The default storage class is installed on the target cluster.
-
CAPI providers are configured on the target cluster, in preparation for the target cluster to take over responsibilities for running the components needed to manage itself.
-
With CAPI running on the target cluster, CAPI objects for the target cluster are moved from the bootstrap cluster to the target cluster’s CAPI service (done internally with the clusterctl
command):
-
Add Kubernetes CRDs and other addons that are specific to EKS Anywhere.
-
The cluster configuration is saved:
Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the workload cluster:
Installing networking on workload cluster
Next, the default storage class is installed on the workload cluster:
Installing storage class on workload cluster
After that, the CAPI providers are configured on the workload cluster, in preparation for the workload cluster to take over responsibilities for running the components needed to manage itself.
Installing cluster-api providers on workload cluster
With CAPI running on the workload cluster, CAPI objects for the workload cluster are moved from the bootstrap cluster to the workload cluster’s CAPI service (done internally with the clusterctl
command):
Moving cluster management from bootstrap to workload cluster
At this point, the cluster creation process will add Kubernetes CRDs and other addons that are specific to EKS Anywhere.
That configuration is applied directly to the cluster:
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster
If you did not specify GitOps support, starting the flux service is skipped:
GitOps field not specified, bootstrap flux skipped
The cluster configuration is saved:
Writing cluster config file
With the cluster up, and the CAPI service running on the new cluster, the bootstrap cluster is no longer needed and is deleted:

At this point, cluster creation is complete.
You can now use your target cluster as either:
- A standalone cluster (to run workloads) or
- A management cluster (to optionally create one or more workload clusters)
Creating workload clusters (optional)
As described in Create separate workload clusters
, you can use the cluster you just created as a management cluster to create and manage one or more workload clusters on the same vSphere provider as follows:
- Use
eksctl
to generate a cluster config file for the new workload cluster.
- Modify the cluster config with a new cluster name and different vSphere resources.
- Use
eksctl
to create the new workload cluster from the new cluster config file and credentials from the initial management cluster.
3.3 - Cluster topologies
Explanation of standalone vs. management/workload cluster topologies
For trying out EKS Anywhere or for times when a single cluster is needed, it is fine to create a standalone cluster and run your workloads on it.
However, if you plan to create multiple clusters for running Kubernetes workloads, we recommend you create a management cluster.
Then use that management cluster to manage a set of workload clusters.
This document describes those two different EKS Anywhere cluster topologies.
What is an EKS Anywhere management cluster?
An EKS Anywhere management cluster is a long-lived, on-premises Kubernetes cluster that can create and manage a fleet of EKS Anywhere workload clusters.
The workload clusters are where you run your applications.
The management cluster can only be created and managed by the Amazon CLI eksctl
.
The management cluster runs on your on-premises hardware and it does not require any connectivity back to AWS to function.
Customers are responsible for operating the management cluster including (but not limited to) patching, upgrading, scaling, and monitoring the cluster control plane and data plane.
What’s the difference between a management cluster and a standalone cluster?
From a technical point of view, they are the same.
Regardless of which deployment topology you choose, you always start by creating a singleton, standalone cluster that’s capable of managing itself.
This shows examples of separate, standalone clusters:

Once a standalone cluster is created, you have the option to use it as a management cluster to create separate workload cluster(s) under it, hence making it a long-lived management cluster.
You can only use eksctl
to create or delete the management cluster or a standalone cluster.
This shows examples of a management cluster that deploys and manages multiple workload clusters:

What’s the difference between a management cluster and a bootstrap cluster for EKS Anywhere?
A management cluster is a long-lived entity you have to actively operate.
The bootstrap cluster is a temporary, short-lived kind cluster that is created on a separate Administrative machine
to facilitate the creation of an initial standalone or management cluster.
The kind
cluster is automatically deleted by the end of the initial cluster creation.
When should I deploy a management cluster?
If you want to run three or more EKS Anywhere clusters, we recommend that you choose a management/workload cluster deployment topology because of the advantages listed in the table below.
The EKS Anywhere Curated Packages feature recommends deploying certain packages such as the container registry package or monitoring packages on the management cluster to avoid circular dependency.
| Standalone cluster topology | Management/workload cluster topology
Pros | Save hardware resources | Isolation of secrets
| Reduced operational overhead of maintaining a separate management cluster | Resource isolation between different teams. Reduced noisy-neighbor effect.
| | Isolation between development and production workloads.
| | Isolation between applications and fleet management services, such as monitoring server or container registry.
| | Provides a central control plane and API to automate cluster lifecycles
Cons | Shared secrets, such as SSH credentials or VMware credentials, across all teams who share the cluster. | Consumes extra resources.
| Without a central control plane (such as a parent management cluster), it is not possible to automate cluster creation/deletion with advanced methods like GitOps or IaC. | The creation/deletion of the management cluster itself can’t be automated through GitOps or IaC.
| Circular dependencies arise if the cluster has to host a monitoring server or a local container registry. |
Which EKS Anywhere features support the management/workload cluster deployment topology today?
Features | Supported
Create/update/delete a workload cluster on… |
- VMware via CLI | Y
- CloudStack via CLI | Y
- Bare Metal via CLI | Y
- Docker via CLI (non-production only) | Y
Update a workload cluster on… |
- VMware via GitOps/Terraform | Y
- CloudStack via GitOps/Terraform | Y
- Bare Metal via GitOps/Terraform | N
- Docker via CLI (non-production only) | Y
Create/delete a workload cluster on… |
- VMware via GitOps/Terraform | Y
- CloudStack via GitOps/Terraform | N
- Bare Metal via GitOps/Terraform | N
- Docker via GitOps/Terraform (non-production only) | Y
Install a curated package on the management cluster | Y
Install a curated package on the workload cluster from the management cluster | Y
Install a curated package on the management cluster during a workload cluster creation | N
3.4 - EKS Anywhere curated packages
All information you may need for EKS Anywhere curated packages
Note
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one
here.
Overview
Amazon EKS Anywhere Curated Packages are Amazon-curated software packages that extend the core functionalities of Kubernetes on your EKS Anywhere clusters. If you operate EKS Anywhere clusters on-premises, you probably install additional software to ensure the security and reliability of your clusters. However, you may be spending a lot of effort researching for the right software, tracking updates, and testing them for compatibility. Now with the EKS Anywhere Curated Packages, you can rely on Amazon to provide trusted, up-to-date, and compatible software that are supported by Amazon, reducing the need for multiple vendor support agreements.
- Amazon-built: All container images of the packages are built from source code by Amazon, including the open source (OSS) packages. OSS package images are built from the open source upstream.
- Amazon-scanned: Amazon scans the container images including the OSS package images daily for security vulnerabilities and provides remediation.
- Amazon-signed: Amazon signs the package bundle manifest (a Kubernetes manifest) for the list of curated packages. The manifest is signed with AWS Key Management Service (AWS KMS) managed private keys. The curated packages are installed and managed by a package controller on the clusters. Amazon provides validation of signatures through an admission control webhook in the package controller and the public keys distributed in the bundle manifest file.
- Amazon-tested: Amazon tests the compatibility of all curated packages including the OSS packages with each new version of EKS Anywhere.
- Amazon-supported: All curated packages including the curated OSS packages are supported under the EKS Anywhere Support Subscription.
The main components of EKS Anywhere Curated Packages are the package controller
, the package build artifacts
and the command line interface
. The package controller will run in a pod in an EKS Anywhere cluster. The package controller will manage the lifecycle of all curated packages.
Curated packages
Please check out curated package list
for the complete list of EKS Anywhere curated packages.
Workshop
Please check out workshop
for curated packages.
FAQ
-
Can I install software not from the curated package list?
Yes. You can install any optional software of your choice. Be aware you cannot use EKS Anywhere tooling to install or update your self-managed software. Amazon does not provide testing, security patching, software updates, or customer support for your self-managed software.
-
Can I install software that’s on the curated package list but not sourced from EKS Anywhere repository?
If, for example, you deploy a Harbor image that is not built and signed by Amazon, Amazon will not provide testing or customer support for your self-built images.
3.4.1 - EKS Anywhere curated package controller
Overview
The package controller will install, upgrade, configure and remove packages from the cluster. The package controller will watch the packages and packagebundle custom resources for the packages to run and their configuration values. The package controller only runs on the management cluster and manages packages on the management cluster and on the workload clusters.
Package release information is stored in a package bundle manifest. The package controller will continually monitor and download new package bundles. When a new package bundle is downloaded, it will show up as update available and users can use the CLI to activate the bundle to upgrade the installed packages.
Any changes to a package custom resource will trigger an install, upgrade, configuration change, or removal of that package. The package controller will use ECR or a private registry to get all resources, including the bundle, helm charts, and container images.
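If the package controller is installed, you can see these custom resources on the management cluster with kubectl; a sketch (the namespaces used for package resources vary by EKS Anywhere version, so the cluster-wide listing is shown here):
kubectl get packagebundles -A   # package bundles downloaded by the controller
kubectl get packages -A         # installed curated packages and their states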
Installation
Please check out create local cluster
and create production cluster
for how to install package controller at the cluster creation time.
Please check out package management
for how to install package controller after cluster creation and manage curated packages.
3.4.2 - EKS Anywhere curated package build artifacts
There are three types of build artifacts for packages: the container images, the helm charts and the package bundle manifests. The container images, helm charts and bundle manifests for all of the packages will be built and stored in EKS Anywhere ECR repository. Each package may have multiple versions specified in the packages bundle. The bundle will reference the helm chart tag in the ECR repository. The helm chart will reference the container images for the package.
3.4.3 - EKS Anywhere curated package CLI
Overview
The Curated Packages CLI provides the user experience required to manage curated packages.
Through the CLI, a user is able to discover, create, delete, and upgrade curated packages to a cluster.
These functionalities can be achieved during and after an EKS Anywhere cluster is created.
The CLI provides both imperative and declarative mechanisms to manage curated packages. These
packages will be included as part of a packagebundle
that will be provided by the EKS Anywhere team.
Whenever a user requests a package creation through the CLI (eksctl anywhere create package
), a custom resource is created on the cluster
indicating the existence of a new package that needs to be installed. When a user executes a delete operation (eksctl anywhere delete package
),
the custom resource will be removed from the cluster indicating the need for uninstalling a package.
An upgrade through the CLI (eksctl anywhere upgrade packages
) upgrades all packages to the latest release.
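Because the exact flags for these subcommands depend on your EKS Anywhere version, a safe way to discover them is through the CLI's built-in help:
eksctl anywhere create package --help
eksctl anywhere delete package --help
eksctl anywhere upgrade packages --help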
Installation
Please check out Install EKS Anywhere
to install the eksctl anywhere
CLI on your machine.
Also check out Create local cluster
and Create production cluster
for how to use the CLI during and after cluster creation.
Check out EKS Anywhere curated package management
for how to use the CLI after a cluster is created and manage curated packages.
4 - Tasks
Common actions and set-up you may need for EKS Anywhere
4.1 - Workload management
Common tasks for managing workloads.
4.1.1 - Deploy test workload
How to deploy a workload to check that your cluster is working properly
We’ve created a simple test application for you to verify your cluster is working properly.
You can deploy it with the following command:
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
To see the new pod running in your cluster, type:
kubectl get pods -l app=hello-eks-a
Example output:
NAME READY STATUS RESTARTS AGE
hello-eks-a-745bfcd586-6zx6b 1/1 Running 0 22m
To check the logs of the container to make sure it started successfully, type:
kubectl logs -l app=hello-eks-a
There is also a default web page being served from the container.
You can forward the deployment port to your local machine with
kubectl port-forward deploy/hello-eks-a 8000:80
Now you should be able to open your browser, or use curl, to access http://localhost:8000 and view the example application's page.
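For example, with the port-forward running in another terminal:
curl http://localhost:8000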
Example output:
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
Thank you for using
███████╗██╗ ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗ █████╔╝ ███████╗
██╔══╝ ██╔═██╗ ╚════██║
███████╗██║ ██╗███████║
╚══════╝╚═╝ ╚═╝╚══════╝
█████╗ ███╗ ██╗██╗ ██╗██╗ ██╗██╗ ██╗███████╗██████╗ ███████╗
██╔══██╗████╗ ██║╚██╗ ██╔╝██║ ██║██║ ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗ ██████╔╝█████╗
██╔══██║██║╚██╗██║ ╚██╔╝ ██║███╗██║██╔══██║██╔══╝ ██╔══██╗██╔══╝
██║ ██║██║ ╚████║ ██║ ╚███╔███╔╝██║ ██║███████╗██║ ██║███████╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═╝ ╚══╝╚══╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-qp6bg
For more information check out
https://anywhere.eks.amazonaws.com
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
If you would like to expose your applications with an external load balancer or an ingress controller, you can follow the steps in Adding an external load balancer
.
4.1.2 - Add an external load balancer
How to deploy a load balancer controller to expose a workload running in EKS Anywhere
While you are free to use any load balancer you like with your EKS Anywhere cluster, AWS currently only supports the MetalLB load balancer.
For information on how to configure a MetalLB curated package for EKS Anywhere, see the Add MetalLB
page.
4.1.3 - Add an ingress controller
How to deploy an ingress controller for simple host or URL-based HTTP routing into workload running in EKS-A
While you are free to use any Ingress Controller you like with your EKS Anywhere cluster, AWS currently only supports Emissary Ingress.
For information on how to configure an Emissary Ingress curated package for EKS Anywhere, see the Add Emissary Ingress
page.
Setting up Emissary-ingress for Ingress Controller
-
Deploy the Hello EKS Anywhere
test application.
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
Set up a load balancer: Set up MetalLB Load Balancer by following the instructions here
-
Install Emissary Ingress: Follow the instructions here Add Emissary Ingress
-
Create Emissary Listeners on your cluster (This is a one time setup).
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: http-listener
namespace: default
spec:
port: 8080
protocol: HTTPS
securityModel: XFP
hostBinding:
namespace:
from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: https-listener
namespace: default
spec:
port: 8443
protocol: HTTPS
securityModel: XFP
hostBinding:
namespace:
from: ALL
EOF
-
Create a Mapping on your cluster. This Mapping tells Emissary-ingress to route all traffic inbound to the /backend/ path to the Hello EKS Anywhere Service. The hostname value is the external IP assigned to the LoadBalancer Service that MetalLB deployed for you.
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
name: hello-backend
spec:
prefix: /backend/
service: hello-eks-a
hostname: "195.16.99.65"
EOF
-
Store the Emissary-ingress load balancer IP address to a local environment variable. You will use this variable to test accessing your service.
export EMISSARY_LB_ENDPOINT=$(kubectl get svc ambassador -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")
-
Test the configuration by accessing the service through the Emissary-ingress load balancer.
curl -Lk http://$EMISSARY_LB_ENDPOINT/backend/
NOTE: The URL base path needs to match what is specified in the prefix exactly, including the trailing '/'.
You should see something like this in the output
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
Thank you for using
███████╗██╗ ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗ █████╔╝ ███████╗
██╔══╝ ██╔═██╗ ╚════██║
███████╗██║ ██╗███████║
╚══════╝╚═╝ ╚═╝╚══════╝
█████╗ ███╗ ██╗██╗ ██╗██╗ ██╗██╗ ██╗███████╗██████╗ ███████╗
██╔══██╗████╗ ██║╚██╗ ██╔╝██║ ██║██║ ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗ ██████╔╝█████╗
██╔══██║██║╚██╗██║ ╚██╔╝ ██║███╗██║██╔══██║██╔══╝ ██╔══██╗██╔══╝
██║ ██║██║ ╚████║ ██║ ╚███╔███╔╝██║ ██║███████╗██║ ██║███████╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═╝ ╚══╝╚══╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr
For more information check out
https://anywhere.eks.amazonaws.com
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
4.1.4 - Secure connectivity with CNI and Network Policy
How to validate the setup of Cilium CNI and deploy network policies to secure workload connectivity.
EKS Anywhere uses Cilium
for pod networking and security.
Cilium is installed by default as a Kubernetes CNI plugin and so is already running in your EKS Anywhere cluster.
This section provides information about:
-
Understanding Cilium components and requirements
-
Validating your Cilium networking setup.
-
Using Cilium to secure workload connectivity with Kubernetes Network Policy.
Cilium Components
The primary Cilium Agent runs as a DaemonSet on each Kubernetes node. Each cluster also includes a Cilium Operator Deployment to handle certain cluster-wide operations. For EKS Anywhere, Cilium is configured to use the Kubernetes API server as the identity store, so no etcd cluster connectivity is required.
In a properly working environment, each Kubernetes node should have a Cilium Agent pod (cilium-WXYZ
) in “Running” and ready (1/1) state.
By default there will be two
Cilium Operator pods (cilium-operator-123456-WXYZ
) in “Running” and ready (1/1) state on different Kubernetes nodes for high-availability.
Run the following command to ensure all cilium related pods are in a healthy state.
kubectl get pods -n kube-system | grep cilium
Example output for this command in a 3 node environment is:
kube-system cilium-fsjmd 1/1 Running 0 4m
kube-system cilium-nqpkv 1/1 Running 0 4m
kube-system cilium-operator-58ff67b8cd-jd7rf 1/1 Running 0 4m
kube-system cilium-operator-58ff67b8cd-kn6ss 1/1 Running 0 4m
kube-system cilium-zz4mt 1/1 Running 0 4m
Network Connectivity Requirements
To provide pod connectivity within an on-premises environment, the Cilium agent implements an overlay network using the GENEVE tunneling protocol. As a result,
UDP port 6081 connectivity MUST be allowed by any firewall running between Kubernetes nodes running the Cilium agent.
Allowing ICMP Ping (type = 8, code = 0) as well as TCP port 4240 is also recommended in order for Cilium Agents to validate node-to-node connectivity as
part of internal status reporting.
Validating Connectivity
Cilium includes a connectivity check YAML that can be deployed into a test namespace in order to validate proper installation and connectivity within a Kubernetes cluster. If the connectivity check passes, all pods created by the YAML manifest will reach “Running” and ready (1/1) state. We recommend running this test only once you have multiple worker nodes in your environment to ensure you are validating cross-node connectivity.
It is important that this test is run in a dedicated namespace, with no existing network policy. For example:
kubectl create ns cilium-test
kubectl apply -n cilium-test -f https://docs.isovalent.com/v1.10/public/connectivity-check-eksa.yaml
Once all pods have started, simply checking the status of pods in this namespace will indicate whether the tests have passed:
kubectl get pods -n cilium-test
Successful test output will show all pods in a “Running” and ready (1/1) state:
NAME READY STATUS RESTARTS AGE
echo-a-d576c5f8b-zlfsk 1/1 Running 0 59s
echo-b-787dc99778-sxlcc 1/1 Running 0 59s
echo-b-host-675cd8cfff-qvvv8 1/1 Running 0 59s
host-to-b-multi-node-clusterip-6fd884bcf7-pvj5d 1/1 Running 0 58s
host-to-b-multi-node-headless-79f7df47b9-8mzbp 1/1 Running 0 58s
pod-to-a-57695cc7ff-6tqpv 1/1 Running 0 59s
pod-to-a-allowed-cnp-7b6d5ff99f-4rhrs 1/1 Running 0 59s
pod-to-a-denied-cnp-6887b57579-zbs2t 1/1 Running 0 59s
pod-to-b-intra-node-hostport-7d656d7bb9-6zjrl 1/1 Running 0 57s
pod-to-b-intra-node-nodeport-569d7c647-76gn5 1/1 Running 0 58s
pod-to-b-multi-node-clusterip-fdf45bbbc-8l4zz 1/1 Running 0 59s
pod-to-b-multi-node-headless-64b6cbdd49-9hcqg 1/1 Running 0 59s
pod-to-b-multi-node-hostport-57fc8854f5-9d8m8 1/1 Running 0 58s
pod-to-b-multi-node-nodeport-54446bdbb9-5xhfd 1/1 Running 0 58s
pod-to-external-1111-56548587dc-rmj9f 1/1 Running 0 59s
pod-to-external-fqdn-allow-google-cnp-5ff4986c89-z4h9j 1/1 Running 0 59s
Afterward, simply delete the namespace to clean-up the connectivity test:
kubectl delete ns cilium-test
Kubernetes Network Policy
By default, all Kubernetes workloads within a cluster can talk to any other workloads in the cluster, as well as any workloads outside the cluster. To enable a stronger security posture, Cilium implements the Kubernetes Network Policy specification to provide identity-aware firewalling / segmentation of Kubernetes workloads.
Network policies are defined as Kubernetes YAML specifications that are applied to a particular namespace to describe which connections should be allowed to or from a given set of pods. These network policies are “identity-aware” in that they describe workloads within the cluster using Kubernetes metadata like namespace and labels, rather than by IP address.
Basic network policies are validated as part of the above Cilium connectivity check test.
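As a minimal sketch of what such a policy looks like (this is standard Kubernetes NetworkPolicy, which Cilium enforces; the namespace and labels below are hypothetical):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: demo                   # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend                  # pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080
EOF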
For next steps on leveraging Network Policy, we encourage you to explore:
Additional Cilium Features
Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including: Hubble observability, DNS-aware and HTTP-Aware Network Policy, Multi-cluster Routing, Transparent Encryption, and Advanced Load-balancing.
Please contact the EKS Anywhere team if you are interested in leveraging these advanced features along with EKS Anywhere.
4.2 - Cluster management
Common tasks for managing clusters.
4.2.1 - Cluster management overview
Overview of tools and interfaces for managing EKS Anywhere clusters
The content in this page will describe the tools and interfaces available to an administrator after an EKS Anywhere cluster is up and running.
It will also describe which administrative actions can be done:
- Directly in Kubernetes itself (such as adding nodes with
kubectl
)
- Through the EKS Anywhere API (such as deleting a cluster with
eksctl
).
- Through tools which interface with the Kubernetes API (such as managing a cluster with
terraform
)
Note that making direct changes to OVAs before nodes are deployed is not yet supported.
However, we are working on a solution for that issue.
4.2.2 - Scale cluster
How to scale your cluster
4.2.2.1 - Scale Bare Metal cluster
How to scale your Bare Metal cluster
When you are horizontally scaling your Bare Metal EKS Anywhere cluster, consider the number of nodes you need for your control plane and for your data plane.
See the Kubernetes Components
documentation to learn the differences between the control plane and the data plane (worker nodes).
Horizontally scaling the cluster is done by increasing the number for the control plane or worker node groups under the Cluster specification.
NOTE: If etcd is running on your control plane (the default configuration) you should scale your control plane in odd numbers (3, 5, 7…).
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: test-cluster
spec:
controlPlaneConfiguration:
count: 1 # increase this number to horizontally scale your control plane
...
workerNodeGroupConfigurations:
- count: 1 # increase this number to horizontally scale your data plane
Next, you must ensure you have enough available hardware for the scale-up operation to function. Available hardware could have been fed to the cluster as extra hardware during a prior create command, or could be fed to the cluster during the scale-up process by providing the hardware CSV file to the upgrade cluster command (explained in detail below).
For scale-down operation, you can skip directly to the upgrade cluster command
.
To check if you have enough available hardware for scale up, you can use the kubectl
command below to check if there are hardware objects with the selector labels corresponding to the controlplane/worker node group and without the ownerName
label.
kubectl get hardware -n eksa-system --show-labels
For example, if you want to scale a worker node group with selector label type=worker-group-1
, then you must have an additional hardware object in your cluster with the label type=worker-group-1
that doesn’t have the ownerName
label.
In the command shown below, eksa-worker2
matches the selector label and it doesn’t have the ownerName
label. Thus, it can be used to scale up worker-group-1
by 1.
kubectl get hardware -n eksa-system --show-labels
NAME STATE LABELS
eksa-controlplane type=controlplane,v1alpha1.tinkerbell.org/ownerName=abhnvp-control-plane-template-1656427179688-9rm5f,v1alpha1.tinkerbell.org/ownerNamespace=eksa-system
eksa-worker1 type=worker-group-1,v1alpha1.tinkerbell.org/ownerName=abhnvp-md-0-1656427179689-9fqnx,v1alpha1.tinkerbell.org/ownerNamespace=eksa-system
eksa-worker2 type=worker-group-1
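If the full hardware listing is long, you can narrow it to a single node group by adding a label selector; for example, using the worker-group-1 label from the example above:
kubectl get hardware -n eksa-system -l type=worker-group-1 --show-labels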
If you don’t have any available hardware that match this requirement in the cluster, you can setup a new hardware CSV
. You can feed this hardware inventory file during the upgrade cluster command
.
Upgrade Cluster Command for Scale Up/Down
With Hardware CSV File
eksctl anywhere upgrade cluster -f cluster.yaml --hardware-csv <hardware.csv>
Without Hardware CSV File
eksctl anywhere upgrade cluster -f cluster.yaml
Autoscaling
EKS Anywhere supports autoscaling of worker node groups using the Kubernetes Cluster Autoscaler
and as a curated package
.
See here
for details on how to configure your cluster spec to autoscale worker node groups.
4.2.2.2 - Scale CloudStack cluster
How to scale your CloudStack cluster
Autoscaling
EKS Anywhere supports autoscaling of worker node groups using the Kubernetes Cluster Autoscaler
and as a curated package
.
See here
for details on how to configure your cluster spec to autoscale worker node groups.
4.2.2.3 - Scale Nutanix cluster
How to scale your Nutanix cluster
When you are scaling your Nutanix EKS Anywhere cluster, consider the number of nodes you need for your control plane and for your data plane.
Each plane can be scaled horizontally (add more nodes) or vertically (provide nodes with more resources).
In each case you can scale the cluster manually, semi-automatically, or automatically.
See the Kubernetes Components
documentation to learn the differences between the control plane and the data plane (worker nodes).
Manual cluster scaling
Horizontally scaling the cluster is done by increasing the number for the control plane or worker node groups under the Cluster specification.
NOTE: If etcd is running on your control plane (the default configuration) you should scale your control plane in odd numbers (3, 5, 7…).
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: test-cluster
spec:
controlPlaneConfiguration:
count: 1 # increase this number to horizontally scale your control plane
...
workerNodeGroupConfigurations:
- count: 1 # increase this number to horizontally scale your data plane
Vertically scaling your cluster is done by updating the machine config spec for your infrastructure provider.
For a Nutanix cluster an example is
NOTE: Not all providers can be vertically scaled (e.g. bare metal)
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
name: test-machine
namespace: default
spec:
systemDiskSize: 50 # increase this number to make the VM disk larger
vcpuSockets: 8 # increase this number to add vCPUs to your VM
memorySize: 8192 # increase this number to add memory to your VM
Once you have made configuration updates you can apply the changes to your cluster.
If you are adding or removing a node, only the terminated nodes will be affected.
If you are vertically scaling your nodes, then all nodes will be replaced one at a time.
eksctl anywhere upgrade cluster -f cluster.yaml
Semi-automatic scaling
Scaling your cluster in a semi-automatic way still requires changing your cluster manifest configuration.
In a semi-automatic mode you change your cluster spec and then have automation make the cluster changes.
You can do this by storing your cluster config manifest in git and then having a CI/CD system deploy your changes.
Or you can use a GitOps controller to apply the changes.
To read more about making changes with the integrated Flux GitOps controller you can read how to Manage a cluster with GitOps
.
Autoscaling
EKS Anywhere supports autoscaling of worker node groups using the Kubernetes Cluster Autoscaler
and as a curated package
.
See here
for details on how to configure your cluster spec to autoscale worker node groups.
4.2.2.4 - Scale vSphere cluster
How to scale your vSphere cluster
When you are scaling your vSphere EKS Anywhere cluster, consider the number of nodes you need for your control plane and for your data plane.
Each plane can be scaled horizontally (add more nodes) or vertically (provide nodes with more resources).
In each case you can scale the cluster manually, semi-automatically, or automatically.
See the Kubernetes Components
documentation to learn the differences between the control plane and the data plane (worker nodes).
Manual cluster scaling
Horizontally scaling the cluster is done by increasing the number for the control plane or worker node groups under the Cluster specification.
NOTE: If etcd is running on your control plane (the default configuration) you should scale your control plane in odd numbers (3, 5, 7…).
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: test-cluster
spec:
controlPlaneConfiguration:
count: 1 # increase this number to horizontally scale your control plane
...
workerNodeGroupConfigurations:
- count: 1 # increase this number to horizontally scale your data plane
Vertically scaling your cluster is done by updating the machine config spec for your infrastructure provider.
For a vSphere cluster an example is
NOTE: Not all providers can be vertically scaled (e.g. bare metal)
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
name: test-machine
namespace: default
spec:
diskGiB: 25 # increase this number to make the VM disk larger
numCPUs: 2 # increase this number to add vCPUs to your VM
memoryMiB: 8192 # increase this number to add memory to your VM
Once you have made configuration updates you can apply the changes to your cluster.
If you are adding or removing a node, only the terminated nodes will be affected.
If you are vertically scaling your nodes, then all nodes will be replaced one at a time.
eksctl anywhere upgrade cluster -f cluster.yaml
Semi-automatic scaling
Scaling your cluster in a semi-automatic way still requires changing your cluster manifest configuration.
In a semi-automatic mode you change your cluster spec and then have automation make the cluster changes.
You can do this by storing your cluster config manifest in git and then having a CI/CD system deploy your changes.
Or you can use a GitOps controller to apply the changes.
To read more about making changes with the integrated Flux GitOps controller you can read how to Manage a cluster with GitOps
.
Autoscaling
EKS Anywhere supports autoscaling of worker node groups using the Kubernetes Cluster Autoscaler
and as a curated package
.
See here
for details on how to configure your cluster spec to autoscale worker node groups.
4.2.3 - Upgrade cluster
How to perform a cluster upgrade
4.2.3.1 - Upgrade Bare Metal cluster
How to perform a cluster upgrade for Bare Metal cluster
EKS Anywhere provides the command upgrade
, which allows you to upgrade
various aspects of your EKS Anywhere cluster.
When you run eksctl anywhere upgrade cluster -f ./cluster.yaml
, EKS Anywhere runs a set of preflight checks to ensure your cluster is ready to be upgraded.
EKS Anywhere then performs the upgrade, modifying your cluster to match the updated specification.
The upgrade command also upgrades core components of EKS Anywhere and lets the user enjoy the latest features, bug fixes and security patches.
NOTE: Currently only Minor Version Upgrades are supported for Bare Metal clusters. No other aspects of cluster upgrades are currently supported.
Minor Version Upgrades
Kubernetes has minor releases three times per year
and EKS Distro follows a similar cadence.
EKS Anywhere will add support for new EKS Distro releases as they are released, and you are advised to upgrade your cluster when possible.
Cluster upgrades are not handled automatically and require administrator action to modify the cluster specification and perform an upgrade.
You are advised to upgrade your clusters in development environments first and verify your workloads and controllers are compatible with the new version.
Cluster upgrades are performed using a rolling upgrade process (similar to Kubernetes Deployments).
Upgrades can only happen one minor version at a time (e.g. 1.24
-> 1.25
).
Control plane components will be upgraded before worker nodes.
Prerequisites
This type of upgrade requires you to have one spare hardware server for control plane upgrade and one for each worker node group upgrade.
The spare hardware server is provisioned with the new version and then an old server is deprovisioned. The deprovisioned server is then reprovisioned with
the new version while another old server is deprovisioned. This happens one at a time until all the control plane components have been upgraded, followed by
worker node upgrades.
Core component upgrades
EKS Anywhere upgrade
also supports upgrading the following core components:
- Core CAPI
- CAPI providers
- Cilium CNI plugin
- Cert-manager
- Etcdadm CAPI provider
- EKS Anywhere controllers and CRDs
- GitOps controllers (Flux) - this is an optional component, will be upgraded only if specified
The latest versions of these core EKS Anywhere components are embedded into a bundles manifest that the CLI uses to fetch the latest versions
and image builds needed for each component upgrade.
The command detects both component version changes and new builds of the same versioned component.
If there is a new Kubernetes version that is going to get rolled out, the core components get upgraded before the Kubernetes
version.
Irrespective of a Kubernetes version change, the upgrade command will always upgrade the internal EKS
Anywhere components mentioned above to their latest available versions. All upgrade changes are backwards compatible.
Check upgrade components
Before you perform an upgrade, check the current and new versions of components that are ready to upgrade by typing:
eksctl anywhere upgrade plan cluster -f cluster.yaml
The output should appear similar to the following:
Worker node group name not specified. Defaulting name to md-0.
Warning: The recommended number of control plane nodes is 3 or 5
Worker node group name not specified. Defaulting name to md-0.
Checking new release availability...
NAME CURRENT VERSION NEXT VERSION
EKS-A v0.0.0-dev+build.1000+9886ba8 v0.0.0-dev+build.1105+46598cb
cluster-api v1.0.2+e8c48f5 v1.0.2+1274316
kubeadm v1.0.2+92c6d7e v1.0.2+aa1a03a
vsphere v1.0.1+efb002c v1.0.1+ef26ac1
kubadm v1.0.2+f002eae v1.0.2+f443dcf
etcdadm-bootstrap v1.0.2-rc3+54dcc82 v1.0.0-rc3+df07114
etcdadm-controller v1.0.2-rc3+a817792 v1.0.0-rc3+a310516
To format the output in json, add -o json
to the end of the command line.
Check hardware availability
Next, you must ensure you have enough available hardware for the rolling upgrade operation to function. This type of upgrade requires you to have one spare hardware server for control plane upgrade and one for each worker node group upgrade. Check prerequisites
for more information.
Available hardware could have been fed to the cluster as extra hardware during a prior create command, or could be fed to the cluster during the upgrade process by providing the hardware CSV file to the upgrade cluster command
.
To check if you have enough available hardware for rolling upgrade, you can use the kubectl
command below to check if there are hardware objects with the selector labels corresponding to the controlplane/worker node group and without the ownerName
label.
kubectl get hardware -n eksa-system --show-labels
For example, if you want to perform upgrade on a cluster with one worker node group with selector label type=worker-group-1
, then you must have an additional hardware object in your cluster with the label type=controlplane
(for control plane upgrade) and one with type=worker-group-1
(for worker node group upgrade) that doesn’t have the ownerName
label.
In the command shown below, eksa-worker2
matches the selector label and it doesn’t have the ownerName
label. Thus, it can be used to perform rolling upgrade of worker-group-1
. Similarly, eksa-controlplane-spare
will be used for rolling upgrade of control plane.
kubectl get hardware -n eksa-system --show-labels
NAME STATE LABELS
eksa-controlplane type=controlplane,v1alpha1.tinkerbell.org/ownerName=abhnvp-control-plane-template-1656427179688-9rm5f,v1alpha1.tinkerbell.org/ownerNamespace=eksa-system
eksa-controlplane-spare type=controlplane
eksa-worker1 type=worker-group-1,v1alpha1.tinkerbell.org/ownerName=abhnvp-md-0-1656427179689-9fqnx,v1alpha1.tinkerbell.org/ownerNamespace=eksa-system
eksa-worker2 type=worker-group-1
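You can also filter directly on the selector label with kubectl; a sketch, assuming the worker node group selector label is type=worker-group-1 as in the example above:
kubectl get hardware -n eksa-system -l type=worker-group-1 --show-labels
Any entry in the output without a v1alpha1.tinkerbell.org/ownerName label is available as a spare.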
If you don’t have any available hardware that matches this requirement in the cluster, you can set up a new hardware CSV
. You can feed this hardware inventory file during the upgrade cluster command
.
To perform a cluster upgrade you can modify your cluster specification kubernetesVersion
field to the desired version.
As an example, to upgrade a cluster with version 1.24 to 1.25 you would change your spec as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: dev
spec:
controlPlaneConfiguration:
count: 1
endpoint:
host: "198.18.99.49"
machineGroupRef:
kind: TinkerbellMachineConfig
name: dev
...
kubernetesVersion: "1.25"
...
NOTE: If you have a custom machine image for your nodes in your cluster config yaml, you may also need to update
your TinkerbellDatacenterConfig
with a new osImageURL
.
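For reference, a sketch of the relevant field (the datacenter config name is assumed to match the dev example above, and the image URL is hypothetical):
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: dev
spec:
  ...
  osImageURL: "https://my-image-server.example.com/ubuntu-2204-kube-v1.25.gz"
  ...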
Then run the upgrade cluster command.
Upgrade cluster command
With hardware CSV
eksctl anywhere upgrade cluster -f cluster.yaml --hardware-csv <hardware.csv>
Without hardware CSV
eksctl anywhere upgrade cluster -f cluster.yaml
This will upgrade the cluster specification (if specified), upgrade the core components to the latest available versions and apply the changes using the provisioner controllers.
Output
Example output:
✅ control plane ready
✅ worker nodes ready
✅ nodes ready
✅ cluster CRDs ready
✅ cluster object present on workload cluster
✅ upgrade cluster kubernetes version increment
✅ validate immutable fields
🎉 all cluster upgrade preflight validations passed
Performing provider setup and validations
Pausing EKS-A cluster controller reconcile
Pausing Flux kustomization
GitOps field not specified, pause flux kustomization skipped
Creating bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Moving cluster management from workload to bootstrap cluster
Upgrading workload cluster
Moving cluster management from bootstrap to workload cluster
Applying new EKS-A cluster resource; resuming reconcile
Resuming EKS-A controller reconciliation
Updating Git Repo with new EKS-A cluster spec
GitOps field not specified, update git repo skipped
Forcing reconcile Git repo with latest commit
GitOps not configured, force reconcile flux git repo skipped
Resuming Flux kustomization
GitOps field not specified, resume flux kustomization skipped
During the upgrade process, EKS Anywhere pauses the cluster controller reconciliation by adding the paused annotation anywhere.eks.amazonaws.com/paused: true
to the EKS Anywhere cluster, provider datacenterconfig and machineconfig resources, before the components upgrade. After upgrade completes, the annotations are removed so that the cluster controller resumes reconciling the cluster.
Though not recommended, you can manually pause the EKS Anywhere cluster controller reconciliation to perform extended maintenance work or interact with Cluster API objects directly. To do it, you can add the paused annotation to the cluster resource:
kubectl annotate clusters.anywhere.eks.amazonaws.com ${CLUSTER_NAME} -n ${CLUSTER_NAMESPACE} anywhere.eks.amazonaws.com/paused=true
After finishing the task, make sure you resume the cluster reconciliation by removing the paused annotation, so that EKS Anywhere cluster controller can continue working as expected.
kubectl annotate clusters.anywhere.eks.amazonaws.com ${CLUSTER_NAME} -n ${CLUSTER_NAMESPACE} anywhere.eks.amazonaws.com/paused-
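To confirm the current state of the annotation, you can inspect the cluster resource; a sketch using standard kubectl output options:
kubectl get clusters.anywhere.eks.amazonaws.com ${CLUSTER_NAME} -n ${CLUSTER_NAMESPACE} -o jsonpath='{.metadata.annotations}'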
Upgradeable cluster attributes
Cluster
:
kubernetesVersion
Advanced configuration for rolling upgrade
EKS Anywhere allows an optional configuration to customize the behavior of upgrades.
It allows you to specify two parameters that control the behavior of rolling upgrades:
- maxSurge - The maximum number of machines that can be scheduled above the desired number of machines. When not specified, the current CAPI default of 1 is used.
- maxUnavailable - The maximum number of machines that can be unavailable during the upgrade. When not specified, the current CAPI default of 0 is used.
Example configuration:
upgradeRolloutStrategy:
rollingUpgradeParams:
maxSurge: 1
maxUnavailable: 0 # only configurable for worker nodes
‘upgradeRolloutStrategy’ configuration can be specified separately for control plane and for each worker node group. This template contains an example for control plane under the ‘controlPlaneConfiguration’ section and for worker node group under ‘workerNodeGroupConfigurations’:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 1
endpoint:
host: "10.61.248.209"
machineGroupRef:
kind: TinkerbellMachineConfig
name: my-cluster-name-cp
upgradeRolloutStrategy:
rollingUpgradeParams:
maxSurge: 1
datacenterRef:
kind: TinkerbellDatacenterConfig
name: my-cluster-name
kubernetesVersion: "1.25"
managementCluster:
name: my-cluster-name
workerNodeGroupConfigurations:
- count: 2
machineGroupRef:
kind: TinkerbellMachineConfig
name: my-cluster-name
name: md-0
upgradeRolloutStrategy:
rollingUpgradeParams:
maxSurge: 1
maxUnavailable: 0
---
...
upgradeRolloutStrategy
Configuration parameters for upgrade strategy.
rollingUpgradeParams
This field accepts configuration parameters for customizing rolling upgrade behavior.
maxSurge
Default: 1
This cannot be 0 if maxUnavailable is 0.
The maximum number of machines that can be scheduled above the desired number of machines.
Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.
maxUnavailable
Default: 0
This cannot be 0 if maxSurge is 0.
The maximum number of machines that can be unavailable during the upgrade.
Example: When this is set to n, the old worker node group can be scaled down by n machines immediately when the rolling upgrade starts. Once new machines are ready, the old worker node group can be scaled down further, followed by scaling up the new worker node group, ensuring that the number of machines unavailable at any time during the upgrade never exceeds n.
Rolling upgrades with no additional hardware
When maxSurge is set to 0 and maxUnavailable is set to 1, it allows for a rolling upgrade without need for additional hardware. Use this configuration if your workloads can tolerate node unavailability.
NOTE: This can ONLY be used if unavailability of at most 1 node is acceptable. For single-node clusters, an additional temporary machine is a must. Alternatively, you may recreate the single-node cluster for upgrading and handle data recovery manually.
With this kind of configuration, the rolling upgrade proceeds node by node: each node is fully deprovisioned and deleted before being re-provisioned with the upgraded version and re-joined to the cluster. This means that at any point during the rolling upgrade, there could be one unavailable node.
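Expressed in the rollingUpgradeParams format shown earlier, this no-spare-hardware configuration looks like the following:
upgradeRolloutStrategy:
  rollingUpgradeParams:
    maxSurge: 0
    maxUnavailable: 1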
Troubleshooting
Attempting to upgrade a cluster by more than one minor version will result in the following error.
✅ validate immutable fields
❌ validation failed {"validation": "Upgrade preflight validations", "error": "validation failed with 1 errors: WARNING: version difference between upgrade version (1.21) and server version (1.19) do not meet the supported version increment of +1", "remediation": ""}
Error: failed to upgrade cluster: validations failed
For other errors, see the troubleshooting section.
4.2.3.2 - Upgrade vSphere, CloudStack, Nutanix, or Snow cluster
How to perform a cluster upgrade for vSphere, CloudStack, Nutanix, or Snow cluster
EKS Anywhere provides the command upgrade
, which allows you to upgrade
various aspects of your EKS Anywhere cluster.
When you run eksctl anywhere upgrade cluster -f ./cluster.yaml
, EKS Anywhere runs a set of preflight checks to ensure your cluster is ready to be upgraded.
EKS Anywhere then performs the upgrade, modifying your cluster to match the updated specification.
The upgrade command also upgrades core components of EKS Anywhere and lets the user enjoy the latest features, bug fixes and security patches.
NOTE: If an upgrade fails, it is very important not to delete the Docker containers running the KinD bootstrap cluster. During an upgrade, the bootstrap cluster contains critical EKS Anywhere components. If it is deleted after a failed upgrade, they cannot be recovered.
Minor Version Upgrades
Kubernetes has minor releases three times per year
and EKS Distro follows a similar cadence.
EKS Anywhere will add support for new EKS Distro releases as they are released, and you are advised to upgrade your cluster when possible.
Cluster upgrades are not handled automatically and require administrator action to modify the cluster specification and perform an upgrade.
You are advised to upgrade your clusters in development environments first and verify your workloads and controllers are compatible with the new version.
Cluster upgrades are performed in place using a rolling process (similar to Kubernetes Deployments).
Upgrades can only happen one minor version at a time (e.g. 1.24
-> 1.25
).
Control plane components will be upgraded before worker nodes.
A new VM is created with the new version and then an old VM is removed.
This happens one at a time until all the control plane components have been upgraded.
Core component upgrades
EKS Anywhere upgrade
also supports upgrading the following core components:
- Core CAPI
- CAPI providers
- Cilium CNI plugin
- Cert-manager
- Etcdadm CAPI provider
- EKS Anywhere controllers and CRDs
- GitOps controllers (Flux) - this is an optional component, will be upgraded only if specified
The latest versions of these core EKS Anywhere components are embedded into a bundles manifest that the CLI uses to fetch the latest versions
and image builds needed for each component upgrade.
The command detects both component version changes and new builds of the same versioned component.
If there is a new Kubernetes version that is going to get rolled out, the core components get upgraded before the Kubernetes
version.
Irrespective of a Kubernetes version change, the upgrade command will always upgrade the internal EKS
Anywhere components mentioned above to their latest available versions. All upgrade changes are backwards compatible.
Specifically for the Snow provider, a new Admin instance is needed when upgrading to new versions of EKS Anywhere. See Upgrade EKS Anywhere AMIs in Snowball Edge devices to upgrade and use a new Admin instance in Snow devices. After that, upgrades of other components can be done as described in this document.
Check upgrade components
Before you perform an upgrade, check the current and new versions of components that are ready to upgrade by typing:
Management Cluster
eksctl anywhere upgrade plan cluster -f mgmt-cluster.yaml
Workload Cluster
eksctl anywhere upgrade plan cluster -f workload-cluster.yaml --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
The output should appear similar to the following:
Worker node group name not specified. Defaulting name to md-0.
Warning: The recommended number of control plane nodes is 3 or 5
Worker node group name not specified. Defaulting name to md-0.
Checking new release availability...
NAME CURRENT VERSION NEXT VERSION
EKS-A v0.0.0-dev+build.1000+9886ba8 v0.0.0-dev+build.1105+46598cb
cluster-api v1.0.2+e8c48f5 v1.0.2+1274316
kubeadm v1.0.2+92c6d7e v1.0.2+aa1a03a
vsphere v1.0.1+efb002c v1.0.1+ef26ac1
kubadm v1.0.2+f002eae v1.0.2+f443dcf
etcdadm-bootstrap v1.0.2-rc3+54dcc82 v1.0.0-rc3+df07114
etcdadm-controller v1.0.2-rc3+a817792 v1.0.0-rc3+a310516
To format the output in json, add -o json
to the end of the command line.
To perform a cluster upgrade you can modify your cluster specification kubernetesVersion
field to the desired version.
As an example, to upgrade a cluster with version 1.24 to 1.25 you would change your spec as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: dev
spec:
controlPlaneConfiguration:
count: 1
endpoint:
host: "198.18.99.49"
machineGroupRef:
kind: VSphereMachineConfig
name: dev
...
kubernetesVersion: "1.25"
...
NOTE: If you have a custom machine image for your nodes, you may also need to update your VSphereMachineConfig
with a new template
.
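For reference, a sketch of the relevant field (the machine config name is assumed to match the dev example above, and the template path is hypothetical):
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: dev
spec:
  ...
  template: /MyDatacenter/vm/Templates/ubuntu-2204-kube-v1.25
  ...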
Then run the upgrade cluster command:
Management Cluster
eksctl anywhere upgrade cluster -f mgmt-cluster.yaml
Workload Cluster
eksctl anywhere upgrade cluster -f workload-cluster.yaml --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
This will upgrade the cluster specification (if specified), upgrade the core components to the latest available versions and apply the changes using the provisioner controllers.
Example output:
✅ control plane ready
✅ worker nodes ready
✅ nodes ready
✅ cluster CRDs ready
✅ cluster object present on workload cluster
✅ upgrade cluster kubernetes version increment
✅ validate immutable fields
🎉 all cluster upgrade preflight validations passed
Performing provider setup and validations
Pausing EKS-A cluster controller reconcile
Pausing Flux kustomization
GitOps field not specified, pause flux kustomization skipped
Creating bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Moving cluster management from workload to bootstrap cluster
Upgrading workload cluster
Moving cluster management from bootstrap to workload cluster
Applying new EKS-A cluster resource; resuming reconcile
Resuming EKS-A controller reconciliation
Updating Git Repo with new EKS-A cluster spec
GitOps field not specified, update git repo skipped
Forcing reconcile Git repo with latest commit
GitOps not configured, force reconcile flux git repo skipped
Resuming Flux kustomization
GitOps field not specified, resume flux kustomization skipped
During the upgrade process, EKS Anywhere pauses the cluster controller reconciliation by adding the paused annotation anywhere.eks.amazonaws.com/paused: true
to the EKS Anywhere cluster, provider datacenterconfig and machineconfig resources, before the components upgrade. After upgrade completes, the annotations are removed so that the cluster controller resumes reconciling the cluster.
Though not recommended, you can manually pause the EKS Anywhere cluster controller reconciliation to perform extended maintenance work or interact with Cluster API objects directly. To do it, you can add the paused annotation to the cluster resource:
kubectl annotate clusters.anywhere.eks.amazonaws.com ${CLUSTER_NAME} -n ${CLUSTER_NAMESPACE} anywhere.eks.amazonaws.com/paused=true
After finishing the task, make sure you resume the cluster reconciliation by removing the paused annotation, so that EKS Anywhere cluster controller can continue working as expected.
kubectl annotate clusters.anywhere.eks.amazonaws.com ${CLUSTER_NAME} -n ${CLUSTER_NAMESPACE} anywhere.eks.amazonaws.com/paused-
Upgradeable Cluster Attributes
EKS Anywhere upgrade
supports upgrading more than just the kubernetesVersion
,
allowing you to upgrade a number of fields simultaneously with the same procedure.
Upgradeable Attributes
Cluster
:
kubernetesVersion
controlPlaneConfig.count
controlPlaneConfigurations.machineGroupRef.name
workerNodeGroupConfigurations.count
workerNodeGroupConfigurations.machineGroupRef.name
etcdConfiguration.externalConfiguration.machineGroupRef.name
identityProviderRefs
(Only for kind:OIDCConfig
, kind:AWSIamConfig
is immutable)
gitOpsRef
(Once set, you can’t change or delete the field’s content later)
VSphereMachineConfig
:
datastore
diskGiB
folder
memoryMiB
numCPUs
resourcePool
template
users
NutanixMachineConfig
:
vcpusPerSocket
vcpuSockets
memorySize
image
cluster
subnet
systemDiskSize
SnowMachineConfig
:
amiID
instanceType
physicalNetworkConnector
sshKeyName
devices
containersVolume
osFamily
network
OIDCConfig
:
clientID
groupsClaim
groupsPrefix
issuerUrl
requiredClaims.claim
requiredClaims.value
usernameClaim
usernamePrefix
AWSIamConfig
:
mapRoles
mapUsers
EKS Anywhere upgrade
also supports adding more worker node groups post-creation.
To add more worker node groups, modify your cluster config file to define the additional group(s).
Example:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: dev
spec:
controlPlaneConfiguration:
...
workerNodeGroupConfigurations:
- count: 2
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-machines
name: md-0
- count: 2
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-machines
name: md-1
...
Worker node groups can use the same machineGroupRef as previous groups, or you can define a new machine configuration for your new group.
Resume upgrade after failure
EKS Anywhere supports re-running the upgrade
command post-failure as an experimental feature.
If the upgrade
command fails, the user can manually fix the issue (when applicable) and simply rerun the same command. At this point, the CLI will skip the completed tasks, restore the state of the operation, and resume the upgrade process.
The completed tasks are stored in the generated
folder as a file named <clusterName>-checkpoint.yaml
.
This feature is experimental. To enable this feature, export the following environment variable:
export CHECKPOINT_ENABLED=true
Troubleshooting
Attempting to upgrade a cluster by more than one minor version will result in the following error.
✅ validate immutable fields
❌ validation failed {"validation": "Upgrade preflight validations", "error": "validation failed with 1 errors: WARNING: version difference between upgrade version (1.21) and server version (1.19) do not meet the supported version increment of +1", "remediation": ""}
Error: failed to upgrade cluster: validations failed
For other errors, see the troubleshooting section.
4.2.4 - Etcd Backup and Restore
How to backup and restore an EKS Anywhere cluster
NOTE: External etcd topology is supported for vSphere and CloudStack clusters, but not yet for Bare Metal or Nutanix clusters.
This page contains steps for backing up a cluster by taking an etcd snapshot, and restoring the cluster from a snapshot. These steps are for an EKS Anywhere cluster provisioned using the external etcd topology (selected by default) and Ubuntu OVAs.
Use case
EKS Anywhere clusters use etcd as the backing store. Taking a snapshot of etcd backs up the entire cluster data. This can later be used to restore a cluster back to an earlier state if required. Etcd backups can be taken prior to a cluster upgrade, so if the upgrade doesn’t go as planned you can restore from the backup.
Backup
Etcd offers a built-in snapshot mechanism. You can take a snapshot using the etcdctl snapshot save
command by following the steps given below.
- Login to any one of the etcd VMs
ssh -i $PRIV_KEY ec2-user@$ETCD_VM_IP
- Run the etcdctl command to take a snapshot with the following steps
sudo su
source /etc/etcd/etcdctl.env
etcdctl snapshot save snapshot.db
chown ec2-user snapshot.db
- Exit the VM. Copy the snapshot from the VM to your local/admin setup where you can save snapshots in a secure place. Before running scp, make sure you don’t already have a snapshot file saved by the same name locally.
scp -i $PRIV_KEY ec2-user@$ETCD_VM_IP:/home/ec2-user/snapshot.db .
NOTE: This snapshot file contains all information stored in the cluster, so make sure you save it securely (encrypt it).
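Optionally, you can sanity-check the copied snapshot and encrypt it at rest; a sketch, assuming etcdctl and gpg are available on your admin machine:
# print snapshot metadata (hash, revision, total keys, size)
etcdctl snapshot status snapshot.db --write-out=table
# encrypt the snapshot with a passphrase before archiving it
gpg --symmetric --cipher-algo AES256 snapshot.db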
Restore
Restoring etcd is a 2-part process. The first part is restoring etcd using the snapshot, creating a new data-dir for etcd. The second part is replacing the current etcd data-dir with the one generated after restore. During etcd data-dir replacement, we cannot have any kube-apiserver instances running in the cluster. So we will first stop all instances of kube-apiserver and other controlplane components using the following steps for every controlplane VM:
Pausing Etcdadm controller reconcile
During restore, it is required to pause the Etcdadm controller reconcile for the target cluster (whether it is management or workload cluster). To do that, you need to add a cluster.x-k8s.io/paused
annotation to the target cluster’s etcdadmclusters
resource. For example,
kubectl annotate etcdadmclusters workload-cluster-1-etcd cluster.x-k8s.io/paused=true -n eksa-system --kubeconfig mgmt-cluster.kubeconfig
Stopping the controlplane components
- Login to a controlplane VM
ssh -i $PRIV_KEY ec2-user@$CONTROLPLANE_VM_IP
- Stop controlplane components by moving the static pod manifests under a temp directory:
sudo su
mkdir temp-manifests
mv /etc/kubernetes/manifests/*.yaml temp-manifests
- Repeat these steps for all other controlplane VMs
After this you can restore etcd from a saved snapshot using the etcdctl snapshot restore command, following the steps given below.
Restoring from the snapshot
- The snapshot file should be made available in every etcd VM of the cluster. You can copy it to each etcd VM using this command:
scp -i $PRIV_KEY snapshot.db ec2-user@$ETCD_VM_IP:/home/ec2-user
- To run the etcdctl snapshot restore command, you need to provide the following configuration parameters:
- name: This is the name of the etcd member. The value of this parameter should match the value used while starting the member. This can be obtained by running:
export ETCD_NAME=$(cat /etc/etcd/etcd.env | grep ETCD_NAME | awk -F'=' '{print $2}')
- initial-advertise-peer-urls: This is the advertise peer URL with which this etcd member was configured. It should be the exact value with which this etcd member was started. This can be obtained by running:
export ETCD_INITIAL_ADVERTISE_PEER_URLS=$(cat /etc/etcd/etcd.env | grep ETCD_INITIAL_ADVERTISE_PEER_URLS | awk -F'=' '{print $2}')
- initial-cluster: This should be a comma-separated mapping of etcd member name and its peer URL. For this, get the
ETCD_NAME
and ETCD_INITIAL_ADVERTISE_PEER_URLS
values for each member and join them. And then use this exact value for all etcd VMs. For example, for a 3 member etcd cluster this is what the value would look like (The command below cannot be run directly without substituting the required variables and is meant to be an example)
export ETCD_INITIAL_CLUSTER=${ETCD_NAME_1}=${ETCD_INITIAL_ADVERTISE_PEER_URLS_1},${ETCD_NAME_2}=${ETCD_INITIAL_ADVERTISE_PEER_URLS_2},${ETCD_NAME_3}=${ETCD_INITIAL_ADVERTISE_PEER_URLS_3}
- initial-cluster-token: Set this to a unique value and use the same value for all etcd members of the cluster. It can be any value such as
etcd-cluster-1
as long as it hasn’t been used before.
- Gather the required env vars for the restore command
cat <<EOF >> restore.env
export ETCD_NAME=$(cat /etc/etcd/etcd.env | grep ETCD_NAME | awk -F'=' '{print $2}')
export ETCD_INITIAL_ADVERTISE_PEER_URLS=$(cat /etc/etcd/etcd.env | grep ETCD_INITIAL_ADVERTISE_PEER_URLS | awk -F'=' '{print $2}')
EOF
cat /etc/etcd/etcdctl.env >> restore.env
- Make sure you form the correct
ETCD_INITIAL_CLUSTER
value using all etcd members, and set it as an env var in the restore.env file created in the above step.
- Once you have obtained all the right values, run the following commands to restore etcd replacing the required values:
sudo su
source restore.env
etcdctl snapshot restore snapshot.db --name=${ETCD_NAME} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=etcd-cluster-1 --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS}
- This is going to create a new data-dir for the restored contents under a new directory
${ETCD_NAME}.etcd
. To start using this, restart etcd with the new data-dir with the following steps:
systemctl stop etcd.service
mv /var/lib/etcd/member /var/lib/etcd/member.bak
mv ${ETCD_NAME}.etcd/member /var/lib/etcd/
- Perform this directory swap on all etcd VMs, and then start etcd again on those VMs
systemctl start etcd.service
NOTE: Until the etcd process is started on all VMs, it might appear stuck on the VMs where it was started first, but this should be temporary.
Starting the controlplane components
- Login to a controlplane VM
ssh -i $PRIV_KEY ec2-user@$CONTROLPLANE_VM_IP
- Start the controlplane components by moving back the static pod manifests from under the temp directory to the /etc/kubernetes/manifests directory:
mv temp-manifests/*.yaml /etc/kubernetes/manifests
- Repeat these steps for all other controlplane VMs
- It may take a few minutes for the kube-apiserver and the other components to get restarted. After this you should be able to access all objects present in the cluster at the time the backup was taken.
Resuming Etcdadm controller reconcile
Resume Etcdadm controller reconcile for the target cluster by removing the cluster.x-k8s.io/paused
annotation in the target cluster’s etcdadmclusters
resource. For example,
kubectl annotate etcdadmclusters workload-cluster-1-etcd cluster.x-k8s.io/paused- -n eksa-system
4.2.5 - Verify cluster
How to verify an EKS Anywhere cluster is running properly
To verify that a cluster control plane is up and running, use the kubectl
command to show that the control plane pods are all running.
kubectl get po -A -l control-plane=controller-manager
NAMESPACE NAME READY STATUS RESTARTS AGE
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-57b99f579f-sd85g 2/2 Running 0 47m
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-79cdf98fb8-ll498 2/2 Running 0 47m
capi-system capi-controller-manager-59f4547955-2ks8t 2/2 Running 0 47m
capi-webhook-system capi-controller-manager-bb4dc9878-2j8mg 2/2 Running 0 47m
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6b4cb6f656-qfppd 2/2 Running 0 47m
capi-webhook-system capi-kubeadm-control-plane-controller-manager-bf7878ffc-rgsm8 2/2 Running 0 47m
capi-webhook-system capv-controller-manager-5668dbcd5-v5szb 2/2 Running 0 47m
capv-system capv-controller-manager-584886b7bd-f66hs 2/2 Running 0 47m
You may also check the status of the cluster control plane resource directly.
This can be especially useful to verify clusters with multiple control plane nodes after an upgrade.
kubectl get kubeadmcontrolplanes.controlplane.cluster.x-k8s.io
NAME INITIALIZED API SERVER AVAILABLE VERSION REPLICAS READY UPDATED UNAVAILABLE
supportbundletestcluster true true v1.20.7-eks-1-20-6 1 1 1
To verify that the expected number of cluster worker nodes are up and running, use the kubectl
command to show that nodes are Ready
.
This will confirm that the expected number of worker nodes are present.
Worker nodes are named using the cluster name followed by the worker node group name (example: my-cluster-md-0)
kubectl get nodes
NAME STATUS ROLES AGE VERSION
supportbundletestcluster-md-0-55bb5ccd-mrcf9 Ready <none> 4m v1.20.7-eks-1-20-6
supportbundletestcluster-md-0-55bb5ccd-zrh97 Ready <none> 4m v1.20.7-eks-1-20-6
supportbundletestcluster-mdrwf Ready control-plane,master 5m v1.20.7-eks-1-20-6
To test a workload in your cluster you can try deploying the hello-eks-anywhere
.
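For example, a quick check using the test application manifest referenced elsewhere in this documentation:
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
# the test deployment's pod should reach the Running state
kubectl get pods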
4.2.6 - Add cluster integrations
How to add integrations to an EKS Anywhere cluster
EKS Anywhere offers AWS support for certain third-party vendor components,
namely Ubuntu TLS, Cilium, and Flux.
It also provides flexibility for you to integrate with your choice of tools in other areas.
Below is a list of example third-party tools for your consideration.
For a full list of partner integration options, please visit Amazon EKS Anywhere Partner page
.
Note
The solutions listed on this page have not been tested by AWS and are not covered by the EKS Anywhere Support Subscription.
4.2.7 - Reboot nodes
How to properly reboot a node in an EKS Anywhere cluster
If you need to reboot a node in your cluster for maintenance or any other reason, performing the following steps will help prevent possible disruption of services on those nodes:
Warning
Rebooting a cluster node as described here is good for all nodes, but is critically important when rebooting a Bottlerocket node running the boots
service on a Bare Metal cluster.
If it does go down while running the boots
service, the Bottlerocket node will not be able to boot again until the boots
service is restored on another machine. This is because Bottlerocket must get its address from a DHCP service.
-
Cordon the node so no further workloads are scheduled to run on it:
kubectl cordon <node-name>
-
Drain the node of all current workloads (a variation with additional flags is shown after this list):
kubectl drain <node-name>
-
Shut down. Using the appropriate method for your provider, shut down the node.
-
Perform system maintenance or any other task you need to do on the node, then boot it back up.
-
Uncordon the node so that it can begin receiving workloads again.
kubectl uncordon <node-name>
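If the node runs DaemonSets or pods using emptyDir volumes, the plain drain command above may refuse to evict some pods. A common variation is the following (a sketch; adjust the flags to what your workloads can tolerate):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data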
4.2.8 - Connect cluster to console
Connect a cluster to the EKS console
The AWS EKS Connector lets you connect your EKS Anywhere cluster to the AWS EKS console, where you can see your EKS Anywhere cluster, its configuration, workloads, and their status.
EKS Connector is a software agent that can be deployed on your EKS Anywhere cluster, enabling the cluster to register with the EKS console.
Visit AWS EKS Connector
for details.
4.2.9 - License cluster
How to license your cluster.
If you are licensing an existing cluster, apply the following secret to your cluster (replacing my-license-here
with your license):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: eksa-license
namespace: eksa-system
stringData:
license: "my-license-here"
type: Opaque
EOF
4.2.10 - Multus CNI plugin configuration
EKS Anywhere configuration for Multus CNI plugin
NOTE: Currently, Multus support is only available with the EKS Anywhere Bare Metal provider.
The vSphere and CloudStack providers do not have multi-network support for cluster machines.
Once multiple network support is added to those clusters, Multus CNI can be supported.
Multus CNI
is a container network interface plugin for Kubernetes that enables attaching multiple network interfaces to pods.
In Kubernetes, each pod has only one network interface by default, other than local loopback.
With Multus, you can create multi-homed pods that have multiple interfaces.
Multus acts as a ‘meta’ plugin that can call other CNI plugins to configure additional interfaces.
Pre-Requisites
Given that Multus CNI is used to create pods with multiple network interfaces, the cluster machines that these pods run on need to have multiple network interfaces attached and configured.
The interfaces on multi-homed pods need to map to these interfaces on the machines.
For Bare Metal clusters using the Tinkerbell provider, the cluster machines need to have multiple network interfaces cabled in and appropriate network configuration put in place during machine provisioning.
Overview of Multus setup
The following diagrams show the result of two applications (app1 and app2) running in pods that use the Multus plugin to communicate over two network interfaces (eth0 and net1) from within the pods.
The Multus plugin uses two network interfaces on the worker node (eth0 and eth1) to provide communications outside of the node.

Follow the procedure below to set up Multus as illustrated in the previous diagrams.
Deploying Multus using a DaemonSet will spin up pods that install the Multus binary and configure Multus for use on every node in the cluster.
Here are the steps for doing that.
-
Clone the Multus CNI repo:
git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni
-
Apply Multus daemonset to your EKS Anywhere cluster:
kubectl apply -f ./deployments/multus-daemonset-thick-plugin.yml
-
Verify that you have Multus pods running:
kubectl get pods --all-namespaces | grep -i multus
-
Check that Multus is running:
kubectl get pods -A | grep multus
Output:
kube-system kube-multus-ds-bmfjs 1/1 Running 0 3d1h
kube-system kube-multus-ds-fk2sk 1/1 Running 0 3d1h
Create Network Attachment Definition
You need to create a Network Attachment Definition for the CNI you wish to use as the plugin for the additional interface.
You can verify that your intended CNI plugin is supported by ensuring that the binary corresponding to that CNI plugin is present in the node’s /opt/cni/bin
directory.
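For example, to confirm that the ipvlan plugin used later on this page is available on a node (a sketch; run it on the node itself, e.g. over SSH):
ls /opt/cni/bin | grep -i ipvlan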
Below is an example of a Network Attachment Definition yaml:
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: ipvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "ipvlan",
"master": "eth1",
"mode": "l3",
"ipam": {
"type": "host-local",
"subnet": "198.17.0.0/24",
"rangeStart": "198.17.0.200",
"rangeEnd": "198.17.0.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "198.17.0.1"
}
}'
EOF
Note that eth1
is used as the master parameter.
This master parameter should match the interface name on the hosts in your cluster.
Verify the configuration
Type the following to verify the configuration you created:
kubectl get network-attachment-definitions
kubectl describe network-attachment-definitions ipvlan-conf
Deploy sample applications with network attachment
-
Create a sample application 1 (app1) with network annotation created in the previous steps:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app1
annotations:
k8s.v1.cni.cncf.io/networks: ipvlan-conf
spec:
containers:
- name: app1
command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
image: alpine
EOF
-
Create a sample application 2 (app2) with the network annotation created in the previous step:
cat <<EOF | kubectl apply -f - kube
apiVersion: v1
kind: Pod
metadata:
name: app2
annotations:
k8s.v1.cni.cncf.io/networks: ipvlan-conf
spec:
containers:
- name: app2
command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
image: alpine
EOF
-
Verify that the additional interfaces were created on these application pods using the defined network attachment:
kubectl exec -it app1 -- ip a
Output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
*2: net1@if3: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:50:56:9a:84:3b brd ff:ff:ff:ff:ff:ff
inet 198.17.0.200/24 brd 198.17.0.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::50:5600:19a:843b/64 scope link
valid_lft forever preferred_lft forever*
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 0a:9e:a0:b4:21:05 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.218/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::89e:a0ff:feb4:2105/64 scope link
valid_lft forever preferred_lft forever
kubectl exec -it app2 -- ip a
Output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
*2: net1@if3: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:50:56:9a:84:3b brd ff:ff:ff:ff:ff:ff
inet 198.17.0.201/24 brd 198.17.0.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::50:5600:29a:843b/64 scope link
valid_lft forever preferred_lft forever*
33: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether b2:42:0a:67:c0:48 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.210/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::b042:aff:fe67:c048/64 scope link
valid_lft forever preferred_lft forever
Note that both pods got the new interface net1. Also, the additional network interface on each pod got assigned an IP address out of the range specified by the Network Attachment Definition.
-
Test the network connectivity across these pods for Multus interfaces:
kubectl exec -it app1 -- ping -I net1 198.17.0.201
Output:
PING 198.17.0.201 (198.17.0.201): 56 data bytes
64 bytes from 198.17.0.201: seq=0 ttl=64 time=0.074 ms
64 bytes from 198.17.0.201: seq=1 ttl=64 time=0.077 ms
64 bytes from 198.17.0.201: seq=2 ttl=64 time=0.078 ms
64 bytes from 198.17.0.201: seq=3 ttl=64 time=0.077 ms
kubectl exec -it app2 -- ping -I net1 198.17.0.200
Output:
PING 198.17.0.200 (198.17.0.200): 56 data bytes
64 bytes from 198.17.0.200: seq=0 ttl=64 time=0.074 ms
64 bytes from 198.17.0.200: seq=1 ttl=64 time=0.077 ms
64 bytes from 198.17.0.200: seq=2 ttl=64 time=0.078 ms
64 bytes from 198.17.0.200: seq=3 ttl=64 time=0.077 ms
4.2.11 - Authenticate cluster with AWS IAM Authenticator
Configure AWS IAM Authenticator to authenticate user access to the cluster
AWS IAM Authenticator Support (optional)
EKS Anywhere supports configuring AWS IAM Authenticator
as an authentication provider for clusters.
When you create a cluster with IAM Authenticator enabled, EKS Anywhere
- Installs
aws-iam-authenticator
server as a DaemonSet on the workload cluster.
- Configures the Kubernetes API Server to communicate with the IAM Authenticator using a token authentication webhook
.
- Creates the necessary ConfigMaps based on user options.
Note
Enabling IAM Authenticator needs to be done during cluster creation.
Create IAM Authenticator enabled cluster
Generate your cluster configuration and add the necessary IAM Authenticator configuration. For a full spec reference check AWSIamConfig
.
Create an EKS Anywhere cluster as follows:
CLUSTER_NAME=my-cluster-name
eksctl anywhere create cluster -f ${CLUSTER_NAME}.yaml
Example AWSIamConfig configuration
This example uses a region in the default aws partition and EKSConfigMap
as backendMode
. Also, the IAM ARNs are mapped to the kubernetes system:masters
group.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
# IAM Authenticator
identityProviderRefs:
- kind: AWSIamConfig
name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
name: aws-iam-auth-config
spec:
awsRegion: us-west-1
backendMode:
- EKSConfigMap
mapRoles:
- roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
username: myKubernetesUsername
groups:
- system:masters
mapUsers:
- userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
username: myKubernetesUsername
groups:
- system:masters
partition: aws
Note
When using backend mode
CRD
, the
mapRoles
and
mapUsers
are not required. For more details on configuring CRD mode, refer to
CRD.
Authenticating with IAM Authenticator
After your cluster is created you may now use the mapped IAM ARNs to authenticate to the cluster.
EKS Anywhere generates a KUBECONFIG
file in your local directory that uses aws-iam-authenticator client
to authenticate with the cluster. The file can be found at
${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-aws.kubeconfig
Steps
-
Ensure the IAM role/user ARN mapped in the cluster is configured on the local machine from which you are trying to access the cluster.
-
Install the aws-iam-authenticator client
binary on the local machine.
- We recommend installing the binary referenced in the latest
release manifest
of the kubernetes version used when creating the cluster.
- The below commands can be used to fetch the installation uri for clusters created with
1.21
kubernetes version and OS linux
.
CLUSTER_NAME=my-cluster-name
KUBERNETES_VERSION=1.21
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
EKS_D_MANIFEST_URL=$(kubectl get bundles $CLUSTER_NAME -o jsonpath="{.spec.versionsBundles[?(@.kubeVersion==\"$KUBERNETES_VERSION\")].eksD.manifestUrl}")
OS=linux
curl -fsSL $EKS_D_MANIFEST_URL | yq e '.status.components[] | select(.name=="aws-iam-authenticator") | .assets[] | select(.os == '"\"$OS\""' and .type == "Archive") | .archive.uri' -
-
Export the generated IAM Authenticator based KUBECONFIG
file.
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-aws.kubeconfig
-
Run kubectl
commands to check cluster access.
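For example, a minimal check (a sketch; it assumes the mapped IAM identity belongs to the system:masters group as configured above):
# listing pods across namespaces confirms the webhook-backed authentication works
kubectl get pods -A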
Modify IAM Authenticator mappings
EKS Anywhere supports modifying IAM ARNs that are mapped on the cluster. The mappings can be modified by either running the upgrade cluster
command or using GitOps
.
upgrade command
The mapRoles
and mapUsers
lists in AWSIamConfig
can be modified when running the upgrade cluster
command from EKS Anywhere.
As an example, let’s add another IAM user to the above example configuration.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
name: aws-iam-auth-config
spec:
...
mapUsers:
- userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
username: myKubernetesUsername
groups:
- system:masters
- userARN: arn:aws:iam::XXXXXXXXXXXX:user/anotherUser
username: anotherKubernetesUsername
partition: aws
and then run the upgrade command
CLUSTER_NAME=my-cluster-name
eksctl anywhere upgrade cluster -f ${CLUSTER_NAME}.yaml
EKS Anywhere now updates the role mappings for IAM Authenticator in the cluster and the new user gains access to the cluster.
GitOps
If the cluster created has GitOps configured, then the mapRoles
and mapUsers
list in AWSIamConfig
can be modified by the GitOps controller. For GitOps configuration details refer to Manage Cluster with GitOps
.
Note
GitOps support for the AWSIamConfig
is currently only on management or self-managed clusters.
- Clone your git repo and modify the cluster specification.
The default path for the cluster file is:
clusters/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml
- Modify the
AWSIamConfig
object and add to the mapRoles
and mapUsers
object lists.
- Commit the file to your git repository
git add eksa-cluster.yaml
git commit -m 'Adding IAM Authenticator access ARNs'
git push origin main
The EKS Anywhere GitOps controller now updates the role mappings for IAM Authenticator in the cluster and the new users gain access to the cluster.
4.2.12 - Manage cluster with GitOps
Use Flux to manage clusters with GitOps
NOTE: GitOps support is available for vSphere clusters, but is not yet available for Bare Metal clusters
GitOps Support (optional)
EKS Anywhere supports a GitOps
workflow for the management of your cluster.
When you create a cluster with GitOps enabled, EKS Anywhere will automatically commit your cluster configuration to the provided GitHub repository and install a GitOps toolkit on your cluster which watches that committed configuration file.
You can then manage the scale of the cluster by making changes to the version controlled cluster configuration file and committing the changes.
Once a change has been detected by the GitOps controller running in your cluster, the scale of the cluster will be adjusted to match the committed configuration file.
If you’d like to learn more about GitOps, and the associated best practices, check out this introduction from Weaveworks
.
NOTE: Installing a GitOps controller can be done during cluster creation or through upgrade.
In the event that GitOps installation fails, EKS Anywhere cluster creation will continue.
Supported Cluster Properties
Currently, you can manage a subset of cluster properties with GitOps:
Management Cluster
Cluster
:
workerNodeGroupConfigurations.count
workerNodeGroupConfigurations.machineGroupRef.name
WorkerNodes VSphereMachineConfig
:
datastore
diskGiB
folder
memoryMiB
numCPUs
resourcePool
template
users
Workload Cluster
Cluster
:
kubernetesVersion
controlPlaneConfiguration.count
controlPlaneConfiguration.machineGroupRef.name
workerNodeGroupConfigurations.count
workerNodeGroupConfigurations.machineGroupRef.name
identityProviderRefs
(Only for kind:OIDCConfig
, kind:AWSIamConfig
is immutable)
ControlPlane / Etcd / WorkerNodes VSphereMachineConfig
:
datastore
diskGiB
folder
memoryMiB
numCPUs
resourcePool
template
users
OIDCConfig
:
clientID
groupsClaim
groupsPrefix
issuerUrl
requiredClaims.claim
requiredClaims.value
usernameClaim
usernamePrefix
Any other changes to the cluster configuration in the git repository will be ignored.
If an immutable field has been changed in a Git repository, there are two ways to find the error message:
- If a notification webhook is set up, check the error message in the notification channel.
- Check the Flux Kustomization Controller log:
kubectl logs -f -n flux-system kustomize-controller-******
for error message containing text similar to Invalid value: 1: field is immutable
Getting Started with EKS Anywhere GitOps with Github
In order to use GitOps to manage cluster scaling, you need a couple of things:
Create a GitHub Personal Access Token
Create a Personal Access Token (PAT)
to access your provided GitHub repository.
It must be scoped for all repo
permissions.
NOTE: GitOps configuration only works with hosted github.com and will not work on self-hosted GitHub Enterprise instances.
This PAT should have at least the following permissions:

NOTE: The PAT must belong to the owner
of the repository
or, if using an organization as the owner
, the creator of the PAT
must have repo permission in that organization.
You need to set your PAT as the environment variable $EKSA_GITHUB_TOKEN to use it during cluster creation:
export EKSA_GITHUB_TOKEN=ghp_MyValidPersonalAccessTokenWithRepoPermissions
Create GitOps configuration repo
If you have an existing repo you can set that as your repository name in the configuration.
If you specify a repo in your FluxConfig
which does not exist, EKS Anywhere will create it for you.
If you would like to create a new repo you can click here
to create a new repo.
If your repository contains multiple cluster specification files, store them in sub-folders and specify the configuration path
in your cluster specification.
In order to accommodate the management cluster feature, the CLI will now structure the repo directory following a new convention:
clusters
└── management-cluster
├── flux-system
│ └── ...
├── management-cluster
│ └── eksa-system
│ └── eksa-cluster.yaml
│ └── kustomization.yaml
├── workload-cluster-1
│ └── eksa-system
│ └── eksa-cluster.yaml
└── workload-cluster-2
└── eksa-system
└── eksa-cluster.yaml
By default, Flux kustomization reconciles at the management cluster’s root level (./clusters/management-cluster
), so both the management cluster and all the workload clusters it manages are synced.
Example GitOps cluster configuration for Github
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: mynewgitopscluster
spec:
... # collapsed cluster spec fields
# Below added for gitops support
gitOpsRef:
kind: FluxConfig
name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
name: my-cluster-name
spec:
github:
personal: true
repository: mygithubrepository
owner: mygithubusername
Create a GitOps enabled cluster
Generate your cluster configuration and add the GitOps configuration.
For a full spec reference see the Cluster Spec reference
.
NOTE: After your cluster has been created the cluster configuration will automatically be committed to your git repo.
-
Create an EKS Anywhere cluster with GitOps enabled.
CLUSTER_NAME=gitops
eksctl anywhere create cluster -f ${CLUSTER_NAME}.yaml
Enable GitOps in an existing cluster
You can also install Flux and enable GitOps in an existing cluster by running the upgrade command with updated cluster configuration.
For a full spec reference see the Cluster Spec reference
.
-
Upgrade an EKS Anywhere cluster with GitOps enabled.
CLUSTER_NAME=gitops
eksctl anywhere upgrade cluster -f ${CLUSTER_NAME}.yaml
Test GitOps controller
After your cluster has been created, you can test the GitOps controller by modifying the cluster specification.
-
Clone your git repo and modify the cluster specification.
The default path for the cluster file is:
clusters/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml
-
Modify the workerNodeGroupsConfigurations[0].count
field with your desired changes.
-
Commit the file to your git repository
git add eksa-cluster.yaml
git commit -m 'Scaling nodes for test'
git push origin main
-
The Flux controller will automatically make the required changes.
If you updated your node count, you can check that the change has been applied to the nodes.
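For example (a sketch; it assumes your KUBECONFIG points at the updated cluster):
# the node count should converge to the committed workerNodeGroupConfigurations count
kubectl get nodes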
Getting Started with EKS Anywhere GitOps with any Git source
You can configure EKS Anywhere to use a generic git repository as the source of truth for GitOps by providing a FluxConfig
with a git
configuration.
EKS Anywhere requires a valid SSH Known Hosts file and SSH Private key in order to connect to your repository and bootstrap Flux.
Create a Git repository for use by EKS Anywhere and Flux
When using the git
provider, EKS Anywhere requires that the configuration repository be pre-initialized.
You may re-use an existing repo or use the same repo for multiple management clusters.
Create the repository through your git provider and initialize it with a README.md
documenting the purpose of the repository.
Create a Private Key for use by EKS Anywhere and Flux
EKS Anywhere requires a private key to authenticate to your git repository, push the cluster configuration, and configure Flux for ongoing management and monitoring of that configuration.
The private key should have permissions to read and write from the repository in question.
It is recommended that you create a new private key for use exclusively by EKS Anywhere.
You can use ssh-keygen
to generate a new key.
ssh-keygen -t ecdsa -C "my_email@example.com"
Please consult the documentation for your git provider to determine how to add your corresponding public key; for example, if using Github enterprise, you can find the documentation for adding a public key to your github account here
.
Add your private key to your SSH agent on your management machine
When using a generic git provider, EKS Anywhere requires that your management machine has a running SSH agent and the private key be added to that SSH agent.
You can start an SSH agent and add your private key by executing the following in your current session:
eval "$(ssh-agent -s)" && ssh-add $EKSA_GIT_PRIVATE_KEY
Create an SSH Known Hosts file for use by EKS Anywhere and Flux
EKS Anywhere needs an SSH known hosts file to verify the identity of the remote git host.
A path to a valid known hosts file must be provided to the EKS Anywhere command line via the environment variable EKSA_GIT_KNOWN_HOSTS
.
For example, if you have a known hosts file at /home/myUser/.ssh/known_hosts
that you want EKS Anywhere to use, set the environment variable EKSA_GIT_KNOWN_HOSTS
to the path to that file, /home/myUser/.ssh/known_hosts
.
export EKSA_GIT_KNOWN_HOSTS=/home/myUser/.ssh/known_hosts
While you can use your pre-existing SSH known hosts file, it is recommended that you generate a new known hosts file for use by EKS Anywhere that contains only the known-hosts entries required for your git host and key type.
For example, if you wanted to generate a known hosts file for a git server located at example.com
with key type ecdsa
, you can use the OpenSSH utility ssh-keyscan
:
ssh-keyscan -t ecdsa example.com >> my_eksa_known_hosts
This will generate a known hosts file which contains only the entry necessary to verify the identity of example.com when using an ecdsa
based private key file.
Example FluxConfig cluster configuration for a generic git provider
For a full spec reference see the Cluster Spec reference
.
NOTE: The repositoryUrl
value is of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git
. This may differ from the default SSH URL given by your provider. For example, the github.com user interface provides an SSH URL containing a :
before the repository owner, rather than a /
. Make sure to replace this :
with a /
, if present.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: mynewgitopscluster
spec:
... # collapsed cluster spec fields
# Below added for gitops support
gitOpsRef:
kind: FluxConfig
name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
name: my-cluster-name
spec:
git:
repositoryUrl: ssh://git@provider.com/myAccount/myClusterGitopsRepo.git
sshKeyAlgorithm: ecdsa
Manage separate workload clusters using Gitops
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters via Gitops.
Prerequisites
-
An existing EKS Anywhere cluster with Gitops enabled.
If your existing cluster does not have Gitops installed, see Enable Gitops in an existing cluster.
.
-
A cluster configuration file for your new workload cluster.
Create cluster using Gitops
-
Clone your git repo and add the new cluster specification.
Be sure to follow the directory structure defined here
:
clusters/<management-cluster-name>/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml
NOTE: Specify the namespace
for all EKS Anywhere objects when you are using GitOps to create new workload clusters (even for the default
namespace, use namespace: default
on those objects).
Ensure workload cluster object names are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster.
Make sure there is a kustomization.yaml
file under the namespace directory for the management cluster. Creating a Gitops enabled management cluster with eksctl
should create the kustomization.yaml
file automatically.
-
Commit the file to your git repository.
git add clusters/<management-cluster-name>/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml
git commit -m 'Creating new workload cluster'
git push origin main
-
The Flux controller will automatically make the required changes.
You can list the workload clusters managed by the management cluster.
export KUBECONFIG=${PWD}/${MGMT_CLUSTER_NAME}/${MGMT_CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl get clusters
-
The kubeconfig for your new cluster is stored as a secret on the management cluster.
You can get credentials and run the test application on your new workload cluster as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Upgrade cluster using Gitops
-
To upgrade the cluster using Gitops, modify the workload cluster yaml file with the desired changes.
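For example, to scale the worker nodes you might change the count field of a worker node group in the cluster spec (a sketch; the machine config and node group names below are placeholders):
spec:
  workerNodeGroupConfigurations:
  - count: 3
    machineGroupRef:
      kind: VSphereMachineConfig
      name: my-cluster-machines
    name: md-0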
-
Commit the file to your git repository.
git add eksa-cluster.yaml
git commit -m 'Scaling nodes on new workload cluster'
git push origin main
Delete cluster using Gitops
- To delete the cluster using Gitops, delete the workload cluster yaml file from your repository and commit those changes.
git rm eksa-cluster.yaml
git commit -m 'Deleting workload cluster'
git push origin main
4.2.13 - Manage cluster with Terraform
Use Terraform to manage EKS Anywhere Clusters
NOTE: Support for using Terraform to manage and modify an EKS Anywhere cluster is available for vSphere, Snow and Nutanix clusters, but not yet for Bare Metal or CloudStack clusters.
This guide explains how you can use Terraform to manage and modify an EKS Anywhere cluster.
The guide is meant for illustrative purposes and is not a definitive approach to building production systems with Terraform and EKS Anywhere.
At its heart, EKS Anywhere is a set of Kubernetes CRDs, which define an EKS Anywhere cluster,
and a controller, which moves the cluster state to match these definitions.
These CRDs, and the EKS-A controller, live on the management cluster or
on a self-managed cluster.
We can manage a subset of the fields in the EKS Anywhere CRDs with any tool that can interact with the Kubernetes API, like kubectl
or, in this case, the Terraform Kubernetes provider.
In this guide, we’ll show you how to import your EKS Anywhere cluster into Terraform state and
how to scale your EKS Anywhere worker nodes using the Terraform Kubernetes provider.
Prerequisites
-
An existing EKS Anywhere cluster
-
the latest version of Terraform
-
the latest version of tfk8s
, a tool for converting Kubernetes manifest files to Terraform HCL
Guide
- Create an EKS-A management cluster, or a self-managed stand-alone cluster.
-
Set up the Terraform Kubernetes provider
Make sure your KUBECONFIG environment variable is set
export KUBECONFIG=/path/to/my/kubeconfig.kubeconfig
Set an environment variable with your cluster name:
export MY_EKSA_CLUSTER="myClusterName"
cat << EOF > ./provider.tf
provider "kubernetes" {
  config_path = "${KUBECONFIG}"
}
EOF
-
Get tfk8s
and use it to convert your EKS Anywhere cluster Kubernetes manifest into Terraform HCL:
- Install tfk8s
- Convert the manifest into Terraform HCL:
kubectl get cluster ${MY_EKSA_CLUSTER} -o yaml | tfk8s --strip -o ${MY_EKSA_CLUSTER}.tf
-
Configure the Terraform cluster resource definition generated in the previous step
- Set metadata.generation
as a computed field.
Add the following to your cluster resource configuration:
computed_fields = ["metadata.generation"]
field_manager {
  force_conflicts = true
}
- Add the
namespace
to the metadata
of the cluster
- Remove the
generation
field from the metadata
of the cluster
- Your Terraform cluster resource should look similar to this:
computed_fields = ["metadata.generation"]
field_manager {
  force_conflicts = true
}
manifest = {
  "apiVersion" = "anywhere.eks.amazonaws.com/v1alpha1"
  "kind" = "Cluster"
  "metadata" = {
    "name" = "MyClusterName"
    "namespace" = "default"
  }
-
Import your EKS Anywhere cluster into terraform state:
terraform init
terraform import kubernetes_manifest.cluster_${MY_EKSA_CLUSTER} "apiVersion=anywhere.eks.amazonaws.com/v1alpha1,kind=Cluster,namespace=default,name=${MY_EKSA_CLUSTER}"
After you import
your cluster, you will need to run terraform apply
one time to ensure that the manifest
field of your cluster resource is in-sync.
This will not change the state of your cluster, but is a required step after the initial import.
The manifest
field stores the contents of the associated kubernetes manifest, while the object
field stores the actual state of the resource.
-
Modify Your Cluster using Terraform
- Modify the
count
value of one of your workerNodeGroupConfigurations
, or another mutable field, in the configuration stored in ${MY_EKSA_CLUSTER}.tf
file.
- Check the expected diff between your cluster state and the modified local state via
terraform plan
You should see in the output that the worker node group configuration count field (or whichever field you chose to modify) will be modified by Terraform.
-
Now, actually change your cluster to match the local configuration:
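terraform apply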
-
Observe the change to your cluster. For example:
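You can watch the machines on the management cluster and confirm that the worker node count converges to the new value (one possible check):
kubectl get machines -n eksa-system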
Manage separate workload clusters using Terraform
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters via Terraform.
NOTE: If you choose to manage your cluster using Terraform, do not use kubectl
to edit your cluster objects as this can lead to field manager conflicts.
Prerequisites
- An existing EKS Anywhere cluster imported into Terraform state.
If your existing cluster is not yet imported, see this guide.
- A cluster configuration file for your new workload cluster.
-
Create the new cluster configuration Terraform file.
tfk8s -f new-workload-cluster.yaml -o new-workload-cluster.tf
NOTE: Specify the namespace
for all EKS Anywhere objects when you are using Terraform to manage your clusters (even for the default
namespace, use "namespace" = "default"
on those objects).
Ensure workload cluster object names are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster.
-
Ensure that this new Terraform workload cluster configuration exists in the same directory as the management cluster Terraform files.
my/terraform/config/path
├── management-cluster.tf
├── new-workload-cluster.tf
├── provider.tf
├── ...
└──
-
Verify the changes to be applied:
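terraform plan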
-
If the plan looks as expected, apply those changes to create the new cluster resources:
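terraform apply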
-
You can list the workload clusters managed by the management cluster.
export KUBECONFIG=${PWD}/${MGMT_CLUSTER_NAME}/${MGMT_CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl get clusters
-
The kubeconfig for your new cluster is stored as a secret on the management cluster.
You can get the workload cluster credentials and run the test application on your new workload cluster as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
- To upgrade a workload cluster using Terraform, modify the desired fields in the Terraform resource file and apply the changes.
- To delete a workload cluster using Terraform, you will need the name of the Terraform cluster resource.
This can be found on the first line of your cluster resource definition.
terraform destroy --target kubernetes_manifest.cluster_w01
Appendix
Terraform K8s Provider https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
tfk8s https://github.com/jrhouston/tfk8s
4.2.14 - Delete cluster
How to delete an EKS Anywhere cluster
NOTE: EKS Anywhere Bare Metal clusters do not yet support separate workload and management clusters. Use the instructions for Deleting a management cluster to delete a Bare Metal cluster.
Deleting a workload cluster
Follow these steps to delete your EKS Anywhere cluster that is managed by a separate management cluster.
To delete a workload cluster, you will need:
- name of your workload cluster
- kubeconfig of your workload cluster
- kubeconfig of your management cluster
Run the following commands to delete the cluster:
-
Set up CLUSTER_NAME
and KUBECONFIG
environment variables:
export CLUSTER_NAME=eksa-w01-cluster
export KUBECONFIG=${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
export MANAGEMENT_KUBECONFIG=<path-to-management-cluster-kubeconfig>
-
Run the delete command:
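For example (assuming your version of eksctl anywhere supports passing the management cluster kubeconfig with the --kubeconfig flag; check eksctl anywhere delete cluster --help):
eksctl anywhere delete cluster ${CLUSTER_NAME} --kubeconfig ${MANAGEMENT_KUBECONFIG}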
Deleting a management cluster
Follow these steps to delete your management cluster.
To delete a cluster you will need:
- cluster name or cluster configuration
- kubeconfig of your cluster
Run the following commands to delete the cluster:
-
Set up CLUSTER_NAME
and KUBECONFIG
environment variables:
export CLUSTER_NAME=mgmt
export KUBECONFIG=${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Run the delete command:
-
If you are running the delete command from the directory which has the cluster folder with ${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.yaml
:
eksctl anywhere delete cluster ${CLUSTER_NAME}
-
Otherwise, use this command to manually specify the clusterconfig file path:
export CONFIG_FILE=<path-to-config-file>
eksctl anywhere delete cluster -f ${CONFIG_FILE}
Example output:
Performing provider setup and validations
Creating management cluster
Installing cluster-api providers on management cluster
Moving cluster management from workload cluster
Deleting workload cluster
Clean up Git Repo
GitOps field not specified, clean up git repo skipped
🎉 Cluster deleted!
For vSphere, CloudStack, and Nutanix, this will delete all of the VMs that were created in your provider.
For Bare Metal, the servers will be powered off if BMC information has been provided.
If your workloads created external resources such as external DNS entries or load balancer endpoints you may need to delete those resources manually.
4.3 - Cluster troubleshooting
Troubleshooting your EKS Anywhere Cluster
4.3.1 - Troubleshooting
Troubleshooting EKS Anywhere clusters
This guide covers EKS Anywhere troubleshooting. It is divided into the following sections:
You may want to search this document for a fragment of the error you are seeing.
General troubleshooting
Increase eksctl anywhere output
If you’re having trouble running eksctl anywhere
you may get more verbose output with the -v 6
option. The highest level of verbosity is -v 9
and the default level of logging is level equivalent to -v 0
.
Cannot run docker commands
The EKS Anywhere binary requires access to run docker commands without using sudo
.
If you’re using a Linux distribution, you will need to be running Docker 20.x.x and your user needs to be part of the docker group.
To add your user to the docker group, run:
sudo usermod -a -G docker $USER
Now you need to log out and back in to get the new group permissions.
Minimum requirements for docker version have not been met
Error: failed to validate docker: minimum requirements for docker version have not been met. Install Docker version 20.x.x or above
Ensure you are running Docker 20.x.x for example:
% docker --version
Docker version 20.10.6, build 370c289
Minimum requirements for docker version have not been met on Mac OS
Error: EKS Anywhere does not support Docker desktop versions between 4.3.0 and 4.4.1 on macOS
Error: EKS Anywhere requires Docker desktop to be configured to use CGroups v1. Please set `deprecatedCgroupv1:true` in your `~/Library/Group\\ Containers/group.com.docker/settings.json` file
Ensure you are running Docker Desktop 4.4.2 or newer and have set "deprecatedCgroupv1": true
in your settings.json file
% defaults read /Applications/Docker.app/Contents/Info.plist CFBundleShortVersionString
4.42
% docker info --format '{{json .CgroupVersion}}'
"1"
cgroups v2 is not supported in Ubuntu 21.10+ and 22.04
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
It is recommended to use Ubuntu 20.04 for the Administrative machine because the EKS Anywhere bootstrap cluster requires cgroups v1, and starting with Ubuntu 21.10, cgroups v2 is enabled by default. You can use Ubuntu 21.10 and 22.04 for the Administrative machine if you configure Ubuntu to use cgroups v1 instead.
To verify cgroups version
% docker info | grep Cgroup
Cgroup Driver: cgroupfs
Cgroup Version: 2
To use cgroups v1, edit /etc/default/grub (with sudo) to set GRUB_CMDLINE_LINUX to "systemd.unified_cgroup_hierarchy=0", update grub, and reboot.
% sudo <editor> /etc/default/grub
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
sudo update-grub
sudo reboot now
Then verify you are using cgroups v1.
% docker info | grep Cgroup
Cgroup Driver: cgroupfs
Cgroup Version: 1
ECR access denied
Error: failed to create cluster: unable to initialize executables: failed to setup eks-a dependencies: Error response from daemon: pull access denied for public.ecr.aws/***/cli-tools, repository does not exist or may require 'docker login': denied: Your authorization token has expired. Reauthenticate and try again.
All images needed for EKS Anywhere are public and do not need authentication. Old cached credentials could trigger this error.
Remove cached credentials by running:
docker logout public.ecr.aws
error unmarshaling JSON: while decoding JSON: json: unknown field “spec”
Error: loading config file "cluster.yaml": error unmarshaling JSON: while decoding JSON: json: unknown field "spec"
Use eksctl anywhere create cluster -f cluster.yaml
instead of eksctl create cluster -f cluster.yaml
to create an EKS Anywhere cluster.
Error: old cluster config file exists under my-cluster, please use a different clusterName to proceed
Error: old cluster config file exists under my-cluster, please use a different clusterName to proceed
The my-cluster
directory already exists in the current directory.
Either use a different cluster name or move the directory.
failed to create cluster: node(s) already exist for a cluster with the name
Performing provider setup and validations
Creating new bootstrap cluster
Error create bootstrapcluster {"error": "error creating bootstrap cluster: error executing create cluster: ERROR: failed to create cluster: node(s) already exist for a cluster with the name \"cluster-name\"\n, try rerunning with --force-cleanup to force delete previously created bootstrap cluster"}
Failed to create cluster {"error": "error creating bootstrap cluster: error executing create cluster: ERROR: failed to create cluster: node(s) already exist for a cluster with the name \"cluster-name\"\n, try rerunning with --force-cleanup to force delete previously created bootstrap cluster"}ry rerunning with --force-cleanup to force delete previously created bootstrap cluster"}
A bootstrap cluster already exists with the same name. If you are sure the cluster is not being used, you may use the --force-cleanup
option to eksctl anywhere
to delete the cluster or you may delete the cluster with kind delete cluster --name <cluster-name>
. If you do not have kind
installed, you may use docker stop
to stop the docker container running the KinD cluster.
Memory or disk resource problem
There are various disk and memory issues that can cause problems.
Make sure docker is configured with enough memory.
Make sure the system wide Docker memory configuration provides enough RAM for the bootstrap cluster.
Make sure you do not have unneeded KinD clusters running kind get clusters
.
You may want to delete unneeded clusters with kind delete cluster --name <cluster-name>
.
If you do not have kind installed, you may install it from https://kind.sigs.k8s.io/ or use docker ps
to see the KinD clusters and docker stop
to stop the cluster.
Make sure you do not have any unneeded Docker containers running with docker ps
.
Terminate any unneeded Docker containers.
Make sure Docker isn’t out of disk resources.
If you don’t have any other docker containers running you may want to run docker system prune
to clean up disk space.
You may want to restart Docker.
To restart Docker on Ubuntu sudo systemctl restart docker
.
Waiting for cert-manager to be available… Error: timed out waiting for the condition
Failed to create cluster {"error": "error initializing capi resources in cluster: error executing init: Fetching providers\nInstalling cert-manager Version=\"v1.1.0\"\nWaiting for cert-manager to be available...\nError: timed out waiting for the condition\n"}
This is likely a Memory or disk resource problem
.
You can also try using techniques from Generic cluster unavailable
.
NTP Time sync issues
level=error msg=k8sError error="github.com/cilium/cilium/pkg/k8s/watchers/endpoint_slice.go:91: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: Unauthorized" subsys=k8s
You might notice authorization errors if the timestamps on your EKS Anywhere control plane nodes and worker nodes are out-of-sync. Please ensure that all the nodes are configured with the same healthy NTP servers to avoid out-of-sync issues.
Error running bootstrapper cmd: error joining as worker: Error waiting for worker join files: Kubeadm join kubelet-start killed after timeout
You might also notice that nodes fail to join if the time on your admin machine differs from the time on your nodes. Make sure the server time matches between the two as well.
The connection to the server localhost:8080 was refused
Performing provider setup and validations
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Error initializing capi in bootstrap cluster {"error": "error waiting for capi-kubeadm-control-plane-controller-manager in namespace capi-kubeadm-control-plane-system: error executing wait: The connection to the server localhost:8080 was refused - did you specify the right host or port?\n"}
Failed to create cluster {"error": "error waiting for capi-kubeadm-control-plane-controller-manager in namespace capi-kubeadm-control-plane-system: error executing wait: The connection to the server localhost:8080 was refused - did you specify the right host or port?\n"}
This is likely a Memory or disk resource problem
.
Generic cluster unavailable
Troubleshoot more by inspecting bootstrap cluster or workload cluster (depending on the stage of failure) using kubectl commands.
kubectl get pods -A --kubeconfig=<kubeconfig>
kubectl get nodes -A --kubeconfig=<kubeconfig>
kubectl get logs <podname> -n <namespace> --kubeconfig=<kubeconfig>
....
Capv troubleshooting guide: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/troubleshooting.md#debugging-issues
Bootstrap cluster fails to come up
If your bootstrap cluster has problems you may get detailed logs by looking at the files created under the ${CLUSTER_NAME}/logs
folder. The capv-controller-manager log file will surface issues with vSphere-specific configuration, while the capi-controller-manager log file might surface other generic issues with the cluster configuration passed in.
You may also access the logs from your bootstrap cluster directly as below:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
kubectl logs -f -n capv-system -l control-plane="controller-manager" -c manager
It also might be useful to start a shell session on the Docker container running the bootstrap cluster. Run docker ps
to find the KinD container, and then docker exec -it <container-id> bash
to enter it.
Bootstrap cluster fails to come up
Error: creating bootstrap cluster: executing create cluster: ERROR: failed to create cluster: node(s) already exist for a cluster with the name \"cluster-name\"
, try rerunning with --force-cleanup to force delete previously created bootstrap cluster
Cluster creation fails because a cluster of the same name already exists.
Try running the eksctl anywhere create cluster
again, adding the --force-cleanup
option.
If that doesn’t work, you can manually delete the old cluster:
kind delete cluster --name cluster-name
Cluster upgrade fails with management cluster on bootstrap cluster
If a cluster upgrade of a management (or self managed) cluster fails or is halted in the middle, you may be left in a
state where the management resources (CAPI) are still on the KinD bootstrap cluster on the Admin machine. Right now, you will have to
manually restore the management resources from the backup directory to the management cluster.
Find the backup directory:
From EKS-A v0.14.4 onwards, the CAPI records are persisted to a ${CLUSTER_NAME}/cluster-state-backup-<date> directory
on the eksctl CLI host, where the ${CLUSTER_NAME} is the name of the management cluster.
Restore the management cluster:
MGMTKUBE=${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
DIRECTORY=${CLUSTER_NAME}/cluster-state-backup-<date>
docker run -i --network host -w $(pwd) -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/$(pwd) --entrypoint clusterctl ${CONTAINER} restore \
--directory ${DIRECTORY} \
--namespace eksa-system \
--kubeconfig ${MGMTKUBE}
Delete the kind cluster and run the upgrade command again.
Creating new workload cluster hangs or fails
Cluster creation appears to be hung waiting for the Control Plane to be ready.
If the CLI is hung on this message for over 30 mins, something likely failed during the OS provisioning:
Waiting for Control Plane to be ready
Or if cluster creation times out on this step and fails with the following messages:
Support bundle archive created {"path": "support-bundle-2022-06-28T00_41_24.tar.gz"}
Analyzing support bundle {"bundle": "CLUSTER_NAME/generated/bootstrap-cluster-2022-06-28T00:41:24Z-bundle.yaml", "archive": "support-bundle-2022-06-28T00_41_24.tar.gz"}
Analysis output generated {"path": "CLUSTER_NAME/generated/bootstrap-cluster-2022-06-28T00:43:40Z-analysis.yaml"}
collecting workload cluster diagnostics
Error: waiting for workload cluster control plane to be ready: executing wait: error: timed out waiting for the condition on clusters/CLUSTER_NAME
In either of those cases, the following steps can help you determine the problem:
-
Export the kind cluster’s kubeconfig file:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
-
If you have provided BMC information:
-
Check all of the machines that the EKS Anywhere CLI has picked up from the pool of hardware in the CSV file:
kubectl get machines.bmc -A
-
Check if those nodes are powered on. If any of those nodes are not powered on after a while then it could be possible that BMC credentials are invalid. You can verify it by checking the logs:
kubectl get tasks.bmc -n eksa-system
kubectl get tasks.bmc <bmc-name> -n eksa-system -o yaml
Validate BMC credentials are correct if a connection error is observed on the tasks.bmc
resource. Note that “IPMI over LAN” or “Redfish” must be enabled in the BMC configuration for the tasks.bmc
resource to communicate successfully.
-
If the machine is powered on but you see linuxkit is not running, then Tinkerbell failed to serve the node via iPXE. In this case, you would want to:
-
Check the Boots service logs from the machine where you are running the CLI to see if it received and/or responded to the request:
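For example, if the Boots service runs as a container on the Administrative machine, you can inspect its logs with Docker (the container name is an assumption; use docker ps to find the actual one):
docker ps | grep boots
docker logs <boots-container-id>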
-
Confirm no other DHCP service responded to the request and check for any errors in the BMC console. Other DHCP servers on the network can result in race conditions and should be avoided by configuring the other server to block all MAC addresses and exclude all IP addresses used by EKS Anywhere.
-
If you see Welcome to LinuxKit
, press Enter in the BMC console to access the LinuxKit terminal. Run the following commands to check if the tink-worker container is running.
docker ps -a
docker logs <container-id>
-
If the machine has already started provisioning the OS and it’s in irrecoverable state, get the workflow of the provisioning/provisioned machine using:
kubectl get workflows -n eksa-system
kubectl describe workflow/<workflow-name> -n eksa-system
Check all the actions and their status to determine whether all actions have been executed successfully. If the stream-image action has failed, it’s likely due to a timeout or a network-related issue. You can also provide your own image_url
by specifying osImageURL
under datacenter spec.
vSphere troubleshooting
EKSA_VSPHERE_USERNAME is not set or is empty
❌ Validation failed {"validation": "vsphere Provider setup is valid", "error": "failed setup and validations: EKSA_VSPHERE_USERNAME is not set or is empty", "remediation": ""}
Two environment variables need to be set and exported in your environment to create clusters successfully.
Be sure to use single quotes around your user name and password to avoid shell manipulation of these values.
export EKSA_VSPHERE_USERNAME='<vSphere-username>'
export EKSA_VSPHERE_PASSWORD='<vSphere-password>'
vSphere authentication failed
❌ Validation failed {"validation": "vsphere Provider setup is valid", "error": "error validating vCenter setup: vSphere authentication failed: govc: ServerFaultCode: Cannot complete login due to an incorrect user name or password.\n", "remediation": ""}
Error: failed to create cluster: validations failed
Two environment variables need to be set and exported in your environment to create clusters successfully.
Be sure to use single quotes around your user name and password to avoid shell manipulation of these values.
export EKSA_VSPHERE_USERNAME='<vSphere-username>'
export EKSA_VSPHERE_PASSWORD='<vSphere-password>'
Issues detected with selected template
Issues detected with selected template. Details: - -1:-1:VALUE_ILLEGAL: No supported hardware versions among [vmx-15]; supported: [vmx-04, vmx-07, vmx-08, vmx-09, vmx-10, vmx-11, vmx-12, vmx-13].
Our upstream dependency on CAPV makes it a requirement that you use vSphere 6.7 update 3 or newer.
Make sure your ESXi hosts are also up to date.
Waiting for external etcd to be ready
2022-01-19T15:56:57.734Z V3 Waiting for external etcd to be ready {"cluster": "mgmt"}
Debug this problem using techniques from Generic cluster unavailable
.
Timed out waiting for the condition on deployments/capv-controller-manager
Failed to create cluster {"error": "error initializing capi in bootstrap cluster: error waiting for capv-controller-manager in namespace capv-system: error executing wait: error: timed out waiting for the condition on deployments/capv-controller-manager\n"}
Debug this problem using techniques from Generic cluster unavailable
.
Timed out waiting for the condition on clusters/
Failed to create cluster {"error": "error waiting for workload cluster control plane to be ready: error executing wait: error: timed out waiting for the condition on clusters/test-cluster\n"}
This can be an issue with the number of control plane and worker node replicas defined in your cluster yaml file.
Try to start off with a smaller number (3 or 5 is recommended for control plane) in order to bring up the cluster.
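For example, the replica counts are set in the cluster yaml similar to this (a sketch):
spec:
  controlPlaneConfiguration:
    count: 3
  workerNodeGroupConfigurations:
  - count: 3
    name: md-0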
This error can also occur because your vCenter server is using self-signed certificates and you have insecure
set to true in the generated cluster yaml.
To check if this is the case, run the commands below:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
kubectl get machines
If all the machines are in Provisioning
phase, this is most likely the issue.
To resolve the issue, set insecure
to false
and thumbprint
to the TLS thumbprint of your vCenter server in the cluster yaml and try again.
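For example, in the VSphereDatacenterConfig section of the cluster yaml (a sketch; replace the value with your vCenter server's actual TLS thumbprint):
insecure: false
thumbprint: "AA:BB:CC:DD:..."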
"msg"="discovered IP address"
This log message can also appear, with the address value of the control plane, in either the ${CLUSTER_NAME}/logs/capv-controller-manager.log file
or the capv-controller-manager pod log, which can be extracted with the following command:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
kubectl logs -f -n capv-system -l control-plane="controller-manager" -c manager
Make sure you are choosing an ip in your network range that does not conflict with other VMs.
https://anywhere.eks.amazonaws.com/docs/reference/clusterspec/vsphere/#controlplaneconfigurationendpointhost-required
Generic cluster unavailable
The first thing to look at is: were virtual machines created on your target provider? In the case of vSphere, you should see some VMs in your folder and they should be up. Check the console and if you see:
[FAILED] Failed to start Wait for Network to be Configured.
Make sure your DHCP server is up and working.
Workload VM is created on vSphere but can not power on
A similar issue is when the VM does power on but does not show any logs on the console and does not have any IPs assigned.
This issue can occur if the resourcePool
that the VM uses does not have enough CPU or memory resources to run a VM.
To resolve this issue, increase the CPU and/or memory reservations or limits for the resourcePool.
Workload VMs start but Kubernetes not working properly
If the workload VMs start, but Kubernetes does not start or is not working properly, you may want to log onto the VMs and check the logs there.
If Kubernetes is at least partially working, you may use kubectl
to get the IPs of the nodes:
kubectl get nodes -o=custom-columns="NAME:.metadata.name,IP:.status.addresses[2].address"
If Kubernetes is not working at all, you can get the IPs of the VMs from vCenter or using govc
.
When you get the external IP you can ssh into the nodes using the private ssh key associated with the public ssh key you provided in your cluster configuration:
ssh -i <ssh-private-key> <ssh-username>@<external-IP>
create command stuck on Creating new workload cluster
There can be a few reasons why the create command is stuck on Creating new workload cluster
for over 30 minutes.
First, check the vSphere UI to see if any workload VM are created.
If any VMs are created, check to see if they have any IPv4 IPs assigned to them.
If there are no IPv4 IPs assigned to them, this is most likely because you don’t have a DHCP server configured for the network
configured in the cluster config yaml.
Ensure that you have DHCP running and run the create command again.
If there are any IPv4 IPs assigned, check if one of the VMs have the controlPlane IP specified in Cluster.spec.controlPlaneConfiguration.endpoint.host
in the clusterconfig yaml.
If this IP is not present on any control plane VM, make sure the network
has access to the following endpoints:
- vCenter endpoint (must be accessible to EKS Anywhere clusters)
- public.ecr.aws
- anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
- distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
- d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
- api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
- d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
- api.github.com (only if GitOps is enabled)
If the IPv4 IPs are assigned to the VM and you have the workload kubeconfig under <cluster-name>/<cluster-name>-eks-a-cluster.kubeconfig
, you can use it to check vsphere-cloud-controller-manager
logs.
kubectl logs -n kube-system vsphere-cloud-controller-manager-<xxxxx> --kubeconfig <cluster-name>/<cluster-name>-eks-a-cluster.kubeconfig
If you see this message in the logs, it means your cluster nodes do not have access to vSphere, which is required for the cluster to get to a ready state.
Failed to connect to <vSphere-FQDN>: connection refused
In this case, you need to enable inbound traffic from your cluster nodes on your vCenter’s management network.
If VMs are created, but they do not get a network connection and DHCP is not configured for your vSphere deployment, you may need to create your own DHCP server
.
If no VMs are created, check the capi-controller-manager
, capv-controller-manager
and capi-kubeadm-control-plane-controller-manager
logs using the commands mentioned in Generic cluster unavailable
section.
Cluster Deletion Fails
If cluster deletion fails, you may need to manually delete the VMs associated with the cluster.
The VMs should be named with the cluster name.
You can power off and delete from disk using the vCenter web user interface.
You may also use govc
:
govc find -type VirtualMachine --name '<cluster-name>*'
This will give you a list of virtual machines that should be associated with your cluster.
For each of the VMs you want to delete run:
VM_NAME=vm-to-destroy
govc vm.power -off -force $VM_NAME
govc object.destroy $VM_NAME
Troubleshooting GitOps integration
Failed cluster creation can sometimes leave behind cluster configuration files committed to your GitHub.com repository.
Make sure to delete these configuration files before you re-try eksctl anywhere create cluster
.
If these configuration files are not deleted, GitOps installation will fail but cluster creation will continue.
They’ll generally be located under the directory
clusters/$CLUSTER_NAME
if you used the default path in your flux
gitops
config.
Delete the entire directory named $CLUSTER_NAME.
Failed cluster creation can sometimes leave behind a completely empty GitHub.com repository.
This can cause the GitOps installation to fail if you re-try the creation of a cluster which uses this repository.
If cluster creation failure leaves behind an empty github repository, please manually delete the created GitHub.com repository before attempting cluster creation again.
Changes not syncing to cluster
Please remember that the only fields currently supported for GitOps are:
Cluster:
- Cluster.workerNodeGroupConfigurations.count
- Cluster.workerNodeGroupConfigurations.machineGroupRef.name
Worker Nodes:
- VsphereMachineConfig.diskGiB
- VsphereMachineConfig.numCPUs
- VsphereMachineConfig.memoryMiB
- VsphereMachineConfig.template
- VsphereMachineConfig.datastore
- VsphereMachineConfig.folder
- VsphereMachineConfig.resourcePool
If you’ve changed these fields and they’re not syncing to the cluster as you’d expect,
check out the logs of the pod in the source-controller
deployment in the flux-system
namespaces.
If flux
is having a problem connecting to your GitHub repository the problem will be logged here.
$ kubectl get pods -n flux-system
NAME READY STATUS RESTARTS AGE
helm-controller-7d644b8547-k8wfs 1/1 Running 0 4h15m
kustomize-controller-7cf5875f54-hs2bt 1/1 Running 0 4h15m
notification-controller-776f7d68f4-v22kp 1/1 Running 0 4h15m
source-controller-7c4555748d-7c7zb 1/1 Running 0 4h15m
$ kubectl logs source-controller-7c4555748d-7c7zb -n flux-system
A well-behaved flux pod will simply log the ongoing reconciliation process, like so:
{"level":"info","ts":"2021-07-01T19:58:51.076Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 902.725344ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-07-01T19:59:52.012Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 935.016754ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-07-01T20:00:52.982Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 970.03174ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
If there are issues connecting to GitHub, you’ll instead see exceptions in the source-controller
log stream.
For example, if the deploy key used by flux
has been deleted, you’d see something like this:
{"level":"error","ts":"2021-07-01T20:04:56.335Z","logger":"controller.gitrepository","msg":"Reconciler error","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system","error":"unable to clone 'ssh://git@github.com/youruser/gitops-vsphere-test', error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"}
Other ways to troubleshoot GitOps integration
If you’re still having problems after deleting any empty EKS Anywhere-created GitHub repositories and looking at the source-controller
logs, you can look for additional issues by checking the deployments in the flux-system
and eksa-system
namespaces and ensuring they’re running and their log streams are free from exceptions.
$ kubectl get deployments -n flux-system
NAME READY UP-TO-DATE AVAILABLE AGE
helm-controller 1/1 1 1 4h13m
kustomize-controller 1/1 1 1 4h13m
notification-controller 1/1 1 1 4h13m
source-controller 1/1 1 1 4h13m
$ kubectl get deployments -n eksa-system
NAME READY UP-TO-DATE AVAILABLE AGE
eksa-controller-manager 1/1 1 1 4h13m
Snow troubleshooting
Device outage
These are some conditions that can cause a device outage:
- Intentional outage (a planned power outage or an outage when moving devices, for example).
- Unintentional outage (a subset of devices or all devices are rebooted, or experience network disconnections from the LAN, which makes the devices offline or isolated from the cluster).
NOTE: If all Snowball Edge devices are moved to a different place and connected to a different local network, make sure you use the same subnet, netmask, and gateway for your network configuration. After moving, devices and all node instances need to maintain the original IP addresses. Then, follow the recover cluster procedure to get your cluster up and running again. Otherwise, it might be impossible to resume the cluster.
To recover a cluster
If there is a subset of devices or all devices experience an outage, see Downloading and Installing the Snowball Edge client
to get the Snowball Edge client and then follow these steps:
-
Reboot and unlock all affected devices manually.
// use reboot-device command to reboot device, this may take several minutes
$ path-to-snowballEdge_CLIENT reboot-device --endpoint https://snowball-ip --manifest-file path-to-manifest-file --unlock-code unlock-code
// use describe-device command to check the status of device
$ path-to-snowballEdge_CLIENT describe-device --endpoint https://snowball-ip --manifest-file path-to-manifest-file --unlock-code unlock-code
// when the State in the output of describe-device is LOCKED, run unlock-device
$ path-to-snowballEdge_CLIENT unlock-device --endpoint https://snowball-ip --manifest-file path-to-manifest-file --unlock-code unlock-code
// use describe-device command to check the status of device until device is unlocked
$ path-to-snowballEdge_CLIENT describe-device --endpoint https://snowball-ip --manifest-file path-to-manifest-file --unlock-code unlock-code
-
Get all instance IDs that were part of the cluster by looking up the impacted device IP in the PROVIDERID column.
$ kubectl get machines -A --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
NAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
eksa-system machine-name-1 cluster-name node-name-1 aws-snow:///192.168.1.39/s.i-8319d8c75d54a32cc Running 82s v1.24.9-eks-1-24-7
eksa-system machine-name-2 cluster-name node-name-2 aws-snow:///192.168.1.39/s.i-8d7d3679a1713e403 Running 82s v1.24.9-eks-1-24-7
eksa-system machine-name-3 cluster-name node-name-3 aws-snow:///192.168.1.231/s.i-8201c356fb369c37f Running 81s v1.24.9-eks-1-24-7
eksa-system machine-name-4 cluster-name node-name-4 aws-snow:///192.168.1.39/s.i-88597731b5a4a9044 Running 81s v1.24.9-eks-1-24-7
eksa-system machine-name-5 cluster-name node-name-5 aws-snow:///192.168.1.77/s.i-822f0f46267ad4c6e Running 81s v1.24.9-eks-1-24-7
-
Start all instances on the impacted devices as soon as possible.
$ aws ec2 start-instances --instance-id instance-id-1 instance-id-2 ... --endpoint http://snowball-ip:6078 --profile profile-name
-
Check the balance status of the current cluster after the cluster is ready again.
$ kubectl get machines -A --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
Check if you have unstacked etcd machines.
-
If you have unstacked etcd machines, check the provision of unstacked etcd machines. You can find the device IP in the PROVIDERID
column.
- If there is more than one unstacked etcd machine provisioned on the same device and there are devices with no unstacked etcd machine, you need to rebalance the unstacked etcd nodes. Follow the rebalance nodes
procedure to rebalance your unstacked etcd nodes in order to recover high availability.
- If you have your etcd nodes evenly distributed with 1 device having at most 1 etcd node, you are done with the recovery.
-
If you don’t have unstacked etcd machines, check the provision of control plane machines. You can find the device IP in the PROVIDERID
column.
- If there is more than one control plane machine provisioned on the same device and there are devices with no control plane machine, you need to rebalance the control plane nodes. Follow the rebalance nodes
procedure to rebalance your control plane nodes in order to recover high availability.
- If you have your control plane nodes evenly distributed with 1 device having at most 1 control plane node, you are done with the recovery.
How to rebalance nodes
-
Confirm the machines you want to delete and get their node name from the NODENAME column.
You can determine which machines need to be deleted by referring to the AGE column. The newly generated machines have a short AGE. Delete those new etcd/control plane machine nodes that are not the only etcd/control plane machine nodes on their devices.
-
Cordon each node so no further workloads are scheduled to run on it.
$ kubectl cordon node-name --ignore-daemonsets --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
Drain machine nodes of all current workloads.
$ kubectl drain node-name --ignore-daemonsets --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
Delete machine node.
$ kubectl delete node node-name --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
Repeat this process until etcd/control plane machine nodes are evenly provisioned.
Device replacement
These are some conditions that can require device replacement:
- When a subset of devices are determined to be broken and you want to join a new device into current cluster.
- When a subset of devices are offline and come back with a new device IP.
To upgrade a cluster with new devices:
-
Add new certificates to the certificate file and new credentials to the credential file.
-
Change the device list in your cluster yaml configuration file and use the eksctl anywhere upgrade cluster
command.
$ eksctl anywhere upgrade cluster -f eks-a-cluster.yaml
Node outage
Unintentional instance outage
When an instance is in an exception status (for example, terminated or stopped for some reason), it will be discovered automatically by Amazon EKS Anywhere, and a new replacement instance node will be created after 5 minutes. The new node will be provisioned to a device based on the even provisioning strategy. In this case, the new node will be provisioned to a device with the fewest machines of the same type. Sometimes, more than one device will have the same number of machines of this type, so we cannot guarantee it will be provisioned on the original device.
Intentional node replacement
If you want to replace an unhealthy node which didn’t get detected by Amazon EKS Anywhere automatically, you can follow these steps.
NOTE: Do not delete all worker machine nodes or control plane nodes or etcd nodes at the same time. Make sure you delete machine nodes one by one.
-
Cordon nodes so no further workloads are scheduled to run on it.
$ kubectl cordon node-name --ignore-daemonsets --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
Drain machine nodes of all current workloads.
$ kubectl drain node-name --ignore-daemonsets --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
Delete machine nodes.
$ kubectl delete node node-name --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
-
New nodes will be provisioned automatically. You can check the provision result with the get machines command.
$ kubectl get machines -A --kubeconfig=cluster-name/cluster-name-eks-a-cluster.kubeconfig
Cluster Deletion Fails
If your Amazon EKS Anywhere cluster creation failed and the eksctl anywhere delete cluster -f eksa-cluster.yaml
command cannot be run successfully, manually delete a few resources before trying the command again. Run the following commands from the computer on which you set up the AWS configuration and have the Snowball Edge Client installed
. If you are using multiple Snowball Edge devices, run these commands on each.
// get the list of instance ids that are created for Amazon EKS Anywhere cluster,
// that can be identified by cluster name in the tag of the output
$ aws ec2 describe-instances --endpoint http://snowball-ip:8008 --profile profile-name
// the next two commands are for deleting DNI, this needs to be done before deleting instance
$ PATH_TO_Snowball_Edge_CLIENT/bin/snowballEdge describe-direct-network-interfaces --endpoint https://snowball-ip --manifest-file path-to-manifest-file --unlock-code unlock-code
// DNI arn can be found in the output of last command, which is associated with the specific instance id you get from describe-instances
$ PATH_TO_Snowball_Edge_CLIENT/bin/snowballEdge delete-direct-network-interface --direct-network-interface-arn DNI-ARN --endpoint https://snowball-ip --manifest-file path-to-manifest-file --unlock-code unlock-code
// delete instance
$ aws ec2 terminate-instances --instance-id instance-id-1,instance-id-2 --endpoint http://snowball-ip:8008 --profile profile-name
Generate a log file from the Snowball Edge device
You can also generate a log file from the Snowball Edge device for AWS Support. See AWS Snowball Edge Logs
in this guide.
4.3.2 - Generating a Support Bundle
Using the Support Bundle with your EKS Anywhere Cluster
This guide covers the use of the EKS Anywhere Support Bundle for troubleshooting and support.
This allows you to gather cluster information, save it to your administrative machine, and perform analysis of the results.
EKS Anywhere leverages troubleshoot.sh
to collect
and analyze
kubernetes cluster logs,
cluster resource information, and other relevant debugging information.
EKS Anywhere has two Support Bundle commands:
eksctl anywhere generate support-bundle
will execute a support bundle on your cluster,
collecting relevant information, archiving it locally, and performing analysis of the results.
eksctl anywhere generate support-bundle-config
will generate a Support Bundle config yaml file for you to customize.
Do not add personally identifiable information (PII) or other confidential or sensitive information to your support bundle.
If you provide the support bundle to get support from AWS, it will be accessible to other AWS services, including AWS Support.
Collecting a Support Bundle and running analyzers
eksctl anywhere generate support-bundle
generate support-bundle
will allow you to quickly collect relevant logs and cluster resources and save them locally in an archive file.
This archive can then be used to aid in further troubleshooting and debugging.
If you provide a cluster configuration file containing your cluster spec using the -f
flag,
generate support-bundle
will customize the auto-generated support bundle collectors and analyzers
to match the state of your cluster.
If you provide a support bundle configuration file using the --bundle-config
flag,
for example one generated with generate support-bundle-config
,
generate support-bundle
will use the provided configuration when collecting information from your cluster and analyzing the results.
Flags:
--bundle-config string Bundle Config file to use when generating support bundle
-f, --filename string Filename that contains EKS-A cluster configuration
-h, --help help for support-bundle
--since string Collect pod logs in the latest duration like 5s, 2m, or 3h.
--since-time string Collect pod logs after a specific datetime(RFC3339) like 2021-06-28T15:04:05Z
-w, --w-config string Kubeconfig file to use when creating support bundle for a workload cluster
Collecting and analyzing a bundle
You only need to run a single command to generate a support bundle, collect information and analyze the output:
eksctl anywhere generate support-bundle -f myCluster.yaml
This command will collect the information from your cluster
and run an analysis of the collected information.
The collected information will be saved to your local disk in an archive which can be used for
debugging and obtaining additional in-depth support.
The analysis will be printed to your console.
Collect phase:
$ ./bin/eksctl anywhere generate support-bundle -f ./testcluster100.yaml
Collecting support bundle cluster-info
Collecting support bundle cluster-resources
Collecting support bundle secret
Collecting support bundle logs
Analyzing support bundle
Analysis phase:
Analyze Results
------------
Check PASS
Title: gitopsconfigs.anywhere.eks.amazonaws.com
Message: gitopsconfigs.anywhere.eks.amazonaws.com is present on the cluster
------------
Check PASS
Title: vspheredatacenterconfigs.anywhere.eks.amazonaws.com
Message: vspheredatacenterconfigs.anywhere.eks.amazonaws.com is present on the cluster
------------
Check PASS
Title: vspheremachineconfigs.anywhere.eks.amazonaws.com
Message: vspheremachineconfigs.anywhere.eks.amazonaws.com is present on the cluster
------------
Check PASS
Title: capv-controller-manager Status
Message: capv-controller-manager is running.
------------
Check PASS
Title: capv-controller-manager Status
Message: capv-controller-manager is running.
------------
Check PASS
Title: coredns Status
Message: coredns is running.
------------
Check PASS
Title: cert-manager-webhook Status
Message: cert-manager-webhook is running.
------------
Check PASS
Title: cert-manager-cainjector Status
Message: cert-manager-cainjector is running.
------------
Check PASS
Title: cert-manager Status
Message: cert-manager is running.
------------
Check PASS
Title: capi-kubeadm-control-plane-controller-manager Status
Message: capi-kubeadm-control-plane-controller-manager is running.
------------
Check PASS
Title: capi-kubeadm-bootstrap-controller-manager Status
Message: capi-kubeadm-bootstrap-controller-manager is running.
------------
Check PASS
Title: capi-controller-manager Status
Message: capi-controller-manager is running.
------------
Check PASS
Title: capi-controller-manager Status
Message: capi-controller-manager is running.
------------
Check PASS
Title: capi-kubeadm-control-plane-controller-manager Status
Message: capi-kubeadm-control-plane-controller-manager is running.
------------
Check PASS
Title: capi-kubeadm-control-plane-controller-manager Status
Message: capi-kubeadm-control-plane-controller-manager is running.
------------
Check PASS
Title: capi-kubeadm-bootstrap-controller-manager Status
Message: capi-kubeadm-bootstrap-controller-manager is running.
------------
Check PASS
Title: clusters.anywhere.eks.amazonaws.com
Message: clusters.anywhere.eks.amazonaws.com is present on the cluster
------------
Check PASS
Title: bundles.anywhere.eks.amazonaws.com
Message: bundles.anywhere.eks.amazonaws.com is present on the cluster
------------
Archive phase:
a support bundle has been created in the current directory: {"path": "support-bundle-2021-09-02T19_29_41.tar.gz"}
Generating a custom Support Bundle configuration for your EKS Anywhere Cluster
EKS Anywhere will automatically generate a support bundle based on your cluster configuration;
however, if you’d like to customize the support bundle to collect specific information,
you can generate your own support bundle configuration yaml for EKS Anywhere to run on your cluster.
eksctl anywhere generate support-bundle-config
will generate a default support bundle configuration and print it as yaml.
eksctl anywhere generate support-bundle-config -f myCluster.yaml
will generate a support bundle configuration customized to your cluster and print it as yaml.
To run a customized support bundle configuration yaml file on your cluster,
save this output to a file and run the command eksctl anywhere generate support-bundle
using the flag --bundle-config
.
eksctl anywhere generate support-bundle-config
Flags:
-f, --filename string Filename that contains EKS-A cluster configuration
-h, --help help for support-bundle-config
4.4 - EKS Anywhere curated package management
Common tasks for managing curated packages.
The main goal of EKS Anywhere curated packages is to make it easy to install, configure and maintain operational components in an EKS Anywhere cluster. EKS Anywhere curated packages provide secure and tested operational components that run on EKS Anywhere clusters. Please check out EKS Anywhere curated packages concepts
and EKS Anywhere curated packages configurations
for more details.
For proper curated package support, make sure the cluster kubernetes
version is v1.21
or above and eksctl anywhere
version is v0.11.0
or above (can be checked with the eksctl anywhere version
command). Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
.
Setup authentication to use curated-packages
When you have been notified that your account has been given access to curated packages, create an IAM user in your account with a policy that only allows ECR read access to the Curated Packages repository, similar to this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ECRRead",
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeImageScanFindings",
        "ecr:GetDownloadUrlForLayer",
        "ecr:DescribeRegistry",
        "ecr:DescribePullThroughCacheRules",
        "ecr:DescribeImageReplicationStatus",
        "ecr:ListTagsForResource",
        "ecr:ListImages",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:*:783794618700:repository/*"
    },
    {
      "Sid": "ECRLogin",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    }
  ]
}
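For example, with the AWS CLI you could create such a user and attach the policy like this (the user name and policy file name below are placeholders):
aws iam create-user --user-name eksa-curated-packages
aws iam put-user-policy --user-name eksa-curated-packages --policy-name CuratedPackagesECRRead --policy-document file://curated-packages-policy.json
aws iam create-access-key --user-name eksa-curated-packages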
Note Curated Packages now supports pulling images from the following regions. Set the corresponding EKSA_AWS_REGION
prior to cluster creation to choose which region to pull from; if not set, it will default to pulling from us-west-2
.
"us-east-2",
"us-east-1",
"us-west-1",
"us-west-2",
"ap-northeast-3",
"ap-northeast-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-northeast-1",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"eu-north-1",
"sa-east-1"
Create credentials for this user and set and export the following environment variables:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
Make sure you are authenticated with the AWS CLI
export AWS_ACCESS_KEY_ID="your*access*id"
export AWS_SECRET_ACCESS_KEY="your*secret*key"
aws sts get-caller-identity
Login to docker
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 783794618700.dkr.ecr.us-west-2.amazonaws.com
Verify you can pull an image
docker pull 783794618700.dkr.ecr.us-west-2.amazonaws.com/emissary-ingress/emissary:v3.0.0-9ded128b4606165b41aca52271abe7fa44fa7109
If the image downloads successfully, it worked!
Discover curated packages
You can get a list of the available packages from the command line:
export CLUSTER_NAME=nameofyourcluster
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
eksctl anywhere list packages --kube-version 1.23
Example command output:
Package Version(s)
------- ----------
hello-eks-anywhere 0.1.2-a6847010915747a9fc8a412b233a2b1ee608ae76
adot 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b
cert-manager 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f
cluster-autoscaler 9.21.0-1.23-5516c0368ff74d14c328d61fe374da9787ecf437
harbor 2.5.1-ee7e5a6898b6c35668a1c5789aa0d654fad6c913
metallb 0.13.7-758df43f8c5a3c2ac693365d06e7b0feba87efd5
metallb-crds 0.13.7-758df43f8c5a3c2ac693365d06e7b0feba87efd5
metrics-server 0.6.1-eks-1-23-6-c94ed410f56421659f554f13b4af7a877da72bc1
emissary 3.3.0-cbf71de34d8bb5a72083f497d599da63e8b3837b
emissary-crds 3.3.0-cbf71de34d8bb5a72083f497d599da63e8b3837b
prometheus 2.41.0-b53c8be243a6cc3ac2553de24ab9f726d9b851ca
Generate a curated-packages config
The example shows how to install the harbor
package from the curated package list
.
export CLUSTER_NAME=nameofyourcluster
eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.23 > packages.yaml
Available curated packages and troubleshooting guides are listed below.
Install package controller after installation
If you created a cluster without the package controller or if the package controller was not properly configured, you may need to take the following steps to enable it.
Make sure you are authenticated with the AWS CLI. Use the credentials you set up for packages. These credentials should have limited capabilities
:
export AWS_ACCESS_KEY_ID="your*access*id"
export AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
Verify your credentials are working:
aws sts get-caller-identity
Login to docker
aws ecr get-login-password | docker login --username AWS --password-stdin 783794618700.dkr.ecr.us-west-2.amazonaws.com
Verify you can pull an image
docker pull 783794618700.dkr.ecr.us-west-2.amazonaws.com/emissary-ingress/emissary:v3.0.0-9ded128b4606165b41aca52271abe7fa44fa7109
If the image downloads successfully, it worked!
If you do not have the package controller installed (it is installed by default), install it now:
eksctl anywhere install packagecontroller -f cluster.yaml
If you had the package controller disabled, you may need to modify your cluster.yaml
to enable it.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: billy
spec:
packages:
disable: false
You may need to create or update your credentials, which you can do with commands like the following. Set the environment variables to the proper values before running the commands.
kubectl delete secret -n eksa-packages aws-secret
kubectl create secret -n eksa-packages generic aws-secret \
--from-literal=AWS_ACCESS_KEY_ID=${EKSA_AWS_ACCESS_KEY_ID} \
--from-literal=AWS_SECRET_ACCESS_KEY=${EKSA_AWS_SECRET_ACCESS_KEY} \
--from-literal=REGION=${EKSA_AWS_REGION}
If you recreate secrets, you can manually re-enable the cronjob and run the job to update the image pull secrets:
kubectl get cronjob -n eksa-packages cron-ecr-renew -o yaml | yq e '.spec.suspend |= false' - | kubectl apply -f -
kubectl create job -n eksa-packages --from=cronjob/cron-ecr-renew run-it-now
4.4.1 - Package Prerequisites
Prerequisites for using curated packages
Prerequisites
Before installing any curated packages for EKS Anywhere, do the following:
-
Check that the cluster Kubernetes
version is v1.21
or above. For example, you could run kubectl get cluster -o yaml <cluster-name> | grep -i kubernetesVersion
-
Check that the version of eksctl anywhere
is v0.11.0
or above with the eksctl anywhere version
command.
-
It is recommended that the package controller be installed only on the management cluster.
-
Check the existence of package controller:
kubectl get pods -n eksa-packages | grep "eks-anywhere-packages"
If the returned result is empty, you need to install the package controller.
-
Install the package controller if it is not installed:
Install the package controller
Note: This command is temporarily provided to ease integration with curated packages. It will be deprecated in the future.
eksctl anywhere install packagecontroller -f $CLUSTER_NAME.yaml
4.4.2 - Curated Packages Troubleshooting
Troubleshooting specific to curated packages
General debugging
The major component of Curated Packages is the package controller. If the container is not running or not running correctly, packages will not be installed. Generally it should be debugged like any other Kubernetes application. The first step is to check that the pod is running.
kubectl get pods -n eksa-packages
You should see at least two pods: the package controller pod in the Running state, and one or more refresher jobs in the Completed state.
NAME READY STATUS RESTARTS AGE
eks-anywhere-packages-69d7bb9dd9-9d47l 1/1 Running 0 14s
eksa-auth-refresher-w82nm 0/1 Completed 0 10s
The describe command might help to get more detail on why there is a problem:
kubectl describe pods -n eksa-packages
Logs of the controller can be seen in a normal Kubernetes fashion:
kubectl logs deploy/eks-anywhere-packages -n eksa-packages controller
To get the general state of the package controller, run the following command:
kubectl get packages,packagebundles,packagebundlecontrollers -A
You should see an active packagebundlecontroller and an available bundle. The packagebundlecontroller should indicate the active bundle. It may take a few minutes to download and activate the latest bundle. In this example, the packages are installed, and one packagebundlecontroller shows that an upgrade to a newer bundle is available.
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-sammy package.packages.eks.amazonaws.com/my-hello hello-eks-anywhere 42h installed 0.1.1-bc7dc6bb874632972cd92a2bca429a846f7aa785 0.1.1-bc7dc6bb874632972cd92a2bca429a846f7aa785 (latest)
eksa-packages-tlhowe package.packages.eks.amazonaws.com/my-hello hello-eks-anywhere 44h installed 0.1.1-083e68edbbc62ca0228a5669e89e4d3da99ff73b 0.1.1-083e68edbbc62ca0228a5669e89e4d3da99ff73b (latest)
NAMESPACE NAME STATE
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-21-83 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-70 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-81 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-82 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-83 available
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/sammy v1-23-70 upgrade available v1-23-83 available
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/tlhowe v1-21-83 active active
Package controller not running
If you do not see a pod or various resources for the package controller, it may be that it is not installed.
No resources found in eksa-packages namespace.
Most likely the cluster was created with an older version of the EKS Anywhere CLI. Curated packages became generally available with v0.11.0. Use the eksctl anywhere version command to verify that you are running a recent enough release, and use the eksctl anywhere install packagecontroller command to install the package controller on a cluster created with an older release.
Error: this command is currently not supported
Error: this command is currently not supported
Curated packages became generally available with version v0.11.0
. Use the version command to make sure you are running version v0.11.0
or later:
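eksctl anywhere version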
Error: cert-manager is not present in the cluster
Error: curated packages cannot be installed as cert-manager is not present in the cluster
This is most likely caused by attempting to install curated packages on a workload cluster with an eksctl anywhere version older than v0.12.0. To use packages on workload clusters, upgrade eksctl anywhere to version v0.12.0 or later. The package manager will remotely manage packages on the workload cluster from the management cluster.
Package registry authentication
Error: ImagePullBackOff on Package
If a package fails to start with ImagePullBackOff:
NAME READY STATUS RESTARTS AGE
generated-harbor-jobservice-564d6fdc87 0/1 ImagePullBackOff 0 2d23h
If a package pod cannot pull images, you may not have your AWS credentials set up correctly. Verify that your credentials are working.
Make sure you are authenticated with the AWS CLI. Use the credentials you set up for packages. These credentials should have limited capabilities
:
export AWS_ACCESS_KEY_ID="your*access*id"
export AWS_SECRET_ACCESS_KEY="your*secret*key"
aws sts get-caller-identity
Log in to Docker:
aws ecr get-login-password | docker login --username AWS --password-stdin 783794618700.dkr.ecr.us-west-2.amazonaws.com
Verify you can pull an image
docker pull 783794618700.dkr.ecr.us-west-2.amazonaws.com/emissary-ingress/emissary:v3.0.0-9ded128b4606165b41aca52271abe7fa44fa7109
If the image downloads successfully, it worked!
You may need to create or update your credentials, which you can do with commands like the following. Set the environment variables to the proper values before running the commands.
kubectl delete secret -n eksa-packages aws-secret
kubectl create secret -n eksa-packages generic aws-secret --from-literal=AWS_ACCESS_KEY_ID=${EKSA_AWS_ACCESS_KEY_ID} --from-literal=AWS_SECRET_ACCESS_KEY=${EKSA_AWS_SECRET_ACCESS_KEY} --from-literal=REGION=${EKSA_AWS_REGION}
If you recreate secrets, you can manually re-enable the cronjob and run the job to update the image pull secrets:
kubectl get cronjob -n eksa-packages cron-ecr-renew -o yaml | yq e '.spec.suspend |= false' - | kubectl apply -f -
kubectl create job -n eksa-packages --from=cronjob/cron-ecr-renew run-it-now
Warning: not able to trigger cron job
secret/aws-secret created
Warning: not able to trigger cron job, please be aware this will prevent the package controller from installing curated packages.
This is most likely caused by attempting to install curated packages in a cluster that is running Kubernetes v1.20 or below. Note that curated packages only support Kubernetes v1.21 and above.
Package on workload clusters
Starting at eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster. When interacting with package resources for a workload cluster using the following commands, make sure the kubeconfig points to the management cluster that was used to create the workload cluster.
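For example, if the management cluster's kubeconfig is in the default location written by eksctl anywhere under the directory where the cluster was created, you could point to it like this (a sketch; substitute your management cluster name):
export KUBECONFIG=${PWD}/<management-cluster-name>/<management-cluster-name>-eks-a-cluster.kubeconfig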
Package manager is not managing packages on workload cluster
If the package manager is not managing packages on a workload cluster, make sure the management cluster has the package resources for that workload cluster:
kubectl get packages,packagebundles,packagebundlecontrollers -A
You should see a PackageBundleController named after the workload cluster, with its status set. There should be a namespace for the workload cluster as well:
kubectl get ns | grep eksa-packages
Create a PackageBundleController for the workload cluster if it does not exist (where billy is the cluster name):
cat <<! | kubectl apply -f -
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: PackageBundleController
metadata:
name: billy
namespace: eksa-packages
!
Workload cluster is disconnected
Cluster is disconnected:
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/billy disconnected initializing target client: getting kubeconfig for cluster "billy": Secret "billy-kubeconfig" not found
In the example above, the secret does not exist, which may mean that the management cluster is not managing the workload cluster, the PackageBundleController name is wrong, or the secret was deleted.
This may also happen if the management cluster cannot communicate with the workload cluster or the workload cluster was deleted, although the detail would be different.
Error: the server doesn’t have a resource type “packages”
All packages are remotely managed by the management cluster, and packages, packagebundles, and packagebundlecontrollers resources are all deployed on the management cluster. Please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster while interacting with package-related resources.
This error indicates that a package command was run against a cluster that is not managed by the management cluster. To get a list of the clusters managed by the management cluster, run the following command:
eksctl anywhere get packagebundlecontroller
NAME ACTIVEBUNDLE STATE DETAIL
billy v1-21-87 active
There will be one packagebundlecontroller for each cluster that is being managed. The only valid cluster name in the above example is billy
.
4.4.3 - Cert-Manager
Install/update/upgrade/uninstall Cert-Manager
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install on workload cluster
NOTE: The cert-manager package can only be installed on a workload cluster
-
Generate the package configuration
eksctl anywhere generate package cert-manager --cluster <cluster-name> > cert-manager.yaml
-
Add the desired configuration to cert-manager.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file configuring a cert-manager package to run on a workload cluster.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: my-cert-manager
namespace: eksa-packages-<cluster-name>
spec:
packageName: cert-manager
targetNamespace: <namespace-to-install-component>
-
Install Cert-Manager
eksctl anywhere create packages -f cert-manager.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-cert-manager cert-manager 15s installed 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f (latest)
Update
To update package configuration, update cert-manager.yaml file, and run the following command:
eksctl anywhere apply package -f cert-manager.yaml
Upgrade
Cert-Manager will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall cert-manager, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> cert-manager
4.4.4 - Cluster Autoscaler
Install/upgrade/uninstall Cluster Autoscaler
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Choose a Deployment Approach
Each Cluster Autoscaler instance can target one cluster for autoscaling.
There are three ways to deploy a Cluster Autoscaler instance:
- Cluster Autoscaler deployed in the management cluster to autoscale the management cluster itself
- Cluster Autoscaler deployed in the management cluster to autoscale a remote workload cluster
- Cluster Autoscaler deployed in the workload cluster to autoscale the workload cluster itself
To read more about the tradeoffs of these different approaches, see here
.
Install Cluster Autoscaler in management cluster
-
Ensure you have configured at least one WorkerNodeGroup in your cluster to support autoscaling as outlined here
-
Generate the package configuration
eksctl anywhere generate package cluster-autoscaler --cluster <cluster-name> > cluster-autoscaler.yaml
-
Add the desired configuration to cluster-autoscaler.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file configuring a cluster autoscaler package to run in the management cluster.
Note: Here, the <cluster-name>
value represents the name of the management or workload cluster you would like to autoscale.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: cluster-autoscaler
namespace: eksa-packages-<cluster-name>
spec:
packageName: cluster-autoscaler
targetNamespace: <namespace-to-install-component>
config: |-
cloudProvider: "clusterapi"
autoDiscovery:
clusterName: "<cluster-name>"
-
Install Cluster Autoscaler
eksctl anywhere create packages -f cluster-autoscaler.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-mgmt-v-vmc cluster-autoscaler cluster-autoscaler 18h installed 9.21.0-1.21-147e2a701f6ab625452fe311d5c94a167270f365 9.21.0-1.21-147e2a701f6ab625452fe311d5c94a167270f365 (latest)
Update
To update package configuration, update cluster-autoscaler.yaml file, and run the following command:
eksctl anywhere apply package -f cluster-autoscaler.yaml
Upgrade
Cluster Autoscaler will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Cluster Autoscaler, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> cluster-autoscaler
Install Cluster Autoscaler in workload cluster
A few extra steps are required to install cluster autoscaler in a workload cluster instead of the management cluster.
First, retrieve the management cluster’s kubeconfig secret:
kubectl -n eksa-system get secrets <management-cluster-name>-kubeconfig -o yaml > mgmt-secret.yaml
Update the secret’s namespace to the namespace in the workload cluster that you would like to deploy the cluster autoscaler to.
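One way to do this is with yq (used elsewhere in this guide); this is a sketch that assumes <workload-namespace> is the namespace you chose, and you could equally edit mgmt-secret.yaml by hand:
yq e '.metadata.namespace = "<workload-namespace>"' -i mgmt-secret.yaml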
Then, apply the secret to the workload cluster.
kubectl --kubeconfig /path/to/workload/kubeconfig apply -f mgmt-secret.yaml
Now apply this package configuration to the management cluster:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: workload-cluster-autoscaler
namespace: eksa-packages-<workload-cluster-name>
spec:
packageName: cluster-autoscaler
targetNamespace: <workload-cluster-namespace-to-install-components>
config: |-
cloudProvider: "clusterapi"
autoDiscovery:
clusterName: "<workload-cluster-name>"
clusterAPIMode: "incluster-kubeconfig"
clusterAPICloudConfigPath: "/etc/kubernetes/value"
extraVolumeSecrets:
cluster-autoscaler-cloud-config:
mountPath: "/etc/kubernetes"
name: "<management-cluster-name>-kubeconfig"
4.4.5 - Metrics Server
Install/upgrade/uninstall Metrics Server
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install
-
Generate the package configuration
eksctl anywhere generate package metrics-server --cluster <cluster-name> > metrics-server.yaml
-
Add the desired configuration to metrics-server.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file configuring a metrics-server package to run on a management cluster.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: metrics-server
namespace: eksa-packages-<cluster-name>
spec:
packageName: metrics-server
targetNamespace: <namespace-to-install-component>
config: |-
args:
- "--kubelet-insecure-tls"
-
Install Metrics Server
eksctl anywhere create packages -f metrics-server.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
metrics-server metrics-server 8h installed 0.6.1-eks-1-23-6-b4c2524fabb3dd4c5f9b9070a418d740d3e1a8a2 0.6.1-eks-1-23-6-b4c2524fabb3dd4c5f9b9070a418d740d3e1a8a2 (latest)
Update
To update package configuration, update metrics-server.yaml file, and run the following command:
eksctl anywhere apply package -f metrics-server.yaml
Upgrade
Metrics Server will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Metrics Server, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> metrics-server
4.4.6 - AWS Distro for OpenTelemetry (ADOT)
Install/upgrade/uninstall ADOT
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install
-
Generate the package configuration
eksctl anywhere generate package adot --cluster <cluster-name> > adot.yaml
-
Add the desired configuration to adot.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file with daemonSet
mode and default configuration:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: my-adot
namespace: eksa-packages-<cluster-name>
spec:
packageName: adot
targetNamespace: observability
config: |
mode: daemonset
Example package file with deployment
mode and customized collector components to scrape
ADOT collector’s own metrics:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: my-adot
namespace: eksa-packages-<cluster-name>
spec:
packageName: adot
targetNamespace: observability
config: |
mode: deployment
replicaCount: 2
config:
receivers:
prometheus:
config:
scrape_configs:
- job_name: opentelemetry-collector
scrape_interval: 10s
static_configs:
- targets:
- ${MY_POD_IP}:8888
processors:
batch: {}
memory_limiter: null
exporters:
logging:
loglevel: debug
prometheusremotewrite:
endpoint: "<prometheus-remote-write-end-point>"
extensions:
health_check: {}
memory_ballast: {}
service:
pipelines:
metrics:
receivers: [prometheus]
processors: [batch]
exporters: [logging, prometheusremotewrite]
telemetry:
metrics:
address: 0.0.0.0:8888
-
Create the namespace
(If overriding targetNamespace
, change observability
to the value of targetNamespace
)
kubectl create namespace observability
-
Install adot
eksctl anywhere create packages -f adot.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-adot adot 19h installed 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b (latest)
Update
To update package configuration, update adot.yaml file, and run the following command:
eksctl anywhere apply package -f adot.yaml
Upgrade
ADOT will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall ADOT, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> my-adot
4.4.7 - Prometheus
Install/upgrade/uninstall Prometheus
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install
-
Generate the package configuration
eksctl anywhere generate package prometheus --cluster <cluster-name> > prometheus.yaml
-
Add the desired configuration to prometheus.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file with default configuration, which enables prometheus-server and node-exporter:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: generated-prometheus
namespace: eksa-packages-<cluster-name>
spec:
packageName: prometheus
Example package file with prometheus-server (or node-exporter) disabled:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: generated-prometheus
namespace: eksa-packages-<cluster-name>
spec:
packageName: prometheus
config: |
# disable prometheus-server
server:
enabled: false
# or disable node-exporter
# nodeExporter:
# enabled: false
Example package file with prometheus-server deployed as a statefulSet with replicaCount 2, and set scrape config to collect Prometheus-server’s own metrics only:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: generated-prometheus
namespace: eksa-packages-<cluster-name>
spec:
packageName: prometheus
targetNamespace: observability
config: |
server:
replicaCount: 2
statefulSet:
enabled: true
serverFiles:
prometheus.yml:
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
-
Create the namespace
(If overriding targetNamespace
, change observability
to the value of targetNamespace
)
kubectl create namespace observability
-
Install prometheus
eksctl anywhere create packages -f prometheus.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-<cluster-name> generated-prometheus prometheus 17m installed 2.41.0-b53c8be243a6cc3ac2553de24ab9f726d9b851ca 2.41.0-b53c8be243a6cc3ac2553de24ab9f726d9b851ca (latest)
Update
To update package configuration, update prometheus.yaml file, and run the following command:
eksctl anywhere apply package -f prometheus.yaml
Upgrade
Prometheus will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Prometheus, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> generated-prometheus
4.4.8 - Emissary Ingress
Install/upgrade/uninstall Emissary Ingress
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install
-
Generate the package configuration
eksctl anywhere generate package emissary --cluster <cluster-name> > emissary.yaml
-
Add the desired configuration to emissary.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file with standard configuration.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: emissary
namespace: eksa-packages-<cluster-name>
spec:
packageName: emissary
-
Install Emissary
eksctl anywhere create packages -f emissary.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages emissary emissary 2m57s installed 3.0.0-a507e09c2a92c83d65737835f6bac03b9b341467 3.0.0-a507e09c2a92c83d65737835f6bac03b9b341467 (latest)
Update
To update package configuration, update emissary.yaml file, and run the following command:
eksctl anywhere apply package -f emissary.yaml
Upgrade
Emissary will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Emissary, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> emissary
4.4.9 - Harbor
Install/upgrade/uninstall Harbor
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install
-
Generate the package configuration
eksctl anywhere generate package harbor --cluster <cluster-name> > harbor.yaml
-
Add the desired configuration to harbor.yaml
Please see complete configuration options
for all configuration options and their default values.
Important
- All configuration options are listed in dot notation (e.g., expose.tls.enabled) in the doc, but they have to be transformed into hierarchical structures when specified in the config section of the YAML spec.
- Harbor web portal is exposed through
NodePort
by default, and its default port number is 30003
with TLS enabled and 30002
with TLS disabled.
- TLS is enabled by default for connections to Harbor web portal, and a secret resource named
harbor-tls-secret
is required for that purpose. It can be provisioned through cert-manager or manually with the following command using self-signed certificate:
kubectl create secret tls harbor-tls-secret --cert=[path to certificate file] --key=[path to key file] -n eksa-packages
secretKey
has to be set as a string of 16 characters for encryption.
TLS example with auto certificate generation
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: my-harbor
namespace: eksa-packages-<cluster-name>
spec:
packageName: harbor
config: |-
secretKey: "use-a-secret-key"
externalURL: https://harbor.eksa.demo:30003
expose:
tls:
certSource: auto
auto:
commonName: "harbor.eksa.demo"
Non-TLS example
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: my-harbor
namespace: eksa-packages-<cluster-name>
spec:
packageName: harbor
config: |-
secretKey: "use-a-secret-key"
externalURL: http://harbor.eksa.demo:30002
expose:
tls:
enabled: false
-
Install Harbor
eksctl anywhere create packages -f harbor.yaml
-
Check Harbor
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-harbor harbor 5m34s installed v2.5.0 v2.5.0 (latest)
Harbor web portal is accessible at whatever externalURL
is set to. See complete configuration options
for all default values.

Update
To update package configuration, update harbor.yaml file, and run the following command:
eksctl anywhere apply package -f harbor.yaml
Upgrade
Note
- New versions of software packages will be automatically downloaded but not automatically installed. You can always manually run
eksctl
to check and install updates.
-
Verify a new bundle is available
eksctl anywhere get packagebundle
Example command output
NAME VERSION STATE
v1.21-1000 1.21 active (upgrade available)
v1.21-1001 1.21 inactive
-
Upgrade Harbor
eksctl anywhere upgrade packages --bundle-version v1.21-1001
-
Check Harbor
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-harbor Harbor 14m installed v2.5.1 v2.5.1 (latest)
Uninstall
-
Uninstall Harbor
Important
- By default, PVCs created for jobservice and registry are not removed during a package delete operation, which can be changed by leaving
persistence.resourcePolicy
empty.
eksctl anywhere delete package --cluster <cluster-name> my-harbor
4.4.10 - MetalLB
Install/upgrade/uninstall MetalLB
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide
in the event of a problem.
Important
- Starting at
eksctl anywhere
version v0.12.0
, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the
kubeconfig
is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace
command below, which should be run with kubeconfig
pointing to the workload cluster.
Install
-
Generate the package configuration
eksctl anywhere generate package metallb --cluster <cluster-name> > metallb.yaml
-
Add the desired configuration to metallb.yaml
Please see complete configuration options
for all configuration options and their default values.
Example package file with bgp configuration:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: mylb
namespace: eksa-packages-<cluster-name>
spec:
packageName: metallb
config: |
IPAddressPools:
- name: default
addresses:
- 10.220.0.93/32
- 10.220.0.97-10.220.0.120
BGPAdvertisements:
- ipAddressPools:
- default
BGPPeers:
- peerAddress: 10.220.0.2
peerASN: 65000
myASN: 65002
Example package file with ARP configuration:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
name: mylb
namespace: eksa-packages
spec:
packageName: metallb
config: |
IPAddressPools:
- name: default
addresses:
- 10.220.0.93/32
- 10.220.0.97-10.220.0.120
L2Advertisements:
- ipAddressPools:
- default
-
Create the namespace
(If overriding targetNamespace
, change metallb-system
to the value of targetNamespace
)
kubectl create namespace metallb-system
-
Install MetalLB
eksctl anywhere create packages -f metallb.yaml
-
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
mylb metallb 22h installed 0.13.5-ce5b5de19014202cebd4ab4c091830a3b6dfea06 0.13.5-ce5b5de19014202cebd4ab4c091830a3b6dfea06 (latest)
Update
To update package configuration, update metallb.yaml file, and run the following command:
eksctl anywhere apply package -f metallb.yaml
Upgrade
MetalLB will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall MetalLB, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> mylb
5 - Reference
Reference documents for EKS Anywhere configuration
5.1 - Config
Config reference for EKS Anywhere clusters
5.1.1 - Bare metal configuration
Full EKS Anywhere configuration reference for a Bare Metal cluster.
This is a generic template with detailed descriptions below for reference.
Additional optional configuration can also be included.
To generate your own cluster configuration, follow instructions from the Bare Metal Create production cluster
section and modify it using descriptions below.
For information on how to add cluster configuration settings to this file for advanced node configuration, see Advanced Bare Metal cluster configuration
.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 1
endpoint:
host: "<Control Plane Endpoint IP>"
machineGroupRef:
kind: TinkerbellMachineConfig
name: my-cluster-name-cp
datacenterRef:
kind: TinkerbellDatacenterConfig
name: my-cluster-name
kubernetesVersion: "1.25"
managementCluster:
name: my-cluster-name
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: TinkerbellMachineConfig
name: my-cluster-name
name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
name: my-cluster-name
spec:
tinkerbellIP: "<Tinkerbell IP>"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
name: my-cluster-name-cp
spec:
hardwareSelector: {}
osFamily: bottlerocket
templateRef: {}
users:
- name: ec2-user
sshAuthorizedKeys:
- ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
name: my-cluster-name
spec:
hardwareSelector: {}
osFamily: bottlerocket
templateRef:
kind: TinkerbellTemplateConfig
name: my-cluster-name
users:
- name: ec2-user
sshAuthorizedKeys:
- ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com
Cluster Fields
name (required)
Name of your cluster (my-cluster-name
in this example).
clusterNetwork (required)
Specific network configuration for your Kubernetes cluster.
clusterNetwork.cniConfig (required)
CNI plugin to be installed in the cluster. The only supported value at the moment is cilium
.
clusterNetwork.pods.cidrBlocks[0] (required)
Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted.
This CIDR block should not conflict with the clusterNetwork.services.cidrBlocks
and network subnet range selected for the machines.
clusterNetwork.services.cidrBlocks[0] (required)
Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted.
This CIDR block should not conflict with the clusterNetwork.pods.cidrBlocks
and network subnet range selected for the machines.
clusterNetwork.dns.resolvConf.path (optional)
Path to the file with a custom DNS resolver configuration.
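For example, a minimal sketch that points cluster DNS at a custom resolver file (the path is illustrative):
clusterNetwork:
  dns:
    resolvConf:
      path: /etc/my-resolv.conf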
controlPlaneConfiguration (required)
Specific control plane configuration for your Kubernetes cluster.
controlPlaneConfiguration.count (required)
Number of control plane nodes.
This number needs to be odd to maintain etcd quorum.
controlPlaneConfiguration.endpoint.host (required)
A unique IP you want to use for the control plane in your EKS Anywhere cluster. Choose an IP in your network
range that does not conflict with other machines.
NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of
the control plane nodes for kube-apiserver loadbalancing.
controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields
below.
controlPlaneConfiguration.taints
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint (For k8s versions prior to 1.24, node-role.kubernetes.io/master
. For k8s versions 1.24+, node-role.kubernetes.io/control-plane
). The default control plane components will tolerate the provided taints.
Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.
NOTE: The taints provided will be used instead of the default control plane taint.
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
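For example, a sketch of a control plane configuration that replaces the default taint (the key and value are illustrative); any pods you run on the control plane must tolerate it:
controlPlaneConfiguration:
  taints:
  - key: "key1"
    value: "value1"
    effect: "NoSchedule"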
controlPlaneConfiguration.labels
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
datacenterRef
Refers to the Kubernetes object with Tinkerbell-specific configuration. See TinkerbellDatacenterConfig Fields
below.
kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: 1.25
, 1.24
, 1.23
, 1.22
, 1.21
managementCluster
Identifies the name of the management cluster.
If this is a standalone cluster, or if it serves as the management cluster for other workload clusters, this will be the same as the cluster name.
Bare Metal EKS Anywhere clusters do not yet support the creation of separate workload clusters.
workerNodeGroupConfigurations
This takes in a list of node groups that you can define for your workers.
You can omit workerNodeGroupConfigurations when creating Bare Metal clusters. In this case, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server.
NOTE: Empty workerNodeGroupConfigurations is not supported when Kubernetes version <= 1.21.
workerNodeGroupConfigurations.count
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount
.
workerNodeGroupConfigurations.machineGroupRef (required)
Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields
below.
workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
workerNodeGroupConfigurations.autoscalingConfiguration.minCount
Minimum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
Maximum number of nodes for this node group’s autoscaling configuration.
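For example, a sketch of a worker node group that Cluster Autoscaler can scale between 1 and 5 nodes (the group and machine config names are illustrative):
workerNodeGroupConfigurations:
- name: md-0
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  autoscalingConfiguration:
    minCount: 1
    maxCount: 5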
workerNodeGroupConfigurations.taints
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have NoSchedule
or NoExecute
taints applied to it.
workerNodeGroupConfigurations.labels
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
TinkerbellDatacenterConfig Fields
tinkerbellIP
Required field to identify the IP address of the Tinkerbell service.
This IP address must be a unique IP in the network range that does not conflict with other IPs.
Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs.
When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.
osImageURL
Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. To use Ubuntu or Red Hat, see building baremetal node images to learn more about building and using those operating systems with an EKS Anywhere cluster. This field is also useful if you want to provide a customized operating system image or simply host the standard image locally.
hookImagesURLPath
Optional field to replace the HookOS image.
This field is useful if you want to provide a customized HookOS image or simply host the standard image locally.
See Artifacts
for details.
Example TinkerbellDatacenterConfig.spec
spec:
tinkerbellIP: "192.168.0.10" # Available, routable IP
osImageURL: "http://my-web-server/ubuntu-v1.23.7-eks-a-12-amd64.gz" # Full URL to the OS Image hosted locally
hookImagesURLPath: "http://my-web-server/hook" # Path to the hook images. This path must contain vmlinuz-x86_64 and initramfs-x86_64
This is the folder structure for my-web-server
:
my-web-server
├── hook
│ ├── initramfs-x86_64
│ └── vmlinuz-x86_64
└── ubuntu-v1.23.7-eks-a-12-amd64.gz
skipLoadBalancerDeployment
Optional field to skip deploying the default load balancer for Tinkerbell stack.
EKS Anywhere for Bare Metal uses kube-vip
load balancer by default to expose the Tinkerbell stack externally.
You can disable this feature by setting this field to true
.
NOTE: If you skip load balancer deployment, you will have to ensure that the Tinkerbell stack is available at tinkerbellIP
once the cluster creation is finished. One way to achieve this is by using the MetalLB
package.
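For example, a sketch of a TinkerbellDatacenterConfig spec that skips the default load balancer deployment:
spec:
  tinkerbellIP: "192.168.0.10"
  skipLoadBalancerDeployment: true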
TinkerbellMachineConfig Fields
In the example, there are TinkerbellMachineConfig
sections for control plane (my-cluster-name-cp
) and worker (my-cluster-name
) machine groups.
The following fields identify information needed to configure the nodes in each of those groups.
NOTE: Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.
hardwareSelector
Use fields under hardwareSelector
to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster.
Choose any label name you like.
For example, if you had added the label node=cp-machine
to the machines listed in your CSV file that you want to be control plane nodes, the following hardwareSelector
field would cause those machines to be added to the control plane:
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
name: my-cluster-name-cp
spec:
hardwareSelector:
node: "cp-machine"
osFamily (required)
Operating system on the machine. For example, bottlerocket
or ubuntu
.
templateRef (optional)
Identifies the template that defines the actions that will be applied to the TinkerbellMachineConfig.
See TinkerbellTemplateConfig fields below.
EKS Anywhere will generate default templates based on osFamily
during the create
command.
You can override this default template by providing your own template here.
users
The name of the user you want to configure to access your virtual machines through SSH.
The default is ec2-user
.
Currently, only one user is supported.
users[0].sshAuthorizedKeys (optional)
The SSH public keys you want to configure to access your machines through SSH (as described below). Only 1 is supported at this time.
users[0].sshAuthorizedKeys[0] (optional)
This is the SSH public key that will be placed in authorized_keys
on all EKS Anywhere cluster machines so you can SSH into
them. The user will be what is defined under name
above. For example:
ssh -i <private-key-file> <user>@<machine-IP>
If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.
When you generate a Bare Metal cluster configuration, the TinkerbellTemplateConfig
is kept internally and not shown in the generated configuration file.
TinkerbellTemplateConfig
settings define the actions done to install each node, such as get installation media, configure networking, add users, and otherwise configure the node.
Advanced users can override the default values set for TinkerbellTemplateConfig
.
They can also add their own Tinkerbell actions
to make personalized modifications to EKS Anywhere nodes.
The following shows two TinkerbellTemplateConfig
examples that you can add to your cluster configuration file to override the values that EKS Anywhere sets: one for Ubuntu and one for Bottlerocket.
Most actions used differ for different operating systems.
NOTE: For the stream-image
action, DEST_DISK
points to the device representing the entire hard disk (for example, /dev/sda
).
For UEFI-enabled images, such as Ubuntu, write actions use DEST_DISK
to point to the second partition (for example, /dev/sda2
), with the first being the EFI partition.
For the Bottlerocket image, which has 12 partitions, DEST_DISK
is partition 12 (for example, /dev/sda12
).
Device names will be different for different disk types.
Ubuntu TinkerbellTemplateConfig example
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
name: my-cluster-name
spec:
template:
global_timeout: 6000
id: ""
name: my-cluster-name
tasks:
- actions:
- environment:
COMPRESSED: "true"
DEST_DISK: /dev/sda
IMG_URL: https://my-file-server/ubuntu-v1.23.7-eks-a-12-amd64.gz
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: stream-image
timeout: 360
- environment:
DEST_DISK: /dev/sda2
DEST_PATH: /etc/netplan/config.yaml
STATIC_NETPLAN: true
DIRMODE: "0755"
FS_TYPE: ext4
GID: "0"
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: write-netplan
timeout: 90
- environment:
CONTENTS: |
datasource:
Ec2:
metadata_urls: [<admin-machine-ip>, <tinkerbell-ip-from-cluster-config>]
strict_id: false
manage_etc_hosts: localhost
warnings:
dsid_missing_source: off
DEST_DISK: /dev/sda2
DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0600"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: add-tink-cloud-init-config
timeout: 90
- environment:
CONTENTS: |
network:
config: disabled
DEST_DISK: /dev/sda2
DEST_PATH: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0600"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: disable-cloud-init-network-capabilities
timeout: 90
- environment:
CONTENTS: |
datasource: Ec2
DEST_DISK: /dev/sda2
DEST_PATH: /etc/cloud/ds-identify.cfg
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0600"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: add-tink-cloud-init-ds-config
timeout: 90
- environment:
BLOCK_DEVICE: /dev/sda2
FS_TYPE: ext4
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/kexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: kexec-image
pid: host
timeout: 90
name: my-cluster-name
volumes:
- /dev:/dev
- /dev/console:/dev/console
- /lib/firmware:/lib/firmware:ro
worker: '{{.device_1}}'
version: "0.1"
Bottlerocket TinkerbellTemplateConfig example
Pay special attention to the BOOTCONFIG_CONTENTS
environment section below if you wish to set up console redirection for the kernel and systemd.
If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here.
However, if you need all boot output to be shown via a server’s serial console for example, extra configuration should be provided inside BOOTCONFIG_CONTENTS
.
An empty kernel {}
key is provided below in the example; inside this key is where you will specify your console devices.
You may specify multiple comma-delimited console devices in quotes for the console key, for example: console = "tty0", "ttyS0,115200n8".
.
The order of the devices is significant; systemd will output to the last device specified.
The console key belongs inside the kernel key like so:
kernel {
console = "tty0", "ttyS0,115200n8"
}
The above example will send all kernel output to both consoles, and systemd output to ttyS0
.
Additional information about serial console setup can be found in the Linux kernel documentation
.
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
name: my-cluster-name
spec:
template:
global_timeout: 6000
id: ""
name: my-cluster-name
tasks:
- actions:
- environment:
COMPRESSED: "true"
DEST_DISK: /dev/sda
IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: stream-image
timeout: 360
- environment:
# An example console declaration that will send all kernel output to both consoles, and systemd output to ttyS0.
# kernel {
# console = "tty0", "ttyS0,115200n8"
# }
BOOTCONFIG_CONTENTS: |
kernel {}
DEST_DISK: /dev/sda12
DEST_PATH: /bootconfig.data
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: write-bootconfig
timeout: 90
- environment:
CONTENTS: |
# Version is required, it will change as we support
# additional settings
version = 1
# "eno1" is the interface name
# Users may turn on dhcp4 and dhcp6 via boolean
[eno1]
dhcp4 = true
# Define this interface as the "primary" interface
# for the system. This IP is what kubelet will use
# as the node IP. If none of the interfaces has
# "primary" set, we choose the first interface in
# the file
primary = true
DEST_DISK: /dev/sda12
DEST_PATH: /net.toml
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: write-netconfig
timeout: 90
- environment:
HEGEL_URL: http://<hegel-ip>:50061
DEST_DISK: /dev/sda12
DEST_PATH: /user-data.toml
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: write-user-data
timeout: 90
- name: "reboot"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
timeout: 90
volumes:
- /worker:/worker
name: my-cluster-name
volumes:
- /dev:/dev
- /dev/console:/dev/console
- /lib/firmware:/lib/firmware:ro
worker: '{{.device_1}}'
version: "0.1"
TinkerbellTemplateConfig Fields
The values in the TinkerbellTemplateConfig
fields are created from the contents of the CSV file used to generate a configuration.
The template contains actions that are performed on a Bare Metal machine when it first boots up to be provisioned.
For advanced users, you can add these fields to your cluster configuration file if you have special needs to do so.
While there are fields that apply to all provisioned operating systems, actions are specific to each operating system.
Examples below describe actions for Ubuntu and Bottlerocket operating systems.
template.global_timeout
Sets the timeout value for completing the configuration. Set to 6000 (100 minutes) by default.
template.id
Not set by default.
template.tasks
Within the TinkerbellTemplateConfig template
under tasks
is a set of actions.
The following descriptions cover the actions shown in the example templates for Ubuntu and Bottlerocket:
template.tasks.actions.name.stream-image (Ubuntu and Bottlerocket)
The stream-image
action streams the selected image to the machine you are provisioning. It identifies:
- environment.COMPRESSED: When set to
true
, Tinkerbell expects IMG_URL
to be a compressed image, which Tinkerbell will uncompress when it writes the contents to disk.
- environment.DEST_DISK: The hard disk on which the operating system is deployed. The default is the first SCSI disk (/dev/sda), but can be changed for other disk types.
- environment.IMG_URL: The operating system tarball (ubuntu or other) to stream to the machine you are configuring.
- image: Container image needed to perform the steps needed by this action.
- timeout: Sets the amount of time (in seconds) that Tinkerbell has to stream the image, uncompress it, and write it to disk before timing out. Consider increasing this limit from the default 600 to a higher limit if this action is timing out.
Ubuntu-specific actions
template.tasks.actions.name.write-netplan (Ubuntu)
The write-netplan
action writes Ubuntu network configuration information to the machine (see Netplan for details). It identifies:
- environment.CONTENTS.network.version: Identifies the network version.
- environment.CONTENTS.network.renderer: Defines the service to manage networking. By default, the
networkd
systemd service is used.
- environment.CONTENTS.network.ethernets: Network interface to external network (eno1, by default) and whether or not to use dhcp4 (true, by default).
- environment.DEST_DISK: Destination block storage device partition where the operating system is copied. By default, /dev/sda2 is used (sda1 is the EFI partition).
- environment.DEST_PATH: File where the networking configuration is written (/etc/netplan/config.yaml, by default).
- environment.DIRMODE: Linux directory permissions bits to use when creating directories (0755, by default)
- environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
- environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
- environment.MODE: The Linux permission bits to set on file (0644, by default).
- environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
- image: Container image used to perform the steps needed by this action.
- timeout: Time needed to complete the action, in seconds.
template.tasks.actions.add-tink-cloud-init-config (Ubuntu)
The add-tink-cloud-init-config
action configures cloud-init features to further configure the operating system. See cloud-init Documentation
for details. It identifies:
- environment.CONTENTS.datasource: Identifies Ec2 (Ec2.metadata_urls) as the data source and sets
Ec2.strict_id: false
to prevent cloud-init from producing warnings about this datasource.
- environment.CONTENTS.system_info: Creates the
tink
user and gives it administrative group privileges (wheel, adm) and passwordless sudo privileges, and sets the default shell (/bin/bash).
- environment.CONTENTS.manage_etc_hosts: Updates the system’s
/etc/hosts
file with the hostname. Set to localhost
by default.
- environment.CONTENTS.warnings: Sets dsid_missing_source to
off
.
- environment.DEST_DISK: Destination block storage device partition where the operating system is located (
/dev/sda2
, by default).
- environment.DEST_PATH: Location of the cloud-init configuration file on disk (
/etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
, by default)
- environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
- environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
- environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
- environment.MODE: The Linux permission bits to set on file (0600, by default).
- environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
- image: Container image used to perform the steps needed by this action.
- timeout: Time needed to complete the action, in seconds.
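For reference, the CONTENTS value for this action is ordinary cloud-init configuration. A minimal sketch matching the defaults described above might look like the following; the metadata URL is a placeholder for your own Tinkerbell (Hegel) endpoint, and the sudo rule shown is one common way to express passwordless sudo:
datasource:
  Ec2:
    metadata_urls: ["http://<tinkerbell-ip>:<hegel-port>"]  # placeholder metadata endpoint
    strict_id: false
system_info:
  default_user:
    name: tink
    groups: [wheel, adm]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]  # passwordless sudo
    shell: /bin/bash
manage_etc_hosts: localhost
warnings:
  dsid_missing_source: off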
template.tasks.actions.add-tink-cloud-init-ds-config (Ubuntu)
The add-tink-cloud-init-ds-config
action configures cloud-init data store features. This identifies the location of your metadata source once the machine is up and running. It identifies:
- environment.CONTENTS.datasource: Sets the datasource. Uses Ec2, by default.
- environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
- environment.DEST_PATH: Location of the data store identity configuration file on disk (/etc/cloud/ds-identify.cfg, by default)
- environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
- environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
- environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
- environment.MODE: The Linux permission bits to set on file (0600, by default).
- environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
- image: Container image used to perform the steps needed by this action.
- timeout: Time needed to complete the action, in seconds.
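For reference, with the Ec2 default described above, the CONTENTS written to /etc/cloud/ds-identify.cfg is simply the datasource selection:
datasource: Ec2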
template.tasks.actions.kexec-image (Ubuntu)
The kexec-image
action performs provisioning activities on the machine, then allows kexec to pivot the kernel to use the system installed on disk. This action identifies:
- environment.BLOCK_DEVICE: Disk partition on which the operating system is installed (/dev/sda2, by default)
- environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
- image: Container image used to perform the steps needed by this action.
- pid: Process ID. Set to host, by default.
- timeout: Time needed to complete the action, in seconds.
- volumes: Identifies mount points that need to be remounted to point to locations in the installed system.
There are known issues related to drivers with some hardware that may make it necessary to replace the kexec-image action with a full reboot.
If you require a full reboot, you can change the kexec-image setting as follows:
actions:
- name: "reboot"
image: public.ecr.aws/l0g8r8j6/tinkerbell/hub/reboot-action:latest
timeout: 90
volumes:
- /worker:/worker
Bottlerocket-specific actions
template.tasks.actions.write-bootconfig (Bottlerocket)
The write-bootconfig action identifies the location on the machine to put content needed to boot the system from disk.
- environment.BOOTCONFIG_CONTENTS.kernel: Add kernel parameters that are passed to the kernel when the system boots.
- environment.DEST_DISK: Identifies the block storage device that holds the boot partition.
- environment.DEST_PATH: Identifies the file holding boot configuration data (
/bootconfig.data
in this example).
- environment.DIRMODE: The Linux permissions assigned to the boot directory.
- environment.FS_TYPE: The filesystem type associated with the boot partition.
- environment.GID: The group ID associated with files and directories created on the boot partition.
- environment.MODE: The Linux permissions assigned to files in the boot partition.
- environment.UID: The user ID associated with files and directories created on the boot partition. UID 0 is the root user.
- image: Container image used to perform the steps needed by this action.
- timeout: Time needed to complete the action, in seconds.
template.tasks.actions.write-netconfig (Bottlerocket)
The write-netconfig action configures networking for the system.
- environment.CONTENTS: Add network values, including:
version = 1
(version number), [eno1]
(external network interface), dhcp4 = true
(turns on dhcp4), and primary = true
(identifies this interface as the primary interface used by kubelet).
- environment.DEST_DISK: Identifies the block storage device that holds the network configuration information.
- environment.DEST_PATH: Identifies the file holding network configuration data (
/net.toml
in this example).
- environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
- environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
- environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
- environment.MODE: The Linux permissions assigned to files in the partition.
- environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
- image: Container image used to perform the steps needed by this action.
template.tasks.actions.write-user-data (Bottlerocket)
The write-user-data action configures the Tinkerbell Hegel service, which provides the metadata store for Tinkerbell.
- environment.HEGEL_URL: The IP address and port number of the Tinkerbell Hegel
service.
- environment.DEST_DISK: Identifies the block storage device that holds the network configuration information.
- environment.DEST_PATH: Identifies the file holding network configuration data (
/net.toml
in this example).
- environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
- environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
- environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
- environment.MODE: The Linux permissions assigned to files in the partition.
- environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
- image: Container image used to perform the steps needed by this action.
- timeout: Time needed to complete the action, in seconds.
template.tasks.actions.reboot (Bottlerocket)
The reboot action defines how the system restarts to bring up the installed system.
- image: Container image used to perform the steps needed by this action.
- timeout: Time needed to complete the action, in seconds.
- volumes: The volume (directory) to mount into the container from the installed system.
version
Matches the current version of the Tinkerbell template.
Custom Tinkerbell action examples
By creating your own custom Tinkerbell actions, you can add to or modify the installed operating system so those changes take effect when the installed system first starts (from a reboot or pivot).
The following example shows how to add a .deb package (openssl
) to an Ubuntu installation:
- environment:
BLOCK_DEVICE: /dev/sda1
CHROOT: "y"
CMD_LINE: apt -y update && apt -y install openssl
DEFAULT_INTERPRETER: /bin/sh -c
FS_TYPE: ext4
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: install-openssl
timeout: 90
The following shows an example of adding a new user (tinkerbell
) to an installed Ubuntu system:
- environment:
BLOCK_DEVICE: <block device path> # E.g. /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "useradd --password $(openssl passwd -1 tinkerbell) --shell /bin/bash --create-home --groups sudo tinkerbell"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: "create-user"
timeout: 90
Look for more examples as they are added to the Tinkerbell examples
page.
5.1.2 - Nutanix configuration
Full EKS Anywhere configuration reference for a Nutanix cluster.
This is a generic template with detailed descriptions below for reference.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: mgmt
namespace: default
spec:
bundlesRef:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
name: bundles-2
namespace: eksa-system
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/16
controlPlaneConfiguration:
count: 3
endpoint:
host: ""
machineGroupRef:
kind: NutanixMachineConfig
name: mgmt-cp-machine
datacenterRef:
kind: NutanixDatacenterConfig
name: nutanix-cluster
kubernetesVersion: "1.25"
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: NutanixMachineConfig
name: mgmt-machine
name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixDatacenterConfig
metadata:
name: nutanix-cluster
namespace: default
spec:
endpoint: pc01.cloud.internal
port: 9440
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
annotations:
anywhere.eks.amazonaws.com/control-plane: "true"
name: mgmt-cp-machine
namespace: default
spec:
cluster:
name: nx-cluster-01
type: name
image:
name: eksa-ubuntu-2004-kube-v1.25
type: name
memorySize: 4Gi
osFamily: ubuntu
subnet:
name: vm-network
type: name
systemDiskSize: 40Gi
project:
type: name
name: my-project
users:
- name: eksa
sshAuthorizedKeys:
- ssh-rsa AAAA…
vcpuSockets: 2
vcpusPerSocket: 1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
name: mgmt-machine
namespace: default
spec:
cluster:
name: nx-cluster-01
type: name
image:
name: eksa-ubuntu-2004-kube-v1.25
type: name
memorySize: 4Gi
osFamily: ubuntu
subnet:
name: vm-network
type: name
systemDiskSize: 40Gi
project:
type: name
name: my-project
users:
- name: eksa
sshAuthorizedKeys:
- ssh-rsa AAAA…
vcpuSockets: 2
vcpusPerSocket: 1
---
Cluster Fields
name (required)
Name of your cluster (mgmt in this example).
clusterNetwork (required)
Specific network configuration for your Kubernetes cluster.
clusterNetwork.cniConfig (required)
CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium
.
clusterNetwork.cniConfig.cilium.policyEnforcementMode
Optionally, you may specify a policyEnforcementMode of default, always, or never.
clusterNetwork.pods.cidrBlocks[0] (required)
Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.
clusterNetwork.services.cidrBlocks[0] (required)
Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.
controlPlaneConfiguration (required)
Specific control plane configuration for your Kubernetes cluster.
controlPlaneConfiguration.count (required)
Number of control plane nodes
controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig
fields below.
controlPlaneConfiguration.endpoint.host (required)
A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.
NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of
the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster
creation process are here
.
workerNodeGroupConfigurations (required)
This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
workerNodeGroupConfigurations.count
Number of worker nodes. Optional if autoscalingConfiguration
is used, in which case count will default to autoscalingConfiguration.minCount
.
workerNodeGroupConfigurations.machineGroupRef (required)
Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig
fields below.
workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0
)
workerNodeGroupConfigurations.autoscalingConfiguration.minCount
Minimum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
Maximum number of nodes for this node group’s autoscaling configuration.
datacenterRef
Refers to the Kubernetes object with Nutanix environment specific configuration. See NutanixDatacenterConfig
fields below.
kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: 1.25
, 1.24
, 1.23
, 1.22
, 1.21
NutanixDatacenterConfig Fields
endpoint (required)
The Prism Central server fully qualified domain name or IP address. If the server IP is used, the PC SSL certificate must have an IP SAN configured.
port (required)
The Prism Central server port. (Default: 9440
)
insecure (optional)
Set insecure to true
if the Prism Central server does not have a valid certificate. This is not recommended for production use cases. (Default: false
)
additionalTrustBundle (optional; required if using a self-signed PC SSL certificate)
The PEM encoded CA trust bundle.
The additionalTrustBundle
needs to be populated with the PEM-encoded x509 certificate of the Root CA that issued the certificate for Prism Central. Suggestions on how to obtain this certificate are here
.
Example:
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
<certificate string>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<certificate string>
-----END CERTIFICATE-----
NutanixMachineConfig Fields
cluster
Reference to the Prism Element cluster.
cluster.type
Type to identify the Prism Element cluster. (Permitted values: name
or uuid
)
cluster.name
Name of the Prism Element cluster.
cluster.uuid
UUID of the Prism Element cluster.
image
Reference to the OS image used for the system disk.
image.type
Type to identify the OS image. (Permitted values: name
or uuid
)
image.name (name
or UUID
required)
Name of the image
image.uuid (name
or UUID
required)
UUID of the image
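The template above references the Prism Element cluster and image by name. A minimal sketch of the equivalent uuid-based references described by these fields (the UUID values are placeholders):
cluster:
  type: uuid
  uuid: "<prism-element-cluster-uuid>"
image:
  type: uuid
  uuid: "<image-uuid>"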
memorySize
Size of RAM on virtual machines (Default: 4Gi
)
osFamily (optional)
Operating System on virtual machines. (Permitted values: ubuntu
)
subnet
Reference to the subnet to be assigned to the VMs.
subnet.name (name
or UUID
required)
Name of the subnet.
subnet.type
Type to identify the subnet. (Permitted values: name
or uuid
)
subnet.uuid (name
or UUID
required)
UUID of the subnet.
systemDiskSize
Amount of storage assigned to the system disk. (Default: 40Gi
)
vcpuSockets
Number of vCPU sockets. (Default: 2
)
vcpusPerSocket
Number of vCPUs per socket. (Default: 1
)
project (optional)
Reference to an existing project used for the virtual machines.
project.type
Type to identify the project. (Permitted values: name
or uuid
)
project.name (name
or UUID
required)
Name of the project
project.uuid (name
or UUID
required)
UUID of the project
users (optional)
The users you want to configure to access your virtual machines. Only one is permitted at this time.
users[0].name (optional)
The name of the user you want to configure to access your virtual machines through ssh.
The default is eksa
if osFamily=ubuntu
users[0].sshAuthorizedKeys (optional)
The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.
users[0].sshAuthorizedKeys[0] (optional)
This is the SSH public key that will be placed in authorized_keys
on all EKS Anywhere cluster VMs so you can ssh into
them. The user will be what is defined under name above. For example:
ssh -i <private-key-file> <user>@<VM-IP>
If you do not specify a value, a key is generated in your $(pwd)/<cluster-name>
folder.
5.1.3 - Snow configuration
Full EKS Anywhere configuration reference for an AWS Snow cluster.
This is a generic template with detailed descriptions below for reference.
Additional optional configuration can also be included; see the Optional configuration section for details.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 10.1.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 3
endpoint:
host: ""
machineGroupRef:
kind: SnowMachineConfig
name: my-cluster-machines
datacenterRef:
kind: SnowDatacenterConfig
name: my-cluster-datacenter
externalEtcdConfiguration:
count: 3
machineGroupRef:
kind: SnowMachineConfig
name: my-cluster-machines
kubernetesVersion: "1.25"
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: SnowMachineConfig
name: my-cluster-machines
name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowDatacenterConfig
metadata:
name: my-cluster-datacenter
spec: {}
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
name: my-cluster-machines
spec:
amiID: ""
instanceType: sbe-c.large
sshKeyName: ""
osFamily: ubuntu
devices:
- ""
containersVolume:
size: 25
network:
directNetworkInterfaces:
- index: 1
primary: true
ipPoolRef:
kind: SnowIPPool
name: ip-pool-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowIPPool
metadata:
name: ip-pool-1
spec:
pools:
- ipStart: 192.168.1.2
ipEnd: 192.168.1.14
subnet: 192.168.1.0/24
gateway: 192.168.1.1
- ipStart: 192.168.1.55
ipEnd: 192.168.1.250
subnet: 192.168.1.0/24
gateway: 192.168.1.1
Cluster Fields
name (required)
Name of your cluster (my-cluster-name in this example).
clusterNetwork (required)
Specific network configuration for your Kubernetes cluster.
clusterNetwork.cniConfig (required)
CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium
.
clusterNetwork.cniConfig.cilium.policyEnforcementMode
Optionally, you may specify a policyEnforcementMode of default, always, or never.
clusterNetwork.pods.cidrBlocks[0] (required)
Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted.
This CIDR block should not conflict with the network subnet range selected for the devices.
clusterNetwork.services.cidrBlocks[0] (required)
Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted.
This CIDR block should not conflict with the network subnet range selected for the devices.
clusterNetwork.dns.resolvConf.path (optional)
Path to the file with a custom DNS resolver configuration.
controlPlaneConfiguration (required)
Specific control plane configuration for your Kubernetes cluster.
controlPlaneConfiguration.count (required)
Number of control plane nodes
controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields
below.
controlPlaneConfiguration.endpoint.host (required)
A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network
range that does not conflict with other devices.
NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of
the control plane nodes for kube-apiserver loadbalancing.
controlPlaneConfiguration.taints
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master
. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane
. The default control plane components will tolerate the provided taints.
Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.
NOTE: The taints provided will be used instead of the default control plane taint.
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
controlPlaneConfiguration.labels
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
workerNodeGroupConfigurations (required)
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.
workerNodeGroupConfigurations.count
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount
.
workerNodeGroupConfigurations.machineGroupRef (required)
Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields
below.
workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
workerNodeGroupConfigurations.autoscalingConfiguration.minCount
Minimum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
Maximum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.taints
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have NoSchedule
or NoExecute
taints applied to it.
workerNodeGroupConfigurations.labels
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
externalEtcdConfiguration.count
Number of etcd members.
externalEtcdConfiguration.machineGroupRef
Refers to the Kubernetes object with Snow specific configuration for your etcd members. See SnowMachineConfig Fields
below.
datacenterRef
Refers to the Kubernetes object with Snow environment specific configuration. See SnowDatacenterConfig Fields
below.
kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: 1.25
, 1.24
, 1.23
, 1.22
, 1.21
SnowDatacenterConfig Fields
identityRef
Refers to the Kubernetes secret object with Snow devices credentials used to reconcile the cluster.
SnowMachineConfig Fields
amiID (optional)
AMI ID from which to create the machine instance. If this field is empty, the Snow provider performs an AMI lookup to find a suitable AMI based on the Kubernetes version and osFamily.
instanceType (optional)
Type of the Snow EC2 machine instance. See Quotas for Compute Instances on a Snowball Edge Device
for supported instance types on Snow (Default: sbe-c.large
).
osFamily
Operating System on instance machines. Permitted value: ubuntu.
physicalNetworkConnector (optional)
Type of Snow physical network connector to use for creating direct network interfaces. Permitted values: SFP_PLUS
, QSFP
, RJ45
(Default: SFP_PLUS
).
sshKeyName (optional)
Name of the AWS Snow SSH key pair you want to configure to access your machine instances.
The default is eksa-default-{cluster-name}-{uuid}
.
devices
A device IP list from which to bootstrap and provision machine instances.
network
Custom network setting for the machine instances. DHCP and static IP configurations are supported.
network.directNetworkInterfaces[0].index (optional)
Index number of a direct network interface (DNI) used to clarify the position in the list. Must be no smaller than 1 and no greater than 8.
network.directNetworkInterfaces[0].primary (optional)
Whether the DNI is primary or not. One and only one primary DNI is required in the directNetworkInterfaces list.
network.directNetworkInterfaces[0].vlanID (optional)
VLAN ID to use for the DNI.
network.directNetworkInterfaces[0].dhcp (optional)
Whether DHCP is to be used to assign IP for the DNI.
network.directNetworkInterfaces[0].ipPoolRef (optional)
Refers to a SnowIPPool
object which provides a range of IP addresses. When specified, an IP address selected from the pool will be allocated to the DNI.
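For example, a sketch of a DHCP-based DNI using the fields above, as an alternative to the ipPoolRef shown in the template (the VLAN ID is a placeholder):
network:
  directNetworkInterfaces:
  - index: 1
    vlanID: 100      # placeholder VLAN ID
    dhcp: true       # assign the DNI address via DHCP instead of an ipPoolRef
    primary: true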
containersVolume (optional)
Configuration option for customizing containers data storage volume.
containersVolume.size
Size of the storage for containerd runtime in Gi.
The field is optional for Ubuntu; if specified, the size must be no smaller than 8 Gi.
SnowIPPool Fields
pools[0].ipStart
Start address of an IP range.
pools[0].ipEnd
End address of an IP range.
pools[0].subnet
An IP subnet for determining whether an IP is within the subnet.
pools[0].gateway
Gateway of the subnet for routing purposes.
5.1.4 - vSphere configuration
Full EKS Anywhere configuration reference for a VMware vSphere cluster.
This is a generic template with detailed descriptions below for reference.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name # Name of the cluster (required)
spec:
clusterNetwork: # Cluster network configuration (required)
cniConfig: # Cluster CNI plugin - default: cilium (required)
cilium: {}
pods:
cidrBlocks: # Subnet CIDR notation for pods (required)
- 192.168.0.0/16
services:
cidrBlocks: # Subnet CIDR notation for services (required)
- 10.96.0.0/12
controlPlaneConfiguration: # Specific cluster control plane config (required)
count: 2 # Number of control plane nodes (required)
endpoint: # IP for control plane endpoint (required)
host: "192.168.0.10"
machineGroupRef: # vSphere-specific Kubernetes node config (required)
kind: VSphereMachineConfig
name: my-cluster-machines
taints: # Taints applied to control plane nodes
- key: "key1"
value: "value1"
effect: "NoSchedule"
labels: # Labels applied to control plane nodes
"key1": "value1"
"key2": "value2"
datacenterRef: # Kubernetes object with vSphere-specific config
kind: VSphereDatacenterConfig
name: my-cluster-datacenter
externalEtcdConfiguration:
count: 3 # Number of etcd members
machineGroupRef: # vSphere-specific Kubernetes etcd config
kind: VSphereMachineConfig
name: my-cluster-machines
kubernetesVersion: "1.25" # Kubernetes version to use for the cluster (required)
workerNodeGroupConfigurations: # List of node groups you can define for workers (required)
- count: 2 # Number of worker nodes
machineGroupRef: # vSphere-specific Kubernetes node objects (required)
kind: VSphereMachineConfig
name: my-cluster-machines
name: md-0 # Name of the worker nodegroup (required)
taints: # Taints to apply to worker node group nodes
- key: "key1"
value: "value1"
effect: "NoSchedule"
labels: # Labels to apply to worker node group nodes
"key1": "value1"
"key2": "value2"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
name: my-cluster-datacenter
spec:
datacenter: "datacenter1" # vSphere datacenter name on which to deploy EKS Anywhere (required)
disableCSI: false # Set to true to not have EKS Anywhere install and manage vSphere CSI driver
server: "myvsphere.local" # FQDN or IP address of vCenter server (required)
network: "network1" # Path to the VM network on which to deploy EKS Anywhere (required)
insecure: false # Set to true if vCenter does not have a valid certificate
thumbprint: "1E:3B:A1:4C:B2:..." # SHA1 thumprint of vCenter server certificate (required if insecure=false)
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
name: my-cluster-machines
spec:
diskGiB: 25 # Size of disk on VMs, if no snapshots
datastore: "datastore1" # Path to vSphere datastore to deploy EKS Anywhere on (required)
folder: "folder1" # Path to VM folder for EKS Anywhere cluster VMs (required)
numCPUs: 2 # Number of CPUs on virtual machines
memoryMiB: 8192 # Size of RAM on VMs
osFamily: "bottlerocket" # Operating system on VMs
resourcePool: "resourcePool1" # vSphere resource pool for EKS Anywhere VMs (required)
storagePolicyName: "storagePolicy1" # Storage policy name associated with VMs
template: "bottlerocket-kube-v1-25" # VM template for EKS Anywhere (required for RHEL/Ubuntu-based OVAs)
users: # Add users to access VMs via SSH
- name: "ec2-user" # Name of each user set to access VMs
sshAuthorizedKeys: # SSH keys for user needed to access VMs
- "ssh-rsa AAAAB3NzaC1yc2E..."
tags: # List of tags to attach to cluster VMs, in URN format
- "urn:vmomi:InventoryServiceTag:5b3e951f-4e1d-4511-95b1-5ba1ea97245c:GLOBAL"
- "urn:vmomi:InventoryServiceTag:cfee03d0-0189-4f27-8c65-fe75086a86cd:GLOBAL"
Additional optional configuration can also be included; see the Optional configuration section for details.
Cluster Fields
name (required)
Name of your cluster (my-cluster-name in this example).
clusterNetwork (required)
Specific network configuration for your Kubernetes cluster.
clusterNetwork.cniConfig (required)
CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium
.
clusterNetwork.cniConfig.cilium.policyEnforcementMode
Optionally, you may specify a policyEnforcementMode of default, always, or never.
clusterNetwork.pods.cidrBlocks[0] (required)
Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted.
This CIDR block should not conflict with the network subnet range selected for the VMs.
clusterNetwork.services.cidrBlocks[0] (required)
Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted.
This CIDR block should not conflict with the network subnet range selected for the VMs.
clusterNetwork.dns.resolvConf.path (optional)
Path to the file with a custom DNS resolver configuration.
controlPlaneConfiguration (required)
Specific control plane configuration for your Kubernetes cluster.
controlPlaneConfiguration.count (required)
Number of control plane nodes
controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields
below.
controlPlaneConfiguration.endpoint.host (required)
A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network
range that does not conflict with other VMs.
NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of
the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster
creation process are here.
controlPlaneConfiguration.taints
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master
. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane
. The default control plane components will tolerate the provided taints.
Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.
NOTE: The taints provided will be used instead of the default control plane taint.
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
controlPlaneConfiguration.labels
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
workerNodeGroupConfigurations (required)
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.
workerNodeGroupConfigurations.count
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount
.
workerNodeGroupConfigurations.machineGroupRef (required)
Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields
below.
workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
workerNodeGroupConfigurations.autoscalingConfiguration.minCount
Minimum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
Maximum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.taints
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have NoSchedule
or NoExecute
taints applied to it.
workerNodeGroupConfigurations.labels
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
externalEtcdConfiguration.count
Number of etcd members
externalEtcdConfiguration.machineGroupRef
Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See VSphereMachineConfig Fields
below.
datacenterRef
Refers to the Kubernetes object with vsphere environment specific configuration. See VSphereDatacenterConfig Fields
below.
kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: 1.25
, 1.24
, 1.23
, 1.22
, 1.21
VSphereDatacenterConfig Fields
datacenter (required)
The name of the vSphere datacenter to deploy the EKS Anywhere cluster on. For example SDDC-Datacenter
.
network (required)
The path to the VM network to deploy your EKS Anywhere cluster on. For example, /<DATACENTER>/network/<NETWORK_NAME>
.
Use govc find -type n
to see a list of networks.
server (required)
The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint
must be set
or insecure
must be set to true.
insecure (optional)
Set insecure to true
if the vCenter server does not have a valid certificate. (Default: false)
thumbprint (required if insecure=false)
The SHA1 thumbprint of the vCenter server certificate, which is only required if you have a self-signed certificate.
There are several ways to obtain your vCenter thumbprint. The easiest way is if you have govc
installed, you
can run:
govc about.cert -thumbprint -k
Another way is from the vCenter web UI: go to Administration/Certificate Management and view the details of the
machine certificate. The thumbprint shown there does not exactly match the required format, though, and you will
need to add a :
to separate each pair of hexadecimal digits.
Another way to get the thumbprint is to use this command with your server's certificate in a file named ca.crt
:
openssl x509 -sha1 -fingerprint -in ca.crt -noout
If you specify the wrong thumbprint, an error message will be printed with the expected thumbprint. If no valid
certificate is being used, insecure
must be set to true.
disableCSI (optional)
Set disableCSI
to true
if you don’t want to have EKS Anywhere install and manage the vSphere CSI driver for you.
More details on the driver are here
NOTE: If you upgrade a cluster and disable the vSphere CSI driver after it has already been installed by EKS Anywhere,
you will need to remove the resources manually from the cluster. Delete the DaemonSet
and Deployment
first, as they
rely on the other resources. This should be done after setting disableCSI
to true
and running upgrade cluster
.
These are the resources you would need to delete:
vsphere-csi-controller-role
(kind: ClusterRole)
vsphere-csi-controller-binding
(kind: ClusterRoleBinding)
csi.vsphere.vmware.com
(kind: CSIDriver)
These are the resources you would need to delete
in the kube-system
namespace:
vsphere-csi-controller
(kind: ServiceAccount)
csi-vsphere-config
(kind: Secret)
vsphere-csi-node
(kind: DaemonSet)
vsphere-csi-controller
(kind: Deployment)
These are the resources you would need to delete
in the eksa-system
namespace from the management cluster.
<cluster-name>-csi
(kind: ClusterResourceSet)
Note: If your cluster is self-managed, you would delete <cluster-name>-csi
(kind: ClusterResourceSet) from the same cluster.
VSphereMachineConfig Fields
memoryMiB (optional)
Size of RAM on virtual machines (Default: 8192)
numCPUs (optional)
Number of CPUs on virtual machines (Default: 2)
osFamily (optional)
Operating System on virtual machines. Permitted values: bottlerocket, ubuntu, redhat (Default: bottlerocket)
diskGiB (optional)
Size of disk on virtual machines if snapshots aren’t included (Default: 25)
users (optional)
The users you want to configure to access your virtual machines. Only one is permitted at this time
users[0].name (optional)
The name of the user you want to configure to access your virtual machines through ssh.
The default is ec2-user
if osFamily=bottlerocket
and capv
if osFamily=ubuntu
users[0].sshAuthorizedKeys (optional)
The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.
users[0].sshAuthorizedKeys[0] (optional)
This is the SSH public key that will be placed in authorized_keys
on all EKS Anywhere cluster VMs so you can ssh into
them. The user will be what is defined under name above. For example:
ssh -i <private-key-file> <user>@<VM-IP>
If you do not specify a value, a key is generated in your $(pwd)/<cluster-name>
folder.
template (optional)
The VM template to use for your EKS Anywhere cluster. This template was created when you
imported the OVA file into vSphere
.
This is a required field if you are using Ubuntu-based or RHEL-based OVAs.
datastore (required)
The path to the vSphere datastore
to deploy your EKS Anywhere cluster on, for example /<DATACENTER>/datastore/<DATASTORE_NAME>
.
Use govc find -type s
to get a list of datastores.
folder (required)
The path to a VM folder for your EKS Anywhere cluster VMs. This allows you to organize your VMs. If the folder does not exist,
it will be created for you. If the folder is blank, the VMs will go in the root folder.
For example /<DATACENTER>/vm/<FOLDER_NAME>/...
.
Use govc find -type f
to get a list of existing folders.
resourcePool (required)
The vSphere Resource pools
for your VMs in the EKS Anywhere cluster. Examples of resource pool values include:
- If there is no resource pool:
/<datacenter>/host/<cluster-name>/Resources
- If there is a resource pool:
/<datacenter>/host/<cluster-name>/Resources/<resource-pool-name>
- The wild card option
*/Resources
also often works.
Use govc find -type p
to get a list of available resource pools.
storagePolicyName (optional)
The storage policy name associated with your VMs. Generally this can be left blank.
Use govc storage.policy.ls
to get a list of available storage policies.
tags (optional)
Optional list of tags to attach to your cluster VMs, in URN format.
Example:
tags:
- urn:vmomi:InventoryServiceTag:8e0ce079-0675-47d6-8665-16ada4e6dabd:GLOBAL
Optional VSphere Credentials
Use the following environment variables to configure Cloud Provider and CSI Driver with different credentials.
EKSA_VSPHERE_CP_USERNAME
Username for Cloud Provider (Default: $EKSA_VSPHERE_USERNAME).
EKSA_VSPHERE_CP_PASSWORD
Password for Cloud Provider (Default: $EKSA_VSPHERE_PASSWORD).
EKSA_VSPHERE_CSI_USERNAME
Username for CSI Driver (Default: $EKSA_VSPHERE_USERNAME).
EKSA_VSPHERE_CSI_PASSWORD
Password for CSI Driver (Default: $EKSA_VSPHERE_PASSWORD).
5.1.5 - CloudStack configuration
Full EKS Anywhere configuration reference for a CloudStack cluster.
This is a generic template with detailed descriptions below for reference.
Additional optional configuration can also be included; see the Optional configuration section for details.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 3
endpoint:
host: ""
machineGroupRef:
kind: CloudStackMachineConfig
name: my-cluster-name-cp
taints:
- key: ""
value: ""
effect: ""
labels:
"<key1>": ""
"<key2>": ""
datacenterRef:
kind: CloudStackDatacenterConfig
name: my-cluster-name-datacenter
externalEtcdConfiguration:
count: 3
machineGroupRef:
kind: CloudStackMachineConfig
name: my-cluster-name-etcd
kubernetesVersion: "1.23"
managementCluster:
name: my-cluster-name
workerNodeGroupConfigurations:
- count: 2
machineGroupRef:
kind: CloudStackMachineConfig
name: my-cluster-name
taints:
- key: ""
value: ""
effect: ""
labels:
"<key1>": ""
"<key2>": ""
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
name: my-cluster-name-datacenter
spec:
availabilityZones:
- account: admin
credentialsRef: global
domain: domain1
managementApiEndpoint: ""
name: az-1
zone:
name: zone1
network:
name: "net1"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
name: my-cluster-name-cp
spec:
computeOffering:
name: "m4-large"
users:
- name: capc
sshAuthorizedKeys:
- ssh-rsa AAAA...
template:
name: "rhel8-k8s-118"
diskOffering:
name: "Small"
mountPath: "/data-small"
device: "/dev/vdb"
filesystem: "ext4"
label: "data_disk"
symlinks:
/var/log/kubernetes: /data-small/var/log/kubernetes
affinityGroupIds:
- control-plane-anti-affinity
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
name: my-cluster-name
spec:
computeOffering:
name: "m4-large"
users:
- name: capc
sshAuthorizedKeys:
- ssh-rsa AAAA...
template:
name: "rhel8-k8s-118"
diskOffering:
name: "Small"
mountPath: "/data-small"
device: "/dev/vdb"
filesystem: "ext4"
label: "data_disk"
symlinks:
/var/log/pods: /data-small/var/log/pods
/var/log/containers: /data-small/var/log/containers
affinityGroupIds:
- worker-affinity
userCustomDetails:
foo: bar
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
name: my-cluster-name-etcd
spec:
computeOffering:
name: "m4-large"
users:
- name: "capc"
sshAuthorizedKeys:
- "ssh-rsa AAAAB3N...
template:
name: "rhel8-k8s-118"
diskOffering:
name: "Small"
mountPath: "/data-small"
device: "/dev/vdb"
filesystem: "ext4"
label: "data_disk"
symlinks:
/var/lib: /data-small/var/lib
affinityGroupIds:
- etcd-affinity
---
Cluster Fields
name (required)
Name of your cluster (my-cluster-name in this example).
clusterNetwork (required)
Specific network configuration for your Kubernetes cluster.
clusterNetwork.cniConfig (required)
CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium
.
clusterNetwork.cniConfig.cilium.policyEnforcementMode
Optionally, you may specify a policyEnforcementMode of default, always, or never.
clusterNetwork.pods.cidrBlocks[0] (required)
Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted.
This CIDR block should not conflict with the network subnet range selected for the VMs.
clusterNetwork.services.cidrBlocks[0] (required)
Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted.
This CIDR block should not conflict with the network subnet range selected for the VMs.
controlPlaneConfiguration (required)
Specific control plane configuration for your Kubernetes cluster.
controlPlaneConfiguration.count (required)
Number of control plane nodes
controlPlaneConfiguration.endpoint.host (required)
A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network
range that does not conflict with other VMs.
NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of
the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster
creation process are here.
controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields
below.
controlPlaneConfiguration.taints
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint, node-role.kubernetes.io/master
. The default control plane components will tolerate the provided taints.
Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.
NOTE: The taints provided will be used instead of the default control plane taint node-role.kubernetes.io/master
.
Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
controlPlaneConfiguration.labels
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that
EKS Anywhere will add by default.
A special label value is supported by the CAPC provider:
labels:
cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain
The ds.meta_data.failuredomain
value will be replaced with a failuredomain name where the node is deployed, such as az-1
.
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing
the existing nodes.
datacenterRef
Refers to the Kubernetes object with CloudStack environment specific configuration. See CloudStackDatacenterConfig Fields
below.
externalEtcdConfiguration.count
Number of etcd members
externalEtcdConfiguration.machineGroupRef
Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See CloudStackMachineConfig Fields
below.
kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: 1.24
, 1.23
, 1.22
, 1.21
managementCluster (required)
Identifies the name of the management cluster.
If this is a standalone cluster, or if it serves as the management cluster for other workload clusters, this will be the same as the cluster name.
workerNodeGroupConfigurations (required)
This takes in a list of node groups that you can define for your workers.
You may define one or more worker node groups.
workerNodeGroupConfigurations.count
Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount
.
workerNodeGroupConfigurations.machineGroupRef (required)
Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields
below.
workerNodeGroupConfigurations.name (required)
Name of the worker node group (default: md-0)
workerNodeGroupConfigurations.autoscalingConfiguration.minCount
Minimum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.autoscalingConfiguration.maxCount
Maximum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations.taints
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have NoSchedule
or NoExecute
taints applied to it.
workerNodeGroupConfigurations.labels
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that
EKS Anywhere will add by default.
A special label value is supported by the CAPC provider:
labels:
cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain
The ds.meta_data.failuredomain
value will be replaced with a failuredomain name where the node is deployed, such as az-1
.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing
the existing nodes associated with the configuration.
CloudStackDatacenterConfig
availabilityZones.account (optional)
Account used to access CloudStack.
As long as you pass valid credentials through availabilityZones.credentialsRef
, this value is not required.
availabilityZones.credentialsRef (required)
If you passed credentials through the environment variable EKSA_CLOUDSTACK_B64ENCODED_SECRET
noted in Create CloudStack production cluster
, you can identify those credentials here.
For that example, you would use the profile name global
.
You can instead use a previously created secret on the Kubernetes cluster in the eksa-system
namespace.
availabilityZones.domain (optional)
CloudStack domain to deploy the cluster. The default is ROOT
.
availabilityZones.managementApiEndpoint (required)
Location of the CloudStack API management endpoint. For example, http://10.11.0.2:8080/client/api
.
availabilityZones.zone.{id,name} (required)
Name or ID of the CloudStack zone on which to deploy the cluster.
availabilityZones.zone.network.{id,name} (required)
CloudStack network name or ID to use with the cluster.
CloudStackMachineConfig
In the example above, there are separate CloudStackMachineConfig
sections for the control plane (my-cluster-name-cp
), worker (my-cluster-name
) and etcd (my-cluster-name-etcd
) nodes.
computeOffering.{id,name} (required)
Name or ID of the CloudStack compute offering.
users[0].name (optional)
The name of the user you want to configure to access your virtual machines through ssh.
You can add as many users objects as you want.
The default is capc
.
users[0].sshAuthorizedKeys (optional)
The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.
users[0].sshAuthorizedKeys[0] (optional)
This is the SSH public key that will be placed in authorized_keys
on all EKS Anywhere cluster VMs so you can ssh into
them. The user will be what is defined under name above. For example:
ssh -i <private-key-file> <user>@<VM-IP>
If you do not specify a value, a key is generated in your $(pwd)/<cluster-name>
folder.
template.{id,name} (required)
The VM template to use for your EKS Anywhere cluster. Currently, a VM based on RHEL 8.6 is required.
This can be a name or ID.
See the Artifacts
page for instructions for building RHEL-based images.
diskOffering (optional)
Name representing a disk you want to mount into nodes for this CloudStackMachineConfig
diskOffering.mountPath (optional)
Mount point on which to mount the disk.
diskOffering.device (optional)
Device name of the disk partition to mount.
diskOffering.filesystem (optional)
File system type used to format the filesystem on the disk.
diskOffering.label (optional)
Label to apply to the disk partition.
symlinks (optional)
Symbolic link of a directory or file you want to mount from the host filesystem to the mounted filesystem.
userCustomDetails (optional)
Add key/value pairs to nodes in a CloudStackMachineConfig
.
These can be used for things like identifying sets of nodes that you want to add to a security group that opens selected ports.
affinityGroupIds (optional)
Group ID to attach to the set of host systems to indicate how affinity is done for services on those systems.
affinity (optional)
Allows you to set pro and anti affinity for the CloudStackMachineConfig.
This can be used in a mutually exclusive fashion with the affinityGroupIds field.
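For example, a minimal sketch of a machine config that sets affinity instead of affinityGroupIds, based on the control plane machine config from the template above (the value anti is illustrative; pro is the other value described above):
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  computeOffering:
    name: "m4-large"
  affinity: "anti"   # use either affinity or affinityGroupIds, not both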
5.1.6 - Optional configuration
Config reference to optional features for EKS Anywhere clusters
5.1.6.1 - Autoscaling configuration
EKS Anywhere cluster yaml autoscaling configuration specification reference
Cluster Autoscaling (Optional)
Cluster Autoscaler configuration in EKS Anywhere cluster spec
EKS Anywhere supports autoscaling worker node groups using the Kubernetes Cluster Autoscaler
’s clusterapi
cloudProvider.
- Configure a worker node group to be picked up by a cluster autoscaler deployment by adding an
autoscalingConfiguration
block to the workerNodeGroupConfiguration
:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
workerNodeGroupConfigurations:
- autoscalingConfiguration:
minCount: 1
maxCount: 5
machineGroupRef:
kind: VSphereMachineConfig
name: worker-machine-a
name: md-0
- count: 1
autoscalingConfiguration:
minCount: 1
maxCount: 3
machineGroupRef:
kind: VSphereMachineConfig
name: worker-machine-b
name: md-1
Note that if no count
is specified it will default to the minCount
value.
EKS Anywhere will automatically apply the following annotations to your MachineDeployment objects:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: <minCount>
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: <maxCount>
After deploying the Kubernetes Cluster Autoscaler from upstream or as a curated package
, the deployment will pick up your MachineDeployment and scale the nodes as per your min and max count values.
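For instance, with the md-0 group above (minCount 1, maxCount 5), the underlying MachineDeployment would carry annotations like the following sketch; the object name shown is illustrative, since the MachineDeployment itself is created and managed by EKS Anywhere:
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-name-md-0   # illustrative name
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"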
Cluster Autoscaler Deployment Topologies
The Kubernetes Cluster Autoscaler can only scale a single cluster per deployment.
This means that each cluster you want to scale will need its own cluster autoscaler deployment.
We support three deployment topologies:
- Cluster Autoscaler deployed in the management cluster to autoscale the management cluster itself
- Cluster Autoscaler deployed in the management cluster to autoscale a remote workload cluster
- Cluster Autoscaler deployed in the workload cluster to autoscale the workload cluster itself
If your cluster architecture supports management clusters with resources to run additional workloads, you may want to consider using deployment topologies (1) and (2). Instructions for using this topology can be found here
.
If your deployment topology runs small management clusters though, you may want to follow deployment topology (3) and deploy the cluster autoscaler to run in a workload cluster
.
5.1.6.2 - CNI plugin configuration
EKS Anywhere cluster yaml cni plugin specification reference
Specifying CNI Plugin in EKS Anywhere cluster spec
EKS Anywhere currently supports two CNI plugins: Cilium and Kindnetd. Only one of them can be selected
for a cluster, and the plugin cannot be changed once the cluster is created.
Up until the 0.7.x releases, the plugin had to be specified using the cni
field on cluster spec.
Starting with release 0.8, the plugin should be specified using the new cniConfig
field as follows:
- For selecting Cilium as the CNI plugin:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium: {}
EKS Anywhere selects this as the default plugin when generating a cluster config.
- Or for selecting Kindnetd as the CNI plugin:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
kindnetd: {}
NOTE: EKS Anywhere allows specifying only 1 plugin for a cluster and does not allow switching the plugins
after the cluster is created.
Policy Configuration options for Cilium plugin
Cilium accepts policy enforcement modes from the users to determine the allowed traffic between pods.
The allowed values for this mode are: default
, always
and never
.
Please refer to the official Cilium documentation
for more details on how each mode affects
the communication within the cluster and choose a mode accordingly.
You can choose to not set this field so that cilium will be launched with the default
mode.
Starting release 0.8, Cilium’s policy enforcement mode can be set through the cluster spec
as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium:
policyEnforcementMode: "always"
Please note that if the always
mode is selected, all communication between pods is blocked unless
NetworkPolicy objects allowing communication are created.
In order to ensure that the cluster gets created successfully, EKS Anywhere will create the required
NetworkPolicy objects for all its core components. But it is up to the user to create the NetworkPolicy
objects needed for the user workloads once the cluster is created.
Network policies created by EKS Anywhere for “always” mode
As mentioned above, if Cilium is configured with policyEnforcementMode
set to always
,
EKS Anywhere creates NetworkPolicy objects to enable communication between
its core components. EKS Anywhere will create NetworkPolicy resources in the following namespaces allowing all ingress/egress traffic by default:
- kube-system
- eksa-system
- All core Cluster API namespaces:
- capi-system
- capi-kubeadm-bootstrap-system
- capi-kubeadm-control-plane-system
- etcdadm-bootstrap-provider-system
- etcdadm-controller-system
- cert-manager
- Infrastructure provider’s namespace (for instance, capd-system OR capv-system)
- If Gitops is enabled, then the gitops namespace (flux-system by default)
This is the NetworkPolicy that will be created in these namespaces for the cluster:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-ingress-egress
namespace: test
spec:
podSelector: {}
ingress:
- {}
egress:
- {}
policyTypes:
- Ingress
- Egress
Switching the Cilium policy enforcement mode
The policy enforcement mode for Cilium can be changed as part of a cluster upgrade through the CLI upgrade command.
- Switching to always mode: When switching from default/never to always mode, EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above). This ensures that the cluster gets upgraded successfully. However, it is up to the user to create the NetworkPolicy objects required for the user workloads.
- Switching from always mode: When switching from always to default mode, EKS Anywhere will not delete any of the existing NetworkPolicy objects, including the ones required for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.
Node IPs configuration option
Starting with release v0.10, the node-cidr-mask-size flag for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster spec. Because clusterNetwork.nodes is an optional field, it is not generated by the generate clusterconfig command. The nodes block will need to be manually added to the cluster spec under the clusterNetwork section:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium: {}
nodes:
cidrMaskSize: 24
If the user does not specify the clusterNetwork.nodes field in the cluster yaml spec, the value for this flag defaults to 24 for IPv4.
Please note that this mask size needs to be greater than the pods CIDR mask size. In the above spec, the pods CIDR mask size is 16 and the node CIDR mask size is 24. This gives the cluster 256 blocks of /24 networks. For example, node1 will get 192.168.0.0/24, node2 will get 192.168.1.0/24, node3 will get 192.168.2.0/24 and so on.
To support more than 256 nodes, either enlarge the pods CIDR block or increase the node CIDR mask size so that the pods CIDR contains enough node-sized blocks.
For instance, to support 1024 nodes, a user can do either of the following (see the sketch below for the first option):
- Set the pods cidr blocks to 192.168.0.0/16 and the node cidr mask size to 26
- Set the pods cidr blocks to 192.168.0.0/15 and the node cidr mask size to 25
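As a sketch, the first option would look like this in the cluster spec; the CIDR values are simply the ones from the example above:
clusterNetwork:
  pods:
    cidrBlocks:
    - 192.168.0.0/16
  services:
    cidrBlocks:
    - 10.96.0.0/12
  cniConfig:
    cilium: {}
  nodes:
    cidrMaskSize: 26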
Please note that the node-cidr-mask-size also needs to leave enough IP addresses for the number of pods you want to run on each node. A mask size of 24 provides enough IP addresses for about 250 pods per node, whereas a mask size of 26 provides only about 60.
This is an immutable field, and the value can’t be updated once the cluster has been created.
5.1.6.3 - IAM for Pods configuration
EKS Anywhere cluster spec for Pod IAM (IRSA)
IAM Role for Service Account on EKS Anywhere clusters with self-hosted signing keys
IAM Roles for Service Account (IRSA) enables applications running in clusters to authenticate with AWS services using IAM roles. The current solution for leveraging this in EKS Anywhere involves creating your own OIDC provider for the cluster, and hosting your cluster’s public service account signing key. The public keys, along with the OIDC discovery document, should be hosted somewhere that AWS STS can discover them. The steps below assume the keys will be hosted on a publicly accessible S3 bucket. Refer to this doc to ensure that the S3 bucket is publicly accessible.
The steps below are based on the guide for configuring IRSA for DIY Kubernetes,
with modifications specific to EKS Anywhere’s cluster provisioning workflow. The main modification is the process of generating the keys.json document. As per the original guide, the user has to create the service account signing keys, and then use that to create the keys.json document prior to cluster creation. This order is reversed for EKS Anywhere clusters, so you will create the cluster first, and then retrieve the service account signing key generated by the cluster, and use it to create the keys.json document. The sections below show how to do this in detail.
Create an OIDC provider and make its discovery document publicly accessible
- Create an S3 bucket to host the public signing keys and OIDC discovery document for your cluster as per this section. Ensure you follow all the steps and save the $HOSTNAME and $ISSUER_HOSTPATH values.
- Create the OIDC discovery document as follows:
cat <<EOF > discovery.json
{
"issuer": "https://$ISSUER_HOSTPATH",
"jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
"authorization_endpoint": "urn:kubernetes:programmatic_authorization",
"response_types_supported": [
"id_token"
],
"subject_types_supported": [
"public"
],
"id_token_signing_alg_values_supported": [
"RS256"
],
"claims_supported": [
"sub",
"iss"
]
}
EOF
- Upload it to the publicly accessible S3 bucket:
aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
- Create an OIDC provider for your cluster. Set the Provider URL to https://$ISSUER_HOSTPATH, and the audience to sts.amazonaws.com.
- Note down the Provider field of the OIDC provider after it is created.
- Assign an IAM role to this OIDC provider.
- To do so from the AWS console, select the OIDC provider and click Assign role at the top right.
- Select Create a new role.
- In the Select type of trusted entity section, choose Web identity.
- In the Choose a web identity provider section:
  - For Identity provider, choose the auto-selected Identity Provider URL for your cluster.
  - For Audience, choose sts.amazonaws.com.
- Choose Next: Permissions.
- In the Attach Policy section, select the IAM policy that has the permissions that you want your applications running in the pods to use.
- Continue with the next sections to add tags if desired, give the role a suitable name, and create the role.
- Below is a sample trust policy of an IAM role for your pods. Remember to replace Account ID and ISSUER_HOSTPATH with the required values.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::111122223333:oidc-provider/ISSUER_HOSTPATH"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"__doc_comment": "scope the role to the service account (optional)",
"StringEquals": {
"ISSUER_HOSTPATH:sub": "system:serviceaccount:default:my-serviceaccount"
},
"__doc_comment": "OR scope the role to a namespace (optional)",
"StringLike": {
"ISSUER_HOSTPATH/CLUSTER_ID:sub": ["system:serviceaccount:default:*","system:serviceaccount:observability:*"]
}
}
}
]
}
- After the role is created, note down the name of this IAM Role as OIDC_IAM_ROLE.
- Once the cluster is created, you can create service accounts and grant them this role by editing the trust relationship of this role. You can use the StringLike condition to add the required service accounts, as mentioned in the sample above. Please also refer to the section Configure the trust relationship for the OIDC provider’s IAM Role.
Create the EKS Anywhere cluster
- When creating the EKS Anywhere cluster, you need to configure the kube-apiserver’s service-account-issuer flag so it can issue and mount projected service account tokens in pods. For this, use the value obtained in the first section for $ISSUER_HOSTPATH as the service-account-issuer. Configure the kube-apiserver by setting this value through the EKS Anywhere cluster spec as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
podIamConfig:
serviceAccountIssuer: https://$ISSUER_HOSTPATH
Set the remaining fields in the cluster spec as required and create the cluster using the eksctl anywhere create cluster command.
Generate keys.json and make it publicly accessible
- The cluster provisioning workflow generates a pair of service account signing keys. Retrieve the public signing key generated and used by the cluster, and create a keys.json document containing the public signing key.
git clone https://github.com/aws/amazon-eks-pod-identity-webhook
cd amazon-eks-pod-identity-webhook
kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
go run ./hack/self-hosted/main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json
- Upload the keys.json document to the S3 bucket.
aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
Deploy pod identity webhook
- After hosting the service account public signing key and OIDC discovery documents, the applications running in pods can start accessing the desired AWS resources, as long as the pods are mounted with the right service account tokens. This part of configuring the pods with the right service account tokens and env vars is automated by the Amazon pod identity webhook. Once the webhook is deployed, it mutates any pods launched using service accounts annotated with eks.amazonaws.com/role-arn.
- Clone amazon-eks-pod-identity-webhook if not done already.
- Set the $KUBECONFIG env var to the path of the EKS Anywhere cluster’s kubeconfig file.
- Update amazon-eks-pod-identity-webhook/deploy/auth.yaml with OIDC_IAM_ROLE and other annotations as shown in the sample below.
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-serviceaccount
namespace: default
annotations:
# set this with value of OIDC_IAM_ROLE
eks.amazonaws.com/role-arn: "arn:aws:iam::111122223333:role/s3-reader"
# optional: Defaults to "sts.amazonaws.com" if not set
eks.amazonaws.com/audience: "sts.amazonaws.com"
# optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
# to containers
eks.amazonaws.com/sts-regional-endpoints: "true"
# optional: Defaults to 86400 for expirationSeconds if not set
# Note: This value can be overwritten if specified in the pod
# annotation as shown in the next step.
eks.amazonaws.com/token-expiration: "86400"
- Run the following command:
make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest
- You can validate IRSA by using the test steps mentioned here. Ensure the awscli pod is deployed in the same namespace as the ServiceAccount pod-identity-webhook.
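As a rough sketch, a test pod like the following can be launched with the annotated service account; the pod name and image are placeholders (not part of the EKS Anywhere tooling). Once the webhook mutates the pod, running aws sts get-caller-identity inside the container should return the assumed OIDC_IAM_ROLE:
apiVersion: v1
kind: Pod
metadata:
  name: irsa-test                           # placeholder name
  namespace: default
spec:
  serviceAccountName: my-serviceaccount     # the annotated service account from the step above
  containers:
  - name: awscli
    image: amazon/aws-cli                   # assumed public image that includes the AWS CLI
    command: ["sleep", "3600"]              # keep the pod running so you can exec into it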
In order to grant certain service accounts access to the desired AWS resources, edit the trust relationship for the OIDC provider’s IAM Role (OIDC_IAM_ROLE) created in the first section, and add in the desired service accounts.
- Choose the role in the console to open it for editing.
- Choose the Trust relationships tab, and then choose Edit trust relationship.
- Find the line that looks similar to the following:
"$ISSUER_HOSTPATH:aud": "sts.amazonaws.com"
- Change the line to look like the following line. Replace aud with sub, and replace KUBERNETES_SERVICE_ACCOUNT_NAMESPACE with the Kubernetes namespace that the account exists in and KUBERNETES_SERVICE_ACCOUNT_NAME with the name of your Kubernetes service account.
"$ISSUER_HOSTPATH:sub": "system:serviceaccount:KUBERNETES_SERVICE_ACCOUNT_NAMESPACE:KUBERNETES_SERVICE_ACCOUNT_NAME"
- Refer to this doc for different ways of configuring one or multiple service accounts through the condition operators in the trust relationship.
- Choose Update Trust Policy to finish.
5.1.6.4 - etcd configuration
EKS Anywhere cluster yaml etcd specification reference
Unstacked etcd topology (recommended)
There are two types of etcd topologies for configuring a Kubernetes cluster:
- Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
- Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components
The unstacked etcd topology is recommended for an HA cluster for the following reasons:
- The external etcd topology decouples the control plane components from the etcd members. So if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
- Etcd is resource intensive, so it is safer to have dedicated nodes for etcd, since it could use more disk space or higher bandwidth. Having a separate etcd cluster for these reasons could ensure a more resilient HA setup.
EKS Anywhere supports both topologies.
In order to configure a cluster with the unstacked/external etcd topology, you need to configure your cluster by updating the configuration file before creating the cluster.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium: {}
controlPlaneConfiguration:
count: 1
endpoint:
host: ""
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name-cp
datacenterRef:
kind: VSphereDatacenterConfig
name: my-cluster-name
# etcd configuration
externalEtcdConfiguration:
count: 3
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name-etcd
kubernetesVersion: "1.19"
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name
name: md-0
externalEtcdConfiguration (under Cluster)
This field accepts any configuration parameters for running external etcd.
count (required)
This determines the number of etcd members in the cluster.
The recommended number is 3.
machineGroupRef (required)
Refers to the Kubernetes object with provider-specific configuration for your etcd machines (for example, the VSphereMachineConfig named my-cluster-name-etcd in the template above).
5.1.6.5 - AWS IAM Authenticator configuration
EKS Anywhere cluster yaml specification AWS IAM Authenticator reference
AWS IAM Authenticator support (optional)
EKS Anywhere can create clusters that support AWS IAM Authenticator-based API server authentication.
In order to add IAM Authenticator support, you need to configure your cluster by updating the configuration file before creating the cluster.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
# IAM Authenticator support
identityProviderRefs:
- kind: AWSIamConfig
name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
name: aws-iam-auth-config
spec:
awsRegion: ""
backendMode:
- ""
mapRoles:
- roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
username: myKubernetesUsername
groups:
- ""
mapUsers:
- userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
username: myKubernetesUsername
groups:
- ""
partition: ""
identityProviderRefs (Under Cluster)
List of identity providers you want configured for the Cluster. This would include a reference to the AWSIamConfig object with the configuration below.
awsRegion (required)
- Description: awsRegion can be any region in the aws partition that the IAM roles exist in.
- Type: string
backendMode (required)
- Description: backendMode configures the IAM Authenticator server’s backend mode (i.e. where to source mappings from). We support the EKSConfigMap and CRD modes of AWS IAM Authenticator; for more details refer to backendMode
- Type: string
mapRoles, mapUsers (recommended for EKSConfigMap backend)
- Description: Maps IAM roles or users to Kubernetes usernames and groups when the EKSConfigMap backend mode is used, as shown in the template above.
partition
- Description: This field is used to set the AWS partition that the IAM roles are present in. The default value is aws.
- Type: string
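For context, cluster users typically authenticate through an exec credential plugin in their kubeconfig. The following is a hedged sketch of such a user entry; the cluster ID argument is an assumption and depends on how your authenticator and cluster kubeconfig are configured:
users:
- name: my-iam-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-cluster-name        # assumed cluster ID; use the value your cluster was configured with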
5.1.6.6 - OIDC configuration
EKS Anywhere cluster yaml specification OIDC reference
OIDC support (optional)
EKS Anywhere can create clusters that support API server OIDC authentication.
In order to add OIDC support, you need to configure your cluster by updating the configuration file to include the details below. The OIDC configuration can be added at cluster creation time, or introduced via a cluster upgrade in VMware and CloudStack.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
# OIDC support
identityProviderRefs:
- kind: OIDCConfig
name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
name: my-cluster-name
spec:
clientId: ""
groupsClaim: ""
groupsPrefix: ""
issuerUrl: "https://x"
requiredClaims:
- claim: ""
value: ""
usernameClaim: ""
usernamePrefix: ""
identityProviderRefs (Under Cluster)
List of identity providers you want configured for the Cluster. This would include a reference to the OIDCConfig object with the configuration below.
clientId (required)
- Description: ClientId defines the client ID for the OpenID Connect client
- Type: string
groupsClaim (optional)
- Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
- Type: string
groupsPrefix (optional)
- Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
- Type: string
issuerUrl (required)
- Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
- Type: string
requiredClaims (optional)
List of RequiredClaim objects listed below.
Only one is supported at this time.
requiredClaims[0] (optional)
- Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
- Type: object
usernameClaim (optional)
- Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
- Type: string
usernamePrefix (optional)
- Description: UsernamePrefix defines a string to be prefixed to all usernames.
If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes.
To skip any prefixing, provide the value ‘-’.
- Type: string
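For illustration only, here is a sketch of an OIDCConfig with the fields above filled in for a hypothetical issuer; all values below are placeholders, not defaults:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
  name: my-cluster-name
spec:
  clientId: "kubernetes"                                      # placeholder client ID
  issuerUrl: "https://oidc.example.com/realms/eks-anywhere"   # placeholder HTTPS issuer URL
  usernameClaim: "email"
  usernamePrefix: "oidc:"
  groupsClaim: "groups"
  groupsPrefix: "oidc:"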
5.1.6.7 - GitOpsConfig configuration
Configuration reference for GitOps cluster management.
GitOps Support (Optional)
EKS Anywhere can create clusters that support GitOps configuration management with Flux.
In order to add GitOps support, you need to configure your cluster by specifying a configuration file with the gitOpsRef field when creating or upgrading the cluster.
We currently support two types of configurations: FluxConfig and GitOpsConfig.
Flux Configuration
The flux configuration spec has three optional fields, regardless of the chosen git provider.
Flux Configuration Spec Details
systemNamespace (optional)
- Description: Namespace in which to install the gitops components in your cluster. Defaults to flux-system
- Type: string
clusterConfigPath (optional)
- Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files. Defaults to the cluster name
- Type: string
branch (optional)
- Description: The branch to use when committing the configuration. Defaults to main
- Type: string
EKS Anywhere currently supports two git providers for FluxConfig: Github and Git.
Github provider
Please note that for the Flux config to work successfully with the Github provider, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
namespace: default
spec:
...
#GitOps Support
gitOpsRef:
name: my-github-flux-provider
kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
name: my-github-flux-provider
namespace: default
spec:
systemNamespace: "my-alternative-flux-system-namespace"
clusterConfigPath: "path-to-my-clusters-config"
branch: "main"
github:
personal: true
repository: myClusterGitopsRepo
owner: myGithubUsername
---
github Configuration Spec Details
repository (required)
- Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
- Type: string
owner (required)
- Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
- Type: string
personal (optional)
- Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com
- Default: true
- Type: boolean
Git provider
Before you create a cluster using the Git provider, you will need to set and export the EKSA_GIT_KNOWN_HOSTS and EKSA_GIT_PRIVATE_KEY environment variables.
EKSA_GIT_KNOWN_HOSTS
EKS Anywhere uses the provided known hosts file to verify the identity of the git provider when connecting to it with SSH.
The EKSA_GIT_KNOWN_HOSTS environment variable should be a path to a known hosts file containing entries for the git server to which you’ll be connecting.
For example, if you wanted to provide a known hosts file which allows you to connect to and verify the identity of github.com using a private key based on the key algorithm ecdsa, you can use the OpenSSH utility ssh-keyscan to obtain the known host entry used by github.com for the ecdsa key type.
EKS Anywhere supports the ecdsa, rsa, and ed25519 key types, which can be specified via the sshKeyAlgorithm field of the git provider config.
ssh-keyscan -t ecdsa github.com >> my_eksa_known_hosts
This will produce a file which contains known-hosts entries for the ecdsa key type supported by github.com, mapping the host to the key type and public key.
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
EKS Anywhere will use the content of the file at the path EKSA_GIT_KNOWN_HOSTS to verify the identity of the remote git server, and the provided known hosts file must contain an entry for the remote host and key type.
EKSA_GIT_PRIVATE_KEY
The EKSA_GIT_PRIVATE_KEY environment variable should be a path to the private key file associated with a valid SSH public key registered with your Git provider.
This key must have permission to both read from and write to your repository.
The key can use the key algorithms rsa, ecdsa, and ed25519.
This key file must have restricted file permissions, allowing only the owner to read and write, such as octal permissions 600.
If your private key file is passphrase protected, you must also set EKSA_GIT_SSH_KEY_PASSPHRASE with that value.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
namespace: default
spec:
...
#GitOps Support
gitOpsRef:
name: my-git-flux-provider
kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
name: my-git-flux-provider
namespace: default
spec:
systemNamespace: "my-alternative-flux-system-namespace"
clusterConfigPath: "path-to-my-clusters-config"
branch: "main"
git:
repositoryUrl: ssh://git@github.com/myAccount/myClusterGitopsRepo.git
sshKeyAlgorithm: ecdsa
---
git Configuration Spec Details
repositoryUrl (required)
NOTE: The repositoryUrl value for private SSH repositories is of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git. This may differ from the default SSH URL given by your provider. For example, the github.com user interface provides an SSH URL containing a : before the repository owner, rather than a /. Make sure to replace this : with a /, if present.
- Description: The URL of an existing repository where EKS Anywhere will store your cluster configuration and sync it to the cluster. For private repositories, the SSH URL will be of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git
- Type: string
sshKeyAlgorithm (optional)
- Description: The SSH key algorithm of the private key specified via EKSA_GIT_PRIVATE_KEY. Defaults to ecdsa
- Type: string
Supported SSH key algorithm types are ecdsa, rsa, and ed25519.
Be sure that this SSH key algorithm matches the private key file provided by EKSA_GIT_PRIVATE_KEY and that the known hosts entry for the key type is present in EKSA_GIT_KNOWN_HOSTS.
GitOps Configuration
Warning
GitOps Config will be deprecated in v0.11.0 in favor of the Flux Config described above.
Please note that for the GitOps config to work successfully, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT. This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
namespace: default
spec:
...
#GitOps Support
gitOpsRef:
name: my-gitops
kind: GitOpsConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
name: my-gitops
namespace: default
spec:
flux:
github:
personal: true
repository: myClusterGitopsRepo
owner: myGithubUsername
fluxSystemNamespace: ""
clusterConfigPath: ""
GitOps Configuration Spec Details
flux (required)
- Description: Our supported gitops provider is flux. This is the only supported value.
- Type: object
Flux Configuration Spec Details
github (required)
- Description: github is the only currently supported git provider. This defines your github configuration to be used by EKS Anywhere and flux.
- Type: object
github Configuration Spec Details
repository (required)
- Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
- Type: string
owner (required)
- Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
- Type: string
personal (optional)
- Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com
- Default: true
- Type: boolean
clusterConfigPath (optional)
- Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files.
- Default: clusters/$MANAGEMENT_CLUSTER_NAME
- Type: string
fluxSystemNamespace (optional)
- Description: Namespace in which to install the gitops components in your cluster.
- Default: flux-system
- Type: string
branch (optional)
- Description: The branch to use when committing the configuration.
- Default: main
- Type: string
5.1.6.8 - Proxy configuration
EKS Anywhere cluster yaml specification proxy configuration reference
Proxy support (optional)
You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
proxyConfiguration:
httpProxy: http-proxy-ip:port
httpsProxy: https-proxy-ip:port
noProxy:
- list of no proxy endpoints
Configuring Docker daemon
EKS Anywhere will proxy for you given the above configuration file.
However, to successfully use EKS Anywhere you will also need to ensure your Docker daemon is configured to use the proxy.
This generally means updating your daemon to launch with the HTTPS_PROXY, HTTP_PROXY, and NO_PROXY environment variables.
For an example of how to do this with systemd, please see Docker’s documentation here
.
Configuring EKS Anywhere proxy without config file
For commands using a cluster config file, EKS Anywhere will derive its proxy config from the cluster configuration file.
However, for commands that do not utilize a cluster config file, you can set the following environment variables:
export HTTPS_PROXY=https-proxy-ip:port
export HTTP_PROXY=http-proxy-ip:port
export NO_PROXY=no-proxy-domain.com,another-domain.com,localhost
Proxy Configuration Spec Details
proxyConfiguration (required)
- Description: top level key; required to use proxy.
- Type: object
httpProxy (required)
- Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
- Type: string
- Example:
httpProxy: 192.168.0.1:3218
httpsProxy (required)
- Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
- Type: string
- Example:
httpsProxy: 192.168.0.1:3218
noProxy (optional)
- Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
- Type: list of strings
- Example
noProxy:
- localhost
- 192.168.0.1
- 192.168.0.0/16
- .example.com
5.1.6.9 - Registry Mirror configuration
EKS Anywhere cluster yaml specification for registry mirror configuration
Registry Mirror Support (optional)
You can configure EKS Anywhere to use a private registry as a mirror for pulling the required images.
The following cluster spec shows an example of how to configure registry mirror:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
registryMirrorConfiguration:
endpoint: <private registry IP or hostname>
port: <private registry port>
caCertContent: |
-----BEGIN CERTIFICATE-----
MIIF1DCCA...
...
es6RXmsCj...
-----END CERTIFICATE-----
Registry Mirror Configuration Spec Details
registryMirrorConfiguration (required)
- Description: top level key; required to use a private registry.
- Type: object
endpoint (required)
- Description: IP address or hostname of the private registry for pulling images
- Type: string
- Example:
endpoint: 192.168.0.1
port (optional)
- Description: Port for the private registry. This is an optional field. If a port is not specified, the default HTTPS port 443 is used
- Type: string
- Example:
port: 443
caCertContent (optional)
- Description: Certificate Authority (CA) Certificate for the private registry. When using self-signed certificates it is necessary to pass this parameter in the cluster spec. This must be the individual public CA cert used to sign the registry certificate. This will be added to the cluster nodes so that they are able to pull images from the private registry.
It is also possible to configure caCertContent by exporting an environment variable:
export EKSA_REGISTRY_MIRROR_CA="/path/to/certificate-file"
- Type: string
- Example:
caCertContent: |
-----BEGIN CERTIFICATE-----
MIIF1DCCA...
...
es6RXmsCj...
-----END CERTIFICATE-----
authenticate (optional)
- Description: Optional field to authenticate with a private registry. When using private registries that require authentication, it is necessary to set this parameter to true in the cluster spec.
- Type: boolean
- Example:
authenticate: true
To use an authenticated private registry, please also set the following environment variables:
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
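Putting the fields above together, a sketch of a mirror configuration that uses a custom port, a self-signed CA, and authentication might look like the following; the endpoint, port, and certificate content are placeholders:
registryMirrorConfiguration:
  endpoint: private-registry.local       # placeholder registry hostname
  port: "5000"                           # placeholder port; defaults to 443 if omitted
  authenticate: true                     # requires REGISTRY_USERNAME/REGISTRY_PASSWORD to be exported
  caCertContent: |
    -----BEGIN CERTIFICATE-----
    MIIF1DCCA...
    -----END CERTIFICATE-----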
Import images into a private registry
You can use the download images and import images commands to pull images from public.ecr.aws and push them to your private registry.
The copy packages command must be used if you want to copy EKS Anywhere Curated Packages to your registry mirror.
The download images command also pulls the cilium chart from public.ecr.aws and pushes it to the registry mirror. It requires the registry credentials for performing a login. Set the following environment variables for the login:
export REGISTRY_ENDPOINT=<registry_endpoint>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
Download the EKS Anywhere artifacts to get the EKS Anywhere bundle:
eksctl anywhere download artifacts
tar -xzf eks-anywhere-downloads.tar.gz
Download and import EKS Anywhere images:
eksctl anywhere download images -o eks-anywhere-images.tar
docker login https://${REGISTRY_ENDPOINT}
...
eksctl anywhere import images -i eks-anywhere-images.tar --bundles eks-anywhere-downloads/bundle-release.yaml --registry ${REGISTRY_ENDPOINT}
Use the EKS Anywhere bundle to copy packages:
eksctl anywhere copy packages --bundle ./eks-anywhere-downloads/bundle-release.yaml --dst-cert rootCA.pem ${REGISTRY_ENDPOINT}
Docker configurations
It is necessary to add the private registry’s CA certificate to the list of CA certificates on the admin machine if your registry uses self-signed certificates.
For Linux, you can place your certificate here: /etc/docker/certs.d/<private-registry-endpoint>/ca.crt
For Mac, you can follow this guide to add the certificate to your keychain: https://docs.docker.com/desktop/mac/#add-tls-certificates
Note
You may need to restart Docker after adding the certificates.
Registry configurations
Depending on what registry you decide to use, you will need to create the following projects:
bottlerocket
eks-anywhere
eks-distro
isovalent
cilium-chart
For example, if a registry is available at private-registry.local, then the following projects will have to be created:
https://private-registry.local/bottlerocket
https://private-registry.local/eks-anywhere
https://private-registry.local/eks-distro
https://private-registry.local/isovalent
https://private-registry.local/cilium-chart
5.2 - Bare Metal
Preparing a Bare Metal provider for EKS Anywhere
5.2.1 - Requirements for EKS Anywhere on Bare Metal
Bare Metal provider requirements for EKS Anywhere
To run EKS Anywhere on Bare Metal, you need to meet the hardware and networking requirements described below.
Administrative machine
Set up an Administrative machine as described in Install EKS Anywhere
.
Compute server requirements
The minimum number of physical machines needed to run EKS Anywhere on bare metal is 1. To configure EKS Anywhere to run on a single server, set controlPlaneConfiguration.count to 1 and omit workerNodeGroupConfigurations from your cluster configuration, as sketched below.
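As a rough sketch, the relevant portion of a single-server cluster spec would look like the following; the machine config name and endpoint are placeholders, and other required fields (clusterNetwork, datacenterRef, kubernetesVersion, and so on) are omitted:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: ""                          # placeholder control plane endpoint IP
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name-cp          # placeholder machine config name
  # workerNodeGroupConfigurations is omitted for a single-server cluster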
The recommended number of physical machines for production is at least:
- Control plane physical machines: 3
- Worker physical machines: 2
The compute hardware you need for your Bare Metal cluster must meet the following capacity requirements:
- vCPU: 2
- Memory: 8GB RAM
- Storage: 25GB
Upgrade requirements
If you are running a standalone cluster with only one control plane node, you will need at least one additional, temporary machine for each control plane node grouping. For clusters with multiple control plane nodes, you can perform a rolling upgrade with or without an extra temporary machine. For worker node upgrades, you can perform a rolling upgrade with or without an extra temporary machine.
When upgrading without an extra machine, keep in mind that your control plane and your workload must be able to tolerate node unavailability. When upgrading with extra machine(s), you will need additional temporary machine(s) for each control plane and worker node grouping. Refer to Upgrade Bare Metal Cluster and Advanced configuration for rolling upgrade.
NOTE: For single-node clusters that require an additional temporary machine for upgrading, if you don’t want to set up the extra hardware, you may recreate the cluster for upgrading and handle data recovery manually.
Network requirements
Each machine should include the following features:
- Network Interface Cards: At least one NIC is required. It must be capable of network booting.
- BMC integration (recommended): An IPMI or Redfish implementation (such as Dell iDRAC, a Redfish-compatible or legacy BMC, or HP iLO) on the computer’s motherboard or on a separate expansion card. This feature is used to allow remote management of the machine, such as turning the machine on and off.
NOTE: BMC integration is not required for an EKS Anywhere cluster. However, without BMC integration, upgrades are not supported and you will have to physically turn machines off and on when appropriate.
Here are other network requirements:
- All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).
- You must be able to run DHCP on the control plane/worker machine network.
NOTE: If you have another DHCP service running on the network, you need to prevent it from interfering with the EKS Anywhere DHCP service. You can do that by configuring the other DHCP service to explicitly block all MAC addresses and exclude all IP addresses that you plan to use with your EKS Anywhere clusters.
- The administrative machine and the target workload environment will need network access to:
- public.ecr.aws
- anywhere-assets.eks.amazonaws.com: To download the EKS Anywhere binaries, manifests and OVAs
- distro.eks.amazonaws.com: To download EKS Distro binaries and manifests
- d2glxqk2uabbnd.cloudfront.net: For EKS Anywhere and EKS Distro ECR container images
- Two IP addresses routable from the cluster, but excluded from the DHCP offering. One IP address is to be used as the Control Plane Endpoint IP. The other is for the Tinkerbell IP address on the target cluster. Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses.
- Pick IP addresses reachable from the cluster subnet that are excluded from the DHCP range or
- Create an IP reservation for these addresses on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent MAC address.
NOTE: When you set up your cluster configuration YAML file, the endpoint and Tinkerbell addresses are set in the ControlPlaneConfiguration.endpoint.host
and tinkerbellIP
fields, respectively.
- Ports must be open to the Admin machine and cluster machines as described in Ports and protocols.
Validated hardware
Through extensive testing in a variety of on premises customer environments during our beta phase, we expect Amazon EKS Anywhere on bare metal to run on most generic hardware that meets the above requirements.
In addition, we have collaborated with our hardware original equipment manufacturer (OEM) partners to provide you a list of validated hardware:
| Bare metal servers | BMC | NIC | OS |
| Dell PowerEdge R740 | iDRAC9 | Mellanox ConnectX-4 LX 25GbE | Validated with Ubuntu v20.04.1 |
| Dell PowerEdge R7525 (NVIDIA Tesla™ T4 GPU’s) | iDRAC9 | Mellanox ConnectX-4 LX 25GbE & Intel Ethernet 10G 4P X710 OCP | Validated with Ubuntu v20.04.1 |
| Dell PowerFlex (R640) | iDRAC9 | Mellanox ConnectX-4 LX 25GbE | Validated with Ubuntu v20.04.1 |
| SuperServer SYS-510P-M | IPMI2.0/Redfish API | Intel® Ethernet Controller i350 2x 1GbE | Validated with Ubuntu v20.04.1 and Bottlerocket v1.8.0 |
| Dell PowerEdge R240 | iDRAC9 | Broadcom 57414 Dual Port 10/25GbE | Validated with Ubuntu v20.04 and Bottlerocket v1.8.0 |
| HPE ProLiant DL20 | iLO5 | HPE 361i 1G | Validated with Ubuntu v20.04 and Bottlerocket v1.8.0 |
| HPE ProLiant DL160 Gen10 | iLO5 | HPE Eth 10/25Gb 2P 640SFP28 A | Validated with Ubuntu v20.04.1 |
| Dell PowerEdge R340 | iDRAC9 | Broadcom 57416 Dual Port 10GbE | Validated with Ubuntu v20.04.1 and Bottlerocket v1.8.0 |
| HPE ProLiant DL360 | iLO5 | HPE Ethernet 1Gb 4-port 331i | Validated with Ubuntu v20.04.1 |
| Lenovo ThinkSystem SR650 V2 | XClarity Controller Enterprise v7.92 | Intel I350 1GbE RJ45 4-port OCP; Marvell QL41232 10/25GbE SFP28 2-Port PCIe Ethernet Adapter | Validated with Ubuntu v20.04.1 |
5.2.2 - Preparing Bare Metal for EKS Anywhere
Set up a Bare Metal cluster to prepare it for EKS Anywhere
After gathering hardware described in Bare Metal Requirements
, you need to prepare the hardware and create a CSV file describing that hardware.
Prepare hardware
To prepare your computer hardware for EKS Anywhere, you need to connect your computer hardware and do some configuration.
Once the hardware is in place, you need to:
- Obtain IP and MAC addresses for your machines' NICs.
- Obtain IP addresses for your machines' BMC interfaces.
- Obtain the gateway address for your network to reach the Internet.
- Obtain the IP address for your DNS servers.
- Make sure the following settings are in place:
- UEFI is enabled on all target cluster machines, unless you are provisioning RHEL systems. Enable legacy BIOS on any RHEL machines.
- Netboot (PXE or HTTP) boot is enabled for the NIC on each machine for which you provided the MAC address. This is the interface on which the operating system will be provisioned.
- IPMI over LAN and/or Redfish is enabled on all BMC interfaces.
- Go to the BMC settings for each machine and set the IP address (bmc_ip), username (bmc_username), and password (bmc_password) to use later in the CSV file.
Prepare hardware inventory
Create a CSV file to provide information about all physical machines that you are ready to add to your target Bare Metal cluster.
This file will be used:
- When you generate the hardware file to be included in the cluster creation process described in the Create Bare Metal production cluster Getting Started guide.
- To provide information that is passed to each machine from the Tinkerbell DHCP server when the machine is initially network booted.
The following is an example of an EKS Anywhere Bare Metal hardware CSV file:
hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.44.1,root,PrZ8W93i,CC:48:3A:00:00:01,10.10.50.2,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-cp02,10.10.44.2,root,Me9xQf93,CC:48:3A:00:00:02,10.10.50.3,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-cp03,10.10.44.3,root,Z8x2M6hl,CC:48:3A:00:00:03,10.10.50.4,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-wk01,10.10.44.4,root,B398xRTp,CC:48:3A:00:00:04,10.10.50.5,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=worker,/dev/sda
eksa-wk02,10.10.44.5,root,w7EenR94,CC:48:3A:00:00:05,10.10.50.6,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=worker,/dev/sda
The CSV file is a comma-separated list of values in a plain text file, holding information about the physical machines in the datacenter that are intended to be a part of the cluster creation process.
Each line represents a physical machine (not a virtual machine).
The following sections describe each value.
hostname
The hostname assigned to the machine.
bmc_ip (optional)
The IP address assigned to the BMC interface on the machine.
bmc_username (optional)
The username assigned to the BMC interface on the machine.
bmc_password (optional)
The password associated with the bmc_username assigned to the BMC interface on the machine.
mac
The MAC address of the network interface card (NIC) that provides access to the host computer.
ip_address
The IP address providing access to the host computer.
netmask
The netmask associated with the ip_address value.
In the example above, a /23 subnet mask is used, allowing you to use up to 510 IP addresses in that range.
gateway
IP address of the interface that provides access (the gateway) to the Internet.
nameservers
The IP address(es) of the server(s) that you want to provide DNS service to the cluster. Multiple nameservers are separated with a pipe (|), as shown in the example above.
labels
The optional labels field can consist of a key/value pair to use in conjunction with the hardwareSelector field when you set up your Bare Metal configuration. The key/value pair is connected with an equal (=) sign.
For example, a TinkerbellMachineConfig with a hardwareSelector containing type: cp will match entries in the CSV containing type=cp in its label definition (see the sketch at the end of this section).
disk
The device name of the disk on which the operating system will be installed. For example, it could be /dev/sda for the first SCSI disk or /dev/nvme0n1 for the first NVMe storage device.
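To show how the labels column ties back to the cluster configuration, here is a sketch of a TinkerbellMachineConfig fragment whose hardwareSelector matches the type=cp entries from the example CSV; the name is a placeholder and other fields are omitted:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp        # placeholder machine config name
spec:
  hardwareSelector:
    type: cp                      # matches CSV rows labeled type=cp
  osFamily: ubuntu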
5.2.3 - Netbooting and Tinkerbell for Bare Metal
Overview of Tinkerbell and network booting for EKS Anywhere on Bare Metal
EKS Anywhere uses Tinkerbell to provision machines for a Bare Metal cluster.
Understanding what Tinkerbell is and how it works with EKS Anywhere can help you take advantage of advanced provisioning features or overcome provisioning problems you encounter.
As someone deploying an EKS Anywhere cluster on Bare Metal, you have several opportunities to interact with Tinkerbell:
- Create a hardware CSV file: You are required to create a hardware CSV file that contains an entry for every physical machine you want to add at cluster creation time.
- Create an EKS Anywhere cluster: By modifying the Bare Metal configuration file used to create a cluster, you can change some Tinkerbell settings or add actions to define how the operating system on each machine is configured.
- Monitor provisioning: You can follow along with the Tinkerbell Overview in this page to monitor the progress of your hardware provisioning, as Tinkerbell finds machines and attempts to network boot, configure, and restart them.
Using Tinkerbell on EKS Anywhere
The sections below step through how Tinkerbell is integrated with EKS Anywhere to deploy a Bare Metal cluster.
While based on features described in the Tinkerbell Documentation, EKS Anywhere has modified and added to Tinkerbell components such that the entire Tinkerbell stack is now Kubernetes-friendly and can run on a Kubernetes cluster.
The information that Tinkerbell uses to provision machines for the target EKS Anywhere cluster needs to be gathered in a CSV file with the following format:
hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.44.1,root,PrZ8W93i,CC:48:3A:00:00:01,10.10.50.2,255.255.254.0,10.10.50.1,8.8.8.8,type=cp,/dev/sda
...
Each physical, bare metal machine is represented by a comma-separated list of information on a single line.
It includes information needed to identify each machine (the NIC’s MAC address), network boot the machine, point to the disk to install on, and then configure and start the installed system.
See Preparing hardware inventory for details on the content and format of that file.
Modify the cluster specification file
Before you create a cluster using the Bare Metal configuration file, you can make Tinkerbell-related changes to that file.
In particular, TinkerbellDatacenterConfig fields, TinkerbellMachineConfig fields, and Tinkerbell Actions can be added or modified.
Tinkerbell actions vary based on the operating system you choose for your EKS Anywhere cluster.
Actions are stored internally and not shown in the generated cluster specification file, so you must add those sections yourself to change from the defaults (see the Ubuntu TinkerbellTemplateConfig example and the Bottlerocket TinkerbellTemplateConfig example for details).
In most cases, you don’t need to touch the default actions.
However, you might want to modify an action (for example, to change kexec to a reboot action if the hardware requires it) or add an action to further configure the installed system.
Examples in Advanced Bare Metal cluster configuration show a few actions you might want to add.
Once you have made all your modifications, you can go ahead and create the cluster.
The next section describes how Tinkerbell works during cluster creation to provision your Bare Metal machines and prepare them to join the EKS Anywhere cluster.
Overview of Tinkerbell in EKS Anywhere
When you run the command to create an EKS Anywhere Bare Metal cluster, a set of Tinkerbell components start up on the Admin machine.
One of these components runs in a container on Docker, while other components run as either controllers or services in pods on the Kubernetes kind cluster that is started up on the Admin machine.
Tinkerbell components include Boots, Hegel, Rufio, and Tink.
Tinkerbell Boots service
The Boots service runs in a single container to handle the DHCP service and network booting activities.
In particular, Boots hands out IP addresses, serves iPXE binaries via HTTP and TFTP, delivers an iPXE script to the provisioned machines, and runs a syslog server.
Boots is different from the other Tinkerbell services because the DHCP service it runs must listen directly to layer 2 traffic.
(The kind cluster running on the Admin machine doesn’t have the ability to have pods listening on layer 2 networks, which is why Boots is run directly on Docker instead, with host networking enabled.)
Because Boots is running as a container in Docker, you can see its output in the Boots container logs on the Admin machine (for example, by running docker logs against the Boots container, which is typically named boots).
From the logs output, you will see iPXE try to network boot each machine.
If the process doesn’t get all the information it wants from the DHCP server, it will time out.
You can see iPXE loading variables, loading a kernel and initramfs (via DHCP), then booting into that kernel and initramfs: in other words, you will see everything that happens with iPXE before it switches over to the kernel and initramfs.
The kernel, initramfs, and all images retrieved later are obtained remotely over HTTP and HTTPS.
Tinkerbell Hegel, Rufio, and Tink components
After Boots comes up on Docker, a small Kubernetes kind cluster starts up on the Admin machine.
Other Tinkerbell components run as pods on that kind cluster. Those components include:
- Hegel: Manages Tinkerbell’s metadata service.
The Hegel service gets its metadata from the hardware specification stored in Kubernetes in the form of custom resources.
The format that it serves is similar to the EC2 metadata format.
- Rufio: Handles talking to BMCs (which manages things like starting and stopping systems with IPMI or Redfish).
The Rufio Kubernetes controller sets things such as power state, persistent boot order.
BMC authentication is managed with Kubernetes secrets.
- Tink: The Tink service consists of three components: Tink server, Tink controller, and Tink worker.
The Tink controller manages hardware data, templates you want to execute, and the workflows that each target specific hardware you are provisioning.
The Tink worker is a small binary that runs inside of HookOS and talks to the Tink server.
The worker sends the Tink server its MAC address and asks the server for workflows to run.
The Tink worker will then go through each action, one-by-one, and try to execute it.
To see those services and controllers running on the kind bootstrap cluster, type:
kubectl get pods -n eksa-system
NAME READY STATUS RESTARTS AGE
hegel-sbchp 1/1 Running 0 3d
rufio-controller-manager-5dcc568c79-9kllz 1/1 Running 0 3d
tink-controller-manager-54dc786db6-tm2c5 1/1 Running 0 3d
tink-server-5c494445bc-986sl 1/1 Running 0 3d
Provisioning hardware with Tinkerbell
After you start up the cluster create process, the following is the general workflow that Tinkerbell performs to begin provisioning the bare metal machines and prepare them to become part of the EKS Anywhere target cluster.
You can set up kubectl on the Admin machine to access the bootstrap cluster and follow along:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
Power up the nodes
Tinkerbell starts by finding a node from the hardware list (based on MAC address) and contacting it to identify a baseboard management job (job.bmc) that runs a set of baseboard management tasks (task.bmc).
To see that information, list the jobs.bmc and tasks.bmc resources in the eksa-system namespace (for example, with kubectl get jobs.bmc -n eksa-system and kubectl get tasks.bmc -n eksa-system):
NAMESPACE NAME AGE
eksa-system mycluster-md-0-1656099863422-vxvh2-provision 12m
NAMESPACE NAME AGE
eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-0 55s
eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-1 51s
eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-2 47s
The following shows snippets from the tasks.bmc output that represent the three tasks: Power Off, enable network boot, and Power On.
kubectl describe tasks.bmc -n eksa-system eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-0
...
Task:
Power Action: Off
Status:
Completion Time: 2022-06-27T20:32:59Z
Conditions:
Status: True
Type: Completed
kubectl describe tasks.bmc -n eksa-system eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-1
...
Task:
One Time Boot Device Action:
Device:
pxe
Efi Boot: true
Status:
Completion Time: 2022-06-27T20:33:04Z
Conditions:
Status: True
Type: Completed
kubectl describe tasks.bmc -n eksa-system eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-2
Task:
Power Action: on
Status:
Completion Time: 2022-06-27T20:33:10Z
Conditions:
Status: True
Type: Completed
Rufio converts the baseboard management jobs into task objects, then goes ahead and executes each task. To see Rufio logs, type:
kubectl logs -n eksa-system rufio-controller-manager-5dcc568c79-9kllz | less
Network booting the nodes
Next, the Boots service netboots the machine and begins streaming the HookOS (vmlinuz and initramfs) to the machine.
HookOS runs in memory and provides the installation environment.
To watch the Boots log messages as each node powers up, follow the Boots container logs (for example, with docker logs -f against the Boots container).
You can search the output for vmlinuz and initramfs to watch as the HookOS is downloaded and booted from memory on each machine.
Running workflows
Once the HookOS is up, Tinkerbell begins running the tasks and actions contained in the workflows.
This is coordinated between the Tink worker, running in memory within the HookOS on the machine, and the Tink server on the kind cluster.
To see the workflows being run, type the following:
kubectl get workflows.tinkerbell.org -n eksa-system
NAME TEMPLATE STATE
mycluster-md-0-1656099863422-vxh2 mycluster-md-0-1656099863422-vxh2 STATE_RUNNING
This shows the workflow for the first machine that is being provisioned.
Add -o yaml to see details of that workflow template:
kubectl get workflows.tinkerbell.org -n eksa-system -o yaml
...
status:
state: STATE_RUNNING
tasks:
- actions
- environment:
COMPRESSED: "true"
DEST_DISK: /dev/sda
IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
name: stream-image
seconds: 35
startedAt: "2022-06-27T20:37:39Z"
status: STATE_SUCCESS
...
You can see that the first action in the workflow is to stream (stream-image) the operating system to the destination disk (DEST_DISK) on the machine.
In this example, the Bottlerocket operating system that will be copied to disk (/dev/sda) is being served from the location specified by IMG_URL.
The action was successful (STATE_SUCCESS) and it took 35 seconds.
Each action and its status is shown in this output for the whole workflow.
To see details of the default actions for each supported operating system, see the Ubuntu TinkerbellTemplateConfig example and the Bottlerocket TinkerbellTemplateConfig example.
In general, the actions include:
- Streaming the operating system image to disk on each machine.
- Configuring the network interfaces on each machine.
- Setting up the cloud-init or similar service to add users and otherwise configure the system.
- Identifying the data source to add to the system.
- Setting the kernel to pivot to the installed system (using kexec) or having the system reboot to bring up the installed system from disk.
If all goes well, you will see all actions set to STATE_SUCCESS, except for the kexec-image action. That should show as STATE_RUNNING for as long as the machine is running.
You can review the CAPT logs to see provisioning activity.
For example, at the start of a new provisioning event, you would see something like the following:
kubectl logs -n capt-system capt-controller-manager-9f8b95b-frbq | less
..."Created BMCJob to get hardware ready for provisioning"...
You can follow this output to see the machine as it goes through the provisioning process.
After the node is initialized, completes all the Tinkerbell actions, and is booted into the installed operating system (Ubuntu or Bottlerocket), the new system starts cloud-init to do further configuration.
At this point, the system will reach out to the Tinkerbell Hegel service to get its metadata.
If something goes wrong, viewing Hegel files can help you understand why a stuck system that has booted into Ubuntu or Bottlerocket has not joined the cluster yet.
To see the Hegel logs, get the internal IP address for one of the new nodes. Then check for the names of the Hegel pods and display the logs of one of them, searching for the IP address of the node:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP ...
eksa-da04 Ready control-plane,master 9m5s v1.22.10-eks-7dc61e8 10.80.30.23
kubectl get pods -n eksa-system | grep hegel
hegel-n7ngs
kubectl logs -n eksa-system hegel-n7ngs
..."Retrieved IP peer IP..."userIP":"10.80.30.23...
If the log shows you are getting requests from the node, the problem is not a cloud-init issue.
After the first machine successfully completes the workflow, each other machine repeats the same process until the initial set of machines is all up and running.
Tinkerbell moves to target cluster
Once the initial set of machines is up and the EKS Anywhere cluster is running, all the Tinkerbell services and components (including Boots) are moved to the new target cluster and run as pods on that cluster.
Those services are then deleted from the kind cluster on the Admin machine.
Reviewing the status
At this point, you can change your kubectl credentials to point at the new target cluster to get information about Tinkerbell services on the new cluster. For example:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
First check that the Tinkerbell pods are all running by listing pods from the eksa-system namespace:
kubectl get pods -n eksa-system
NAME READY STATUS RESTARTS AGE
boots-5dc66b5d4-klhmj 1/1 Running 0 3d
hegel-sbchp 1/1 Running 0 3d
rufio-controller-manager-5dcc568c79-9kllz 1/1 Running 0 3d
tink-controller-manager-54dc786db6-tm2c5 1/1 Running 0 3d
tink-server-5c494445bc-986sl 1/1 Running 0 3d
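As an optional quick check (not part of the standard procedure), you can list any pods in the namespace that are not in the Running phase; empty output means everything is up:
kubectl get pods -n eksa-system --field-selector=status.phase!=Running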
Next, check the list of Tinkerbell machines.
If all of the machines were provisioned successfully, you should see true under the READY column for each one.
kubectl get tinkerbellmachine -A
NAMESPACE NAME CLUSTER STATE READY INSTANCEID MACHINE
eksa-system mycluster-control-plane-template-1656099863422-pqq2q mycluster true tinkerbell://eksa-system/eksa-da04 mycluster-72p72
You can also check the machines themselves.
Watch the PHASE change from Provisioning to Provisioned to Running.
The Running phase indicates that the machine is now running as a node on the new cluster:
kubectl get machines -n eksa-system
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
mycluster-72p72 mycluster eksa-da04 tinkerbell://eksa-system/eksa-da04 Running 7m25s v1.22.10-eks-1-22-8
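To watch the PHASE transitions as they happen instead of re-running the command, the standard watch flag works here as well:
kubectl get machines -n eksa-system -w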
Once you have confirmed that all your machines are successfully running as nodes on the target cluster, there is not much for Tinkerbell to do.
It stays around to continue running the DHCP service and to be available to add more machines to the cluster.
5.2.4 - Customize HookOS for EKS Anywhere on Bare Metal
Customizing HookOS for EKS Anywhere on Bare Metal
To network boot the bare metal machines used in EKS Anywhere clusters, Tinkerbell acquires a kernel and initial ramdisk that together are referred to as HookOS.
A default HookOS is provided when you create an EKS Anywhere cluster.
However, there may be cases where you want to override the default HookOS, such as to add drivers required to boot your particular type of hardware.
The following procedure describes how to build the Tinkerbell stack's Hook/LinuxKit OS locally.
For more information on Tinkerbell's Hook Installation Environment, see the Tinkerbell Hook repo.
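The Hook build relies on docker and docker buildx (as the push recipe later in the Makefile shows), so it can be worth confirming buildx is available before you start; this check is a suggestion and not part of the documented procedure:
docker buildx version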
- Clone the hook repo or your fork of that repo:
git clone https://github.com/tinkerbell/hook.git
cd hook/
- Pull down the commit that EKS Anywhere is tracking for Hook:
git checkout -b <new-branch> 029ef8f0711579717bfd14ac5eb63cdc3e658b1d
NOTE: This commit number can be obtained from the EKS-A build tooling repo.
- Make the changes shown in the following diff to the Makefile in the root of the repo, using your favorite editor.
diff --git a/Makefile b/Makefile
index e7fd844..8e87c78 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
### !!NOTE!!
# If this is changed then a fresh output dir is required (`git clean -fxd` or just `rm -rf out`)
# Handling this better shows some of make's suckiness compared to newer build tools (redo, tup ...) where the command lines to tools invoked isn't tracked by make
-ORG := quay.io/tinkerbell
+ORG := localhost:5000/tinkerbell
# makes sure there's no trailing / so we can just add them in the recipes which looks nicer
ORG := $(shell echo "${ORG}" | sed 's|/*$$||')
The change above sets the ORG variable to use a local registry (localhost:5000).
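With this change, the build will push images to localhost:5000. If you do not already have a registry listening on that port, one option (not covered by this procedure) is to run the standard registry image locally with Docker:
docker run -d -p 5000:5000 --restart=always --name registry registry:2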
- Make the changes shown in the following diff to the rules.mk file in the root of the repo, using your favorite editor.
diff --git a/rules.mk b/rules.mk
index b2c5133..64e32b1 100644
--- a/rules.mk
+++ b/rules.mk
@@ -22,7 +22,7 @@ ifeq ($(ARCH),aarch64)
ARCH = arm64
endif
-arches := amd64 arm64
+arches := amd64
modes := rel dbg
hook-bootkit-deps := $(wildcard hook-bootkit/*)
@@ -87,13 +87,12 @@ push-hook-bootkit push-hook-docker:
docker buildx build --platform $$platforms --push -t $(ORG)/$(container):$T $(container)
.PHONY: dist
-dist: out/$T/rel/amd64/hook.tar out/$T/rel/arm64/hook.tar ## Build tarballs for distribution
+dist: out/$T/rel/amd64/hook.tar ## Build tarballs for distribution
dbg-dist: out/$T/dbg/$(ARCH)/hook.tar ## Build debug enabled tarball
dist dbg-dist:
for f in $^; do
case