
Amazon EKS Anywhere

EKS Anywhere provides a means of managing Kubernetes clusters using the same operational excellence and practices that Amazon Web Services uses for its Amazon Elastic Kubernetes Service (Amazon EKS). Based on EKS Distro, EKS Anywhere adds methods for deploying, using, and managing Kubernetes clusters that run in your own data centers. Its goal is to include full lifecycle management of multiple Kubernetes clusters that are capable of operating completely independently of any AWS services.

The tenets of the EKS Anywhere project are:

  • Simple: Make using a Kubernetes distribution simple and boring (reliable and secure).
  • Opinionated Modularity: Provide opinionated defaults about the best components to include with Kubernetes, but give customers the ability to swap them out.
  • Open: Provide open source tooling backed, validated, and maintained by Amazon.
  • Ubiquitous: Enable customers and partners to integrate a Kubernetes distribution with the most common tooling.
  • Stand Alone: Provided for use anywhere without AWS dependencies.
  • Better with AWS: Enable AWS customers to easily adopt additional AWS services.

1 - Overview

Provides an overview of EKS Anywhere

EKS Anywhere uses the eksctl executable to create a Kubernetes cluster in your environment. Currently it allows you to create and delete clusters in a vSphere environment. You can run cluster create and delete commands from an Ubuntu or Mac administrative machine.

To create a cluster, you need to create a specification file that includes all of your vSphere details and information about your EKS cluster. Running the eksctl anywhere create cluster command from your admin machine creates the workload cluster in vSphere. It does this by first creating a temporary bootstrap cluster to direct the workload cluster creation. Once the workload cluster is created, the cluster management resources are moved to your workload cluster and the local bootstrap cluster is deleted.

Once your workload cluster is created, a KUBECONFIG file is stored on your admin machine with RBAC admin permissions for the workload cluster. You’ll be able to use that file with kubectl to set up and deploy workloads. For a detailed description, see Cluster creation workflow. Here’s a diagram that explains the process visually.

EKS Anywhere create cluster overview

The delete process is similar to the create process.

EKS Anywhere delete cluster overview


2 - Getting started

The Getting started section includes information on setting up your own EKS Anywhere local or production environment.

EKS Anywhere can be deployed as a simple, unsupported local environment or as a production-quality environment that can become a supported on-premises Kubernetes platform. This section lists the different ways to set up and run EKS Anywhere. When you install EKS Anywhere, choose an installation type based on: ease of maintenance, security, control, available resources, and expertise required to operate and manage a cluster.

Install EKS Anywhere

To create an EKS Anywhere cluster, you’ll need to download the command line tool that is used to create and manage a cluster. You can install it using the installation guide.

Local environment

If you just want to try out EKS Anywhere, there is a single-system method for installing and running EKS Anywhere using Docker. See EKS Anywhere local environment.

Production environment

When evaluating a solution for a production environment, consider deploying EKS Anywhere on vSphere.

2.1 - Install EKS Anywhere

EKS Anywhere will create and manage Kubernetes clusters on multiple providers. Currently we support creating development clusters locally with Docker and production clusters using VMware vSphere. Other deployment targets will be added in the future, including bare metal support in 2022.

Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries. From there, you create the cluster for your chosen provider. See Create cluster workflow for an overview of the cluster creation process.

To create an EKS Anywhere cluster, you will need eksctl and the eksctl-anywhere plugin. These let you create clusters on multiple providers for local development or production workloads.

Administrative machine prerequisites

  • Docker 20.x.x
  • Mac OS (10.15) / Ubuntu (20.04.2 LTS)
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space

NOTE: If you are using Ubuntu, use the Docker CE installation instructions to install Docker, not the Snap installation.
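
A quick way to check these prerequisites on an Ubuntu Administrative machine is with standard system tools (shown as a convenience only; adjust the commands for macOS):

docker version --format '{{.Server.Version}}'   # expect a 20.x.x release
nproc                                           # expect at least 4 CPU cores
free -h                                         # expect at least 16GB of memory
df -h .                                         # expect at least 30GB of free disk space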

Install EKS Anywhere CLI tools

Via Homebrew (macOS and Linux)

You can install eksctl and eksctl-anywhere with Homebrew. This package will also install kubectl and the aws-iam-authenticator, which are helpful for testing EKS clusters.

brew install aws/tap/eks-anywhere

Manually (macOS and Linux)

Install the latest release of eksctl. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.

curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/

Install the eksctl-anywhere plugin.

export EKSA_RELEASE="0.6.0" OS="$(uname -s | tr A-Z a-z)" RELEASE_NUMBER=2
curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/

Upgrade eksctl-anywhere

If you installed eksctl-anywhere via Homebrew, you can upgrade the binary with

brew update
brew upgrade eks-anywhere

If you installed eksctl-anywhere manually, you should follow the installation steps to download the latest release.

You can verify your installed version with

eksctl anywhere version

Deploy a cluster

Once you have the tools installed, you can deploy a local cluster or a production cluster as described in the next sections.

2.2 - Create local cluster

EKS Anywhere docker provider deployments

EKS Anywhere supports a Docker provider for development and testing use cases only. This allows you to try EKS Anywhere on your local system before deploying to a supported provider.

To install the EKS Anywhere binaries and see system requirements, please follow the installation guide.

Steps

  1. Generate a cluster config

    CLUSTER_NAME=dev-cluster
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider docker > $CLUSTER_NAME.yaml
    
  2. Create a cluster

    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
    

    Example command output

    Performing setup and validations
    ✅ validation succeeded {"validation": "docker Provider setup is valid"}
    Creating new bootstrap cluster
    Installing cluster-api providers on bootstrap cluster
    Provider specific setup
    Creating new workload cluster
    Installing networking on workload cluster
    Installing cluster-api providers on workload cluster
    Moving cluster management from bootstrap to workload cluster
    Installing EKS-A custom components (CRD and controller) on workload cluster
    Creating EKS-A CRDs instances on workload cluster
    Installing AddonManager and GitOps Toolkit on workload cluster
    GitOps field not specified, bootstrap flux skipped
    Deleting bootstrap cluster
    🎉 Cluster created!
    
  3. Use the cluster

    Once the cluster is created, you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl get ns
    

    Example command output

    NAME                                STATUS   AGE
    capd-system                         Active   21m
    capi-kubeadm-bootstrap-system       Active   21m
    capi-kubeadm-control-plane-system   Active   21m
    capi-system                         Active   21m
    capi-webhook-system                 Active   21m
    cert-manager                        Active   22m
    default                             Active   23m
    eksa-system                         Active   20m
    kube-node-lease                     Active   23m
    kube-public                         Active   23m
    kube-system                         Active   23m
    

    You can now use the cluster like you would any Kubernetes cluster. Deploy the test application with:

    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section. See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.
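
When you are done experimenting with the local Docker cluster, you can remove it with the delete command. This is a minimal sketch that assumes you still have the cluster config file generated earlier:

eksctl anywhere delete cluster -f $CLUSTER_NAME.yaml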

2.3 - Create production cluster

EKS Anywhere supports a vSphere provider for production-grade EKS Anywhere deployments. EKS Anywhere allows you to provision and manage Amazon EKS on your own infrastructure.

This document walks you through setting up EKS Anywhere in a way that:

  • Deploys a management cluster on your vSphere environment
  • Deploys one or more workload clusters from the management cluster
  • Keeps the management cluster in place so you can use it later to modify, upgrade, and delete workload clusters

Prerequisite Checklist

EKS Anywhere needs to be run on an administrative machine that has certain machine requirements. An EKS Anywhere deployment will also require the availability of certain resources from your VMware vSphere deployment.

Steps

  1. Generate a management cluster config:

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider vsphere > eksa-mgmt-cluster.yaml
    
  2. Modify the management cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to vsphere configuration for information on configuring this cluster config for a vSphere provider.
    • Create at least two control plane nodes, three worker nodes, and three etcd nodes for a production cluster, to provide high availability and rolling upgrades.
    • Optionally, configure the cluster for OIDC, etcd, proxy, gitops, and/or a container registry mirror.
  3. Set Credential Environment Variables

    Before you create a cluster, you will need to set and export these environment variables for your vSphere username and password. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_VSPHERE_USERNAME='billy'
    export EKSA_VSPHERE_PASSWORD='t0p$ecret'
    
  4. Create a management cluster

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create a management cluster:

    eksctl anywhere create cluster -f eksa-mgmt-cluster.yaml
    
  5. Once the management cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  6. Check the management cluster nodes:

    To check that cluster creation completed, list the machines to see the control plane, etcd, and worker nodes:

    kubectl get machines -A
    

    Example command output

    NAMESPACE   NAME                PROVIDERID        PHASE    VERSION
    eksa-system mgmt-b2xyz          vsphere:/xxxxx    Running  v1.21.2-eks-1-21-5
    eksa-system mgmt-etcd-r9b42     vsphere:/xxxxx    Running  
    eksa-system mgmt-md-8-6xr-rnr   vsphere:/xxxxx    Running  v1.21.2-eks-1-21-5
    ...
    

    The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.

  7. Check the management cluster CRD:

    To ensure you are looking at the management cluster, list the cluster custom resource and confirm that the name of its management cluster is the cluster itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.21"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    
  8. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider vsphere > eksa-w01-cluster.yaml
    

    Refer to the management config described earlier for the required and optional settings. The main difference is that you must have a new cluster name and cannot use the same vSphere resources.

  9. Create a workload cluster

    To create a new workload cluster from your management cluster, run this command, identifying:

    • The workload cluster yaml file
    • The management cluster credentials (this causes the workload cluster to be managed from the management cluster)
    eksctl anywhere create cluster \
        -f eksa-w01-cluster.yaml  \
        --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
    

    As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.
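
    Because the workload cluster is managed from the management cluster, you can also watch its machines come up from the management side, mirroring the kubectl get machines check from step 6 (shown as a convenience; the kubeconfig path assumes the mgmt cluster name used above):

    kubectl get machines -A --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig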

  10. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster. Change your credentials to point to the new workload cluster (for example, mgmt-w01), then run the test application with:

    export CLUSTER_NAME=mgmt-w01
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section .

  11. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload cluster, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.
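
    For example, a second workload cluster named w02 could be created as follows (the name and file are illustrative; edit the generated config before creating the cluster):

    eksctl anywhere generate clusterconfig w02 \
       --provider vsphere > eksa-w02-cluster.yaml
    # edit eksa-w02-cluster.yaml with new resource names, then:
    eksctl anywhere create cluster \
        -f eksa-w02-cluster.yaml \
        --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig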

See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

3 - Concepts

The Concepts section will describe the components and overall architecture of EKS Anywhere.

Most of the content of this section will cover how EKS Anywhere deploys, upgrades and otherwise manages Kubernetes clusters. It will point to Kubernetes documentation for specifics on how Kubernetes itself works.

3.1 - Compare EKS Anywhere and EKS

Comparing Amazon EKS Anywhere features to Amazon EKS

Amazon EKS Anywhere is a new deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle support. To learn more, see EKS Anywhere .

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on the AWS cloud. Amazon EKS is certified Kubernetes conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS. To learn more about Amazon EKS, see Amazon Elastic Kubernetes Service .

Comparing Amazon EKS Anywhere to Amazon EKS

Feature | Amazon EKS Anywhere | Amazon EKS

Control plane
K8s control plane management | Managed by customer | Managed by AWS
K8s control plane location | Customer’s datacenter | AWS cloud
Cluster updates | Manual CLI updates for control plane. Flux supported rolling updates for data plane. | Managed in-place updates for control plane and managed rolling updates for data plane.

Compute
Compute options | VMware vSphere | Amazon EC2, AWS Fargate
Supported node operating systems | Bottlerocket, Ubuntu | Amazon Linux 2, Windows Server, Bottlerocket, Ubuntu
Physical hardware (servers, network equipment, storage, etc.) | Managed by customer | Managed by AWS
Serverless | Not supported | Amazon EKS on AWS Fargate

Management
Command line interface (CLI) | eksctl (OSS command line tool) | eksctl (OSS command line tool)
Console view for Kubernetes objects | Optional EKS console connection using EKS Connector (public preview) | Native EKS console connection
Infrastructure-as-code | Cluster manifest, Kubernetes controllers, 3rd-party solutions | AWS CloudFormation, 3rd-party solutions
Logging and monitoring | 3rd-party solutions | CloudWatch, CloudTrail, 3rd-party solutions
GitOps | Flux controller | Flux controller

Functions and tooling
Networking and Security | Cilium CNI and network policy supported | Amazon VPC CNI supported. Calico supported for network policy. Other compatible 3rd-party CNI plugins available.
Load balancer | 3rd-party solutions | Elastic Load Balancing, including Application Load Balancer (ALB) and Network Load Balancer (NLB)
Service mesh | Community or 3rd-party solutions | AWS App Mesh, community, or 3rd-party solutions
Community tools and Helm | Works with compatible community tooling and Helm charts. | Works with compatible community tooling and Helm charts.

Pricing and support
Control plane pricing | Free to download, paid support subscription option | Hourly pricing per cluster
AWS Support | Additional annual subscription (per cluster) for AWS support | Basic support included. Included in paid AWS support plans (developer, business, and enterprise)

3.2 - Cluster creation workflow

Explanation of the process of creating an EKS Anywhere cluster

The EKS Anywhere cluster creation process makes it easy not only to bring up a cluster initially, but also to update configuration settings and to upgrade Kubernetes versions going forward. EKS Anywhere cluster versions use the same Kubernetes distribution versions that are used in the Amazon EKS cloud service.

Each EKS Anywhere cluster is built from a cluster specification file, with the structure of the configuration file based on the target provider for the cluster. Currently, VMware vSphere is the recommended provider for supported EKS Anywhere clusters. So, vSphere is the example provider we step through here.

This document provides an in-depth description of the process of creating an EKS Anywhere cluster. It starts by describing the components to put in place before creating the cluster. Then it shows you what happens at each step of the process. After that, the document describes the attributes of the resulting cluster.

Before cluster creation

Some assets need to be in place before you can create an EKS Anywhere cluster. You need to have an Administrative machine that includes the tools required to create the cluster. Next, you need to get the software tools and artifacts used to build the cluster. Then you also need to prepare the provider, in this case a vCenter environment, on which to create the resulting cluster.

Administrative machine

The Administrative machine is needed to provide:

  • A place to run the commands to create and manage the workload cluster.
  • A Docker container runtime to run a temporary, local bootstrap cluster that creates the resulting workload cluster.
  • A place to hold the kubeconfig file needed to perform administrative actions using kubectl. (The kubeconfig file is stored in the root of the folder created during cluster creation.)

The Administrative machine can be any computer (such as your local laptop) with a supported operating system that meets the requirements. It must also have Internet access to the places where the command line tools and EKS Anywhere artifacts are made available. Likewise, the Administrative machine must be able to reach and have access to the provider (vSphere). See the Install EKS Anywhere guide for Administrative machine requirements.

EKS Anywhere software

To obtain EKS Anywhere software, you need Internet access to the repositories holding that software. EKS Anywhere does not currently support the use of private registries and repositories for the software it needs to draw on during cluster creation. EKS Anywhere software is divided into two types of components: the CLI for managing clusters, and the cluster components and controllers used to run workloads and configure clusters. The software you need to obtain is described in the Install EKS Anywhere section.

The sites to which the administrative machine and the target workload environment need access are listed in the Requirements section. If you are operating behind a firewall that limits access to the Internet, you can configure EKS Anywhere to identify the location of the proxy service you choose to connect to the Internet.

For more information on the software used in EKS Distro, which includes the Kubernetes release and related software in EKS Anywhere, see the EKS Distro Releases GitHub page. For information on the Ubuntu and Bottlerocket operating systems used with EKS Anywhere, see the EKS Anywhere Artifacts page.

Providers

EKS Anywhere uses an infrastructure provider model for creating, upgrading, and managing Kubernetes clusters that leverages the Kubernetes Cluster API project. The first supported EKS Anywhere provider, VMware vSphere, is implemented based on the Kubernetes Cluster API Provider vSphere (CAPV) specifications.

Like Cluster API, EKS Anywhere runs a kind cluster on the local Administrative machine to act as a bootstrap cluster. However, instead of using CAPI directly with the clusterctl command to manage the workload cluster, you use the eksctl anywhere command which abstracts that process for you, including calling clusterctl under the covers.
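
While eksctl anywhere create cluster is running, you can see that temporary bootstrap cluster on the Administrative machine, because its single kind node runs as a Docker container. A minimal check (the exact container name is an assumption; it typically includes your cluster name):

docker ps --filter "name=control-plane"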

As for other providers, the EKS Anywhere project documents the Cluster API Provider Docker (CAPD), but doesn’t support it for production use. Expect other providers to be supported for EKS Anywhere in the future. If you are interested in EKS Anywhere supporting a different provider, feel free to create an issue on GitHub for consideration.

With your Administrative machine in place, to prepare the vSphere provider for EKS Anywhere you need to make sure your vSphere environment meets the EKS Anywhere requirements and that permissions are set up properly. If you don’t want to use the default OVA images, you can import the OVAs representing the operating systems and Kubernetes releases you want.

Creating a cluster

With the provider (vSphere) prepared and the Administrative machine set up to run Docker and the required binaries, you can create an EKS Anywhere cluster. This section steps through an example of an EKS Anywhere cluster being created on a vSphere provider. Once you understand this process, you can use the Getting started documentation to create your own cluster.

Starting the process

To start, the eksctl anywhere command is used to generate a cluster config file, which you can then modify and use to create the cluster. The following diagram illustrates what happens when you start the cluster creation process:

Start creating EKS Anywhere cluster

1. Generate an EKS Anywhere config file

When you run eksctl anywhere generate clusterconfig, the two pieces of information you provide are the name of the cluster ($CLUSTER_NAME) and the type of provider (-p vsphere, in this example). Then you can direct the yaml cluster config output into a file (> $CLUSTER_NAME.yaml). For example:

eksctl anywhere generate clusterconfig $CLUSTER_NAME -p vsphere > $CLUSTER_NAME.yaml

The provider is important because the type of cluster config created is based on the provider. The Docker provider is the only other provider documented with EKS Anywhere, although it is unsupported for production use.

The result of this command is a config file template that you need to modify for the specific instance of your provider.

2. Modify the EKS Anywhere config file

Using the generated cluster config file, make modifications to suit your situation. Details about this config file are contained in the vSphere Config documentation. There are several things to consider when modifying the cluster config file:

Pay particular attention to which settings are optional and which are required. Also, not all properties can be upgraded, so it is important to get those settings right at cluster creation. See the supported cluster properties for the GitOps and eksctl anywhere upgrade methods of cluster upgrades for information on which properties can be modified after initial cluster creation.
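
Before editing, it can help to see which configuration objects the generated file contains. A simple grep works for this (a convenience only; the object kinds you see depend on your provider):

grep '^kind:' $CLUSTER_NAME.yaml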

3. Launch the cluster creation

Once you have modified the cluster configuration file, use eksctl anywhere create cluster -f $CLUSTER_NAME.yaml as described in the production environment section to start the cluster creation process. To see details on the cluster creation process, you can increase the verbosity (-v=9 provides maximum verbosity).
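
For example, to start the creation with maximum verbosity:

eksctl anywhere create cluster -f $CLUSTER_NAME.yaml -v=9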

4. Authenticate and create bootstrap cluster

After authenticating to vSphere and validating the assets there, the cluster creation process starts off by creating a temporary Kubernetes bootstrap cluster on the Administrative machine. If you are watching the output of eksctl anywhere create cluster for those steps, you should see something similar to the following:

To begin, the cluster creation process runs a series of govc commands to check on the vSphere environment. First, it checks that the vSphere environment is available:

Performing setup and validations
✅ Connected to server

Using the URL and credentials provided in the cluster spec files, it authenticates to the vSphere provider:

✅ Authenticated to vSphere

It validates the datacenter exists:

✅ Datacenter validated

It validates that the datacenter network exists:

✅ Network validated

It validates that the identified datastore (to store your EKS Anywhere cluster) exists, that the folder holding your EKS Anywhere cluster VMs exists, and that the resource pools containing compute resources exist. If you have multiple VSphereMachineConfig objects in your config file, you will see these validations repeated:

✅ Datastore validated
✅ Folder validated
✅ Resource pool validated

It validates the virtual machine templates to be used for the control plane and worker nodes (such as ubuntu-2004-kube-v1.20.7):

✅ Control plane and Workload templates validated

If all those validations passed, you will see this message:

✅ Vsphere Provider setup is valid

Next, the process runs the kind command to build a single-node Kubernetes bootstrap cluster on the Administrative machine. This includes pulling the kind node image, preparing the node, writing the configuration, starting the control-plane, installing CNI, and installing the StorageClass. You will see:

Creating new bootstrap cluster

After this point the bootstrap cluster is installed, but not yet fully configured.

Continuing cluster creation

If all goes well, the cluster should be created from the eksctl anywhere create cluster command and the config file you provided without any further actions from you. The following diagram illustrates the activities that occur next:

Continue creating EKS Anywhere cluster

1. Add CAPI management

Cluster API (CAPI) management is added to the bootstrap cluster to direct the creation of the workload cluster.

2. Set up cluster

Configure the control plane and worker nodes.

3. Add Cilium networking

Add Cilium as the CNI plugin to use for networking between the cluster services and pods.

4. Add storage

Add the default storage class to the cluster.

5. Add CAPI to workload cluster

Add the CAPI service to the workload cluster in preparation for it to take over management of the cluster after the cluster creation is completed and the bootstrap cluster is deleted. The bootstrap cluster can then begin moving the CAPI objects over to the workload cluster, so it can take over the management of itself.

The following text follows along with the output from eksctl anywhere create cluster as just described.

First, it installs the CAPI service on the bootstrap cluster:

Installing cluster-api providers on bootstrap cluster

Next, it performs provider-specific setup for core components. For the default configuration, you should see these: etcdadm-bootstrap, etcdadm-controller, control-plane-kubeadm, and infrastructure-vsphere. It also sets up cert-manager, and the CAPI controller-manager is configured:

Provider specific setup

With the bootstrap cluster running and configured on the Administrative machine, the creation of the workload cluster begins. It uses kubectl to apply a workload cluster configuration. Then it waits for etcd, the control plane, and the worker nodes to be ready:

Creating new workload cluster

Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the workload cluster:

Installing networking on workload cluster

Next, the default storage class is installed on the workload cluster:

Installing storage class on workload cluster

After that, the CAPI providers are configured on the workload cluster, in preparation for the workload cluster to take over responsibilities for running the components needed to manage itself.

Installing cluster-api providers on workload cluster

With CAPI running on the workload cluster, CAPI objects for the workload cluster are moved from the bootstrap cluster to the workload cluster’s CAPI service (done internally with the clusterctl command):

Moving cluster management from bootstrap to workload cluster

At this point, the cluster creation process will add Kubernetes CRDs and other addons that are specific to EKS Anywhere. That configuration is applied directly to the cluster:

Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing AddonManager and GitOps Toolkit on workload cluster

If you did not specify GitOps support, starting the flux service is skipped:

GitOps field not specified, bootstrap flux skipped

The cluster configuration is saved:

Writing cluster config file

With the workload cluster up, and the CAPI service running on the workload cluster, the bootstrap cluster is no longer needed and is deleted:

Delete EKS Anywhere bootstrap cluster

Deleting bootstrap cluster

Cluster creation is complete:

🎉 Cluster created!

At this point, the workload cluster is ready to use, both to run workloads and to accept requests to change, update, or upgrade the cluster itself. You can continue to use eksctl anywhere to manage your cluster, with EKS Anywhere handling the fact that CAPI management is now being fulfilled from the workload cluster instead of the bootstrap cluster.

After cluster creation

With the EKS Anywhere cluster up and running, you might be interested to know how your cluster is set up and what it is composed of. The following sections describe different aspects of an EKS Anywhere cluster on a vSphere provider and what you should know about them going forward.

See Add integrations for information on example third-party tools for adding features to EKS Anywhere.

Networking

Networking features of your EKS Anywhere cluster start with how the virtual machines in the EKS-A cluster in vSphere are set up. The current state of networking at the vSphere node level includes the following:

  • DHCP: EKS Anywhere requires that a DHCP server be available to the control plane and worker nodes in vSphere for them to obtain their IP addresses. There is currently no support for static IP addresses or multi-network clusters. All control plane and worker nodes are on the same network.
  • CAPI endpoint: A static IP address should have been assigned to the control plane configuration endpoint, to provide access to the Cluster API. It should have been set up to not conflict with any other node IP addresses in the cluster. This is a specific requirement of CAPI, not EKS Anywhere.
  • Proxy server: If a proxy server was identified to the EKS Anywhere workload cluster, that server should have inbound access from the cluster nodes and outbound access to the internet.

Networking for the cluster itself has the following attributes:

  • Cilium CNI: The Cilium Kubernetes CNI is used to provide networking between control plane and data plane components. No other CNI plugins, including Cilium Enterprise, are supported at this time.
  • Pod/Service IP ranges: Separate IP address blocks were assigned from the configuration file during cluster creation for the Pods network and Services network managed by Cilium. Refer to the clusterNetwork section of your configuration file to see how the cidrBlocks for pods and services were set.
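
To confirm the ranges that were applied, you can look at the clusterNetwork block in the cluster config file you used at creation time (a simple grep; the file name is assumed to match your cluster name):

grep -A 6 'clusterNetwork' $CLUSTER_NAME.yaml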

Networking setups for accessing cluster resources on your running EKS Anywhere cluster include the following documented features:

  • Load balancers: You can add external load balancers to your EKS Anywhere cluster. The EKS Anywhere project documents how to configure Kube-Vip and MetalLB.
  • Ingress controller: You can add a Kubernetes ingress controller to EKS Anywhere. The project documents the use of the Emissary-ingress ingress controller.

Operating systems

The Administrative machine, running Ubuntu or Mac OS, can continue to use its binaries to manage the EKS Anywhere cluster. You may need to update those binaries (kubectl, eksctl anywhere, and others) from time to time.

In the workload cluster itself, the operating system on each node is provided from either Bottlerocket or Ubuntu OVAs. Note that it is not recommended that you add software or change the configuration of these systems once they are running in the cluster. In fact, Bottlerocket contains limited writeable areas and does not include a software package management system.

If you need to modify an operating system, you can rebuild an Ubuntu OVA to use with EKS Anywhere. In other words, all operating system changes should be done before the OVA is added to your EKS Anywhere cluster.

Authentication

Supported authentication types are listed in the AuthN / AuthZ section of the EKS Anywhere FAQ.

Storage

The amount of storage assigned to each virtual machine is 25GiB, by default. It could be different in your case if you had changed the diskGiB field in the EKS Anywhere config. As for application storage, EKS Anywhere configures a default storage class and supports adding compatible Container Storage Interface (CSI) drivers to a running workload cluster. See Kubernetes Storage for details.
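
To see the storage class that was installed by default, you can query the workload cluster directly (with the cluster’s KUBECONFIG exported as shown earlier):

kubectl get storageclass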

4 - Tasks

Common actions and set-up you may need for EKS-A

4.1 - Workload management

Common tasks for managing workloads.

4.1.1 - Deploy test workload

How to deploy a workload to check that your cluster is working properly

We’ve created a simple test application for you to verify your cluster is working properly. You can deploy it with the following command:

kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"

To see the new pod running in your cluster, type:

kubectl get pods -l app=hello-eks-a

Example output:

NAME                                     READY   STATUS    RESTARTS   AGE
hello-eks-a-745bfcd586-6zx6b   1/1     Running   0          22m

To check the logs of the container to make sure it started successfully, type:

kubectl logs -l app=hello-eks-a

There is also a default web page being served from the container. You can forward the deployment port to your local machine with

kubectl port-forward deploy/hello-eks-a 8000:80

Now you should be able to open your browser, or use curl against http://localhost:8000, to view the example application page.

curl localhost:8000

Example output:

⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒

Thank you for using

███████╗██╗  ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗  █████╔╝ ███████╗
██╔══╝  ██╔═██╗ ╚════██║
███████╗██║  ██╗███████║
╚══════╝╚═╝  ╚═╝╚══════╝

 █████╗ ███╗   ██╗██╗   ██╗██╗    ██╗██╗  ██╗███████╗██████╗ ███████╗
██╔══██╗████╗  ██║╚██╗ ██╔╝██║    ██║██║  ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗  ██████╔╝█████╗
██╔══██║██║╚██╗██║  ╚██╔╝  ██║███╗██║██╔══██║██╔══╝  ██╔══██╗██╔══╝
██║  ██║██║ ╚████║   ██║   ╚███╔███╔╝██║  ██║███████╗██║  ██║███████╗
╚═╝  ╚═╝╚═╝  ╚═══╝   ╚═╝    ╚══╝╚══╝ ╚═╝  ╚═╝╚══════╝╚═╝  ╚═╝╚══════╝

You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-qp6bg

For more information check out
https://anywhere.eks.amazonaws.com

⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒

If you would like to expose your applications with an external load balancer or an ingress controller, you can follow the steps in Adding an external load balancer.

4.1.2 - Add an external load balancer

How to deploy a load balancer controller to expose a workload running in EKS Anywhere

A production-quality Kubernetes cluster requires planning and preparation for various networking features.

The purpose of this document is to walk you through getting set up with a recommended Kubernetes Load Balancer for EKS Anywhere. Load Balancing is essential in order to maximize availability and scalability. It enables efficient distribution of incoming network traffic among multiple backend services.

4.1.2.1 - RECOMMENDED: Kube-Vip for Service-type Load Balancer

How to set up kube-vip for Service-type Load Balancer (Recommended)

We recommend using the Kube-Vip cloud controller to expose your services as service-type Load Balancer. Detailed information about Kube-Vip can be found here.

There are two ways Kube-Vip can manage virtual IP addresses on your network. Please see the following guides for ARP or BGP mode depending on your on-prem networking preferences.

Setting up Kube-Vip for Service-type Load Balancer

Kube-Vip Service-type Load Balancer can be set up in either ARP mode or BGP mode

4.1.2.1.1 - Kube-Vip ARP Mode

How to set up kube-vip for Service-type Load Balancer in ARP mode

In ARP mode, kube-vip will perform leader election and assign the Virtual IP to the leader. This node will inherit the VIP and become the load-balancing leader within the cluster.

Setting up Kube-Vip for Service-type Load Balancer in ARP mode

  1. Enable strict ARP in kube-proxy as it’s required for kube-vip

    kubectl get configmap kube-proxy -n kube-system -o yaml | \
    sed -e "s/strictARP: false/strictARP: true/" | \
    kubectl apply -f - -n kube-system
    
  2. Create a configMap to specify the IP range for the load balancer. You can use either a CIDR block or an IP range.

    CIDR=192.168.0.0/24 # Use your CIDR range here
    kubectl create configmap --namespace kube-system kubevip --from-literal cidr-global=${CIDR}
    
    IP_START=192.168.0.0  # Use the starting IP in your range
    IP_END=192.168.0.255  # Use the ending IP in your range
    kubectl create configmap --namespace kube-system kubevip --from-literal range-global=${IP_START}-${IP_END}
    
  3. Deploy kubevip-cloud-provider

    kubectl apply -f https://kube-vip.io/manifests/controller.yaml
    
  4. Create ClusterRoles and RoleBindings for kube-vip DaemonSet

    kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
    
  5. Create the kube-vip DaemonSet

    An example manifest has been included at the end of this document which you can use in place of this step.

    alias kube-vip="docker run --network host --rm plndr/kube-vip:v0.3.5"
    kube-vip manifest daemonset --services --inCluster --arp --interface eth0 | kubectl apply -f -
    
  6. Deploy the Hello EKS Anywhere test application.

    kubectl apply -f https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml
    
  7. Expose the hello-eks-a service

    kubectl expose deployment hello-eks-a --port=80 --type=LoadBalancer --name=hello-eks-a-lb
    
  8. Describe the service to get the IP. The external IP will be the one in the CIDR range specified in step 2.

    EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.spec.externalIP}')
    
  9. Ensure the load balancer is working by curl’ing the IP you got in step 8

    curl ${EXTERNAL_IP}
    

You should see something like this in the output

   ⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑

   Thank you for using

   ███████╗██╗  ██╗███████╗
   ██╔════╝██║ ██╔╝██╔════╝
   █████╗  █████╔╝ ███████╗
   ██╔══╝  ██╔═██╗ ╚════██║
   ███████╗██║  ██╗███████║
   ╚══════╝╚═╝  ╚═╝╚══════╝

    █████╗ ███╗   ██╗██╗   ██╗██╗    ██╗██╗  ██╗███████╗██████╗ ███████╗
   ██╔══██╗████╗  ██║╚██╗ ██╔╝██║    ██║██║  ██║██╔════╝██╔══██╗██╔════╝
   ███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗  ██████╔╝█████╗
   ██╔══██║██║╚██╗██║  ╚██╔╝  ██║███╗██║██╔══██║██╔══╝  ██╔══██╗██╔══╝
   ██║  ██║██║ ╚████║   ██║   ╚███╔███╔╝██║  ██║███████╗██║  ██║███████╗
   ╚═╝  ╚═╝╚═╝  ╚═══╝   ╚═╝    ╚══╝╚══╝ ╚═╝  ╚═╝╚══════╝╚═╝  ╚═╝╚══════╝
                                                                     
   You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr

   For more information check out
   https://anywhere.eks.amazonaws.com

⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑

Here is an example manifest for kube-vip from step 5. It is also available here.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: kube-vip-ds
    spec:
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "true"
        - name: vip_interface
          value: eth0
        - name: port
          value: "6443"
        - name: vip_cidr
          value: "32"
        - name: svc_enable
          value: "true"
        - name: vip_startleader
          value: "false"
        - name: vip_addpeerstolb
          value: "true"
        - name: vip_localpeer
          value: ip-172-20-40-207:172.20.40.207:10000
        - name: vip_address
        image: plndr/kube-vip:v0.3.5
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_TIME
      hostNetwork: true
      serviceAccountName: kube-vip
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0

4.1.2.1.2 - Kube-Vip BGP Mode

How to set up kube-vip for Service-type Load Balancer in BGP mode

In BGP mode, kube-vip will assign the Virtual IP to all running Pods. All nodes, therefore, will advertise the VIP address.

Prerequisites

  • BGP-capable network switch connected to EKS-A cluster
  • Vendor-specific BGP configuration on switch

Required BGP settings on network vendor equipment are described in BGP Configuration on Network Switch Side section below.

Setting up Kube-Vip for Service-type Load Balancer in BGP mode

  1. Create a configMap to specify the IP range for the load balancer. You can use either a CIDR block or an IP range.

    CIDR=192.168.0.0/24 # Use your CIDR range here
    kubectl create configmap --namespace kube-system kubevip --from-literal cidr-global=${CIDR}
    
    IP_START=192.168.0.0  # Use the starting IP in your range
    IP_END=192.168.0.255  # Use the ending IP in your range
    kubectl create configmap --namespace kube-system kubevip --from-literal range-global=${IP_START}-${IP_END}
    
  2. Deploy kubevip-cloud-provider

    kubectl apply -f https://kube-vip.io/manifests/controller.yaml
    
  3. Create ClusterRoles and RoleBindings for kube-vip Daemonset

    kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
    
  4. Create the kube-vip DaemonSet

    alias kube-vip="docker run --network host --rm plndr/kube-vip:latest"
    kube-vip manifest daemonset \
        --interface lo \
        --localAS <AS#> \
        --sourceIF <src interface> \
        --services \
        --inCluster \
        --bgp \
        --bgppeers <bgp-peer1>:<peerAS>::<bgp-multiphop-true-false>,<bgp-peer2>:<peerAS>::<bgp-multihop-true-false> | kubectl apply -f -
    

    Explanation of the options provided above to kube-vip for manifest generation:

    --interface - This interface needs to be set to the loopback in order to suppress ARP responses from worker nodes that get the LoadBalancer VIP assigned
    --localAS - Local Autonomous System ID
    --sourceIF - source interface on the worker node which will be used to communicate BGP with the switch
    --services - Service Type LoadBalancer (not Control Plane)
    --inCluster - Defaults to looking inside the Pod for the token
    --bgp - Enables BGP peering from kube-vip
    --bgppeers - Comma separated list of BGP peers in the format <address:AS:password:multihop>
    

    Below is an example DaemonSet creation command.

    kube-vip manifest daemonset \
        --interface $INTERFACE \
        --localAS 65200 \
        --sourceIF eth0 \
        --services \
        --inCluster \
        --bgp \
        --bgppeers 10.69.20.2:65000::false,10.69.20.3:65000::false
    

    Below is the manifest generated with these example values.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      creationTimestamp: null
      name: kube-vip-ds
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          name: kube-vip-ds
      template:
        metadata:
          creationTimestamp: null
          labels:
            name: kube-vip-ds
        spec:
          containers:
          - args:
            - manager
            env:
            - name: vip_arp
              value: "false"
            - name: vip_interface
              value: lo
            - name: port
              value: "6443"
            - name: vip_cidr
              value: "32"
            - name: svc_enable
              value: "true"
            - name: cp_enable
              value: "false"
            - name: vip_startleader
              value: "false"
            - name: vip_addpeerstolb
              value: "true"
            - name: vip_localpeer
              value: docker-desktop:192.168.65.3:10000
            - name: bgp_enable
              value: "true"
            - name: bgp_routerid
            - name: bgp_source_if
              value: eth0
            - name: bgp_as
              value: "65200"
            - name: bgp_peeraddress
            - name: bgp_peerpass
            - name: bgp_peeras
              value: "65000"
            - name: bgp_peers
              value: 10.69.20.2:65000::false,10.69.20.3:65000::false
            - name: bgp_routerinterface
              value: eth0
            - name: vip_address
            image: ghcr.io/kube-vip/kube-vip:v0.3.7
            imagePullPolicy: Always
            name: kube-vip
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
                - NET_RAW
                - SYS_TIME
          hostNetwork: true
          serviceAccountName: kube-vip
      updateStrategy: {}
    status:
      currentNumberScheduled: 0
      desiredNumberScheduled: 0
      numberMisscheduled: 0
      numberReady: 0
    
  5. Manually add the following to the manifest file as shown in the example above

    - name: bgp_routerinterface
      value: eth0
    
  6. Deploy the Hello EKS Anywhere test application.

    kubectl apply -f https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml
    
  7. Expose the hello-eks-a service

    kubectl expose deployment hello-eks-a --port=80 --type=LoadBalancer --name=hello-eks-a-lb
    
  8. Describe the service to get the IP. The external IP will be the one in the CIDR range specified in step 1.

    EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.spec.externalIP}')
    
  9. Ensure the load balancer is working by curl’ing the IP you got in step 8

    curl ${EXTERNAL_IP}
    

You should see something like this in the output

   ⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒

   Thank you for using

   ███████╗██╗  ██╗███████╗
   ██╔════╝██║ ██╔╝██╔════╝
   █████╗  █████╔╝ ███████╗
   ██╔══╝  ██╔═██╗ ╚════██║
   ███████╗██║  ██╗███████║
   ╚══════╝╚═╝  ╚═╝╚══════╝

    █████╗ ███╗   ██╗██╗   ██╗██╗    ██╗██╗  ██╗███████╗██████╗ ███████╗
   ██╔══██╗████╗  ██║╚██╗ ██╔╝██║    ██║██║  ██║██╔════╝██╔══██╗██╔════╝
   ███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗  ██████╔╝█████╗
   ██╔══██║██║╚██╗██║  ╚██╔╝  ██║███╗██║██╔══██║██╔══╝  ██╔══██╗██╔══╝
   ██║  ██║██║ ╚████║   ██║   ╚███╔███╔╝██║  ██║███████╗██║  ██║███████╗
   ╚═╝  ╚═╝╚═╝  ╚═══╝   ╚═╝    ╚══╝╚══╝ ╚═╝  ╚═╝╚══════╝╚═╝  ╚═╝╚══════╝
                                                                     
   You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr

   For more information check out
   https://anywhere.eks.amazonaws.com

   ⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒

BGP Configuration on Network Switch Side

BGP configuration will vary depending upon network vendor equipment and local network environment. Listed below are the basic conceptual configuration steps for BGP operation. Included with each step is a sample configuration from a Cisco Switch (Cisco Nexus 9000) running in NX-OS mode. You will need to find similar steps in your network vendor equipment’s manual for BGP configuration on your specific switch.

  1. Configure BGP local AS, router ID, and timers

    router bgp 65000
      router-id 10.69.5.1
      timers bgp 15 45
      log-neighbor-changes
    
  2. Configure BGP neighbors

    BGP neighbors can be configured individually or as a subnet

    a. Individual BGP neighbors

    Determine the IP addresses of each of the EKS-A nodes via the VMware console or DHCP server allocation.
    In the example below, node IP addresses are 10.69.20.165, 10.69.20.167, and 10.69.20.170.
    Note that remote-as is the AS used as the bgp_as value in the generated example manifest above.

    neighbor 10.69.20.165
        remote-as 65200
        address-family ipv4 unicast
          soft-reconfiguration inbound always
    neighbor 10.69.20.167
        remote-as 65200
        address-family ipv4 unicast
          soft-reconfiguration inbound always
    neighbor 10.69.20.170
        remote-as 65200
        address-family ipv4 unicast
          soft-reconfiguration inbound always
    

    b. Subnet-based BGP neighbors

    Determine the subnet address and netmask of the EKS-A nodes. In this example the EKS-A nodes are on 10.69.20.0/24 subnet. Note that remote-as is the AS used as the bgp_as value in the generated example manifest above.

    neighbor 10.69.20.0/24
        remote-as 65200
        address-family ipv4 unicast
          soft-reconfiguration inbound always
    
  3. Verify BGP neighbors are established with each node

    switch% show ip bgp summary
    information for VRF default, address family IPv4 Unicast
    BGP router identifier 10.69.5.1, local AS number 65000
    BGP table version is 181, IPv4 Unicast config peers 7, capable peers 7
    32 network entries and 63 paths using 11528 bytes of memory
    BGP attribute entries [16/2752], BGP AS path entries [6/48]
    BGP community entries [0/0], BGP clusterlist entries [0/0]
    3 received paths for inbound soft reconfiguration
    3 identical, 0 modified, 0 filtered received paths using 0 bytes
    
    Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
    10.69.20.165    4 65200   34283   34276      181    0    0    5d20h 1
    10.69.20.167    4 65200   34543   34531      181    0    0    5d20h 1
    10.69.20.170    4 65200   34542   34530      181    0    0    5d20h 1
    
  4. Verify routes learned from EKS-A cluster match the external IP address assigned by kube-vip LoadBalancer configuration

    In the example below, 10.35.10.13 is the external kube-vip LoadBalancer IP

    switch% show ip bgp neighbors 10.69.20.165 received-routes
    
    Peer 10.69.20.165 routes for address family IPv4 Unicast:
    BGP table version is 181, Local Router ID is 10.69.5.1
    Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
    Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
    Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup, 2 - best2
    
       Network            Next Hop            Metric     LocPrf     Weight Path
    *>e10.35.10.13/32     10.69.20.165                                   0 65200 i
    

4.1.2.2 - Alternative: MetalLB Service-type Load Balancer

How to set up MetalLB for Service-type Load Balancer

The purpose of this document is to walk you through getting set up with the MetalLB Kubernetes load balancer for your cluster. This is suggested as an alternative if your networking requirements do not allow you to use Kube-Vip.

MetalLB is a native Kubernetes load balancing solution for bare-metal Kubernetes clusters. Detailed information about MetalLB can be found here.

Prerequisites

You will need Helm installed on your system, as this is the easiest way to deploy MetalLB. Helm can be installed from here. MetalLB installation is described here.
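
If Helm is not already present on your Administrative machine, one common way to install it is with the official Helm install script (shown here as an example; see the Helm documentation for other methods):

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh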

Steps

  1. Enable strict ARP as it’s required for MetalLB

    kubectl get configmap kube-proxy -n kube-system -o yaml | \
    sed -e "s/strictARP: false/strictARP: true/" | \
    kubectl apply -f - -n kube-system
    
  2. Add the Helm repo for MetalLB

    helm repo add metallb https://metallb.github.io/metallb
    
  3. Create an override file to specify LB IP range

    <LB-IP-range> can be a CIDR block like 198.18.210.0/24 or a range like 198.18.210.0-198.18.210.10.

    cat << 'EOF' >> values.yaml
    configInline:
      address-pools:
        - name: default
          protocol: layer2
          addresses:
          - <LB-IP-range>
    EOF
    
  4. Install MetalLB on your cluster

    helm install metallb metallb/metallb -f values.yaml
    
  5. Deploy the Hello EKS Anywhere test application.

    kubectl apply -f https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml
    
  6. Expose the hello-eks-a deployment

    kubectl expose deployment hello-eks-a --port=80 --type=LoadBalancer --name=hello-eks-a-lb
    
  7. Get the load balancer external IP

    EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
  8. Access the service at the external IP

    curl ${EXTERNAL_IP}
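
To confirm that MetalLB assigned an address before you test, you can also check the Service directly. This is a quick verification sketch; the address shown will come from the LB-IP range you configured in values.yaml.

kubectl get svc hello-eks-a-lb

The EXTERNAL-IP column should show an address from your configured range. If it stays pending, check the logs of the MetalLB controller and speaker pods installed by the Helm chart.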
    

4.1.3 - Add an ingress controller

How to deploy an ingress controller for simple host- or URL-based HTTP routing into workloads running in EKS-A

A production-quality Kubernetes cluster requires planning and preparation for various networking features.

The purpose of this document is to walk you through setting up a recommended Kubernetes ingress controller for EKS Anywhere. An ingress controller is essential for defining the routing rules that decide how external users access services running in a Kubernetes cluster, and it enables efficient distribution of incoming network traffic among multiple backend services.

Current Recommendation: Emissary-ingress

We currently recommend using Emissary-ingress Kubernetes Ingress Controller by Ambassador. Emissary-ingress allows you to route and secure traffic to your cluster with an Open Source Kubernetes-native API Gateway. Detailed information about Emissary-ingress can be found here .

Setting up Emissary-ingress for Ingress Controller

  1. Deploy the Hello EKS Anywhere test application.

    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    
  2. Set up kube-vip Service-type Load Balancer in your cluster by following the instructions here . Alternatively, you can set up the MetalLB Load Balancer by following the instructions here .

  3. Install Ambassador CRDs and ClusterRoles and RoleBindings

    kubectl apply -f "https://www.getambassador.io/yaml/ambassador/ambassador-crds.yaml"
    kubectl apply -f "https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml"
    
  4. Create Ambassador Service with Type LoadBalancer.

    kubectl apply -f - <<EOF
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ambassador
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      ports:
      - port: 80
        targetPort: 8080
      selector:
        service: ambassador
    EOF
    
  5. Create a Mapping on your cluster. This Mapping tells Emissary-ingress to route all traffic inbound to the /backend/ path to the hello-eks-a Service.

    kubectl apply -f - <<EOF
    ---
    apiVersion: getambassador.io/v2
    kind: Mapping
    metadata:
      name: hello-backend
    spec:
      prefix: /backend/
      service: hello-eks-a
    EOF
    
  6. Store the Emissary-ingress load balancer IP address to a local environment variable. You will use this variable to test accessing your service.

    export EMISSARY_LB_ENDPOINT=$(kubectl get svc ambassador -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")
    
  7. Test the configuration by accessing the service through the Emissary-ingress load balancer.

    curl -Lk http://$EMISSARY_LB_ENDPOINT/backend/
    

    NOTE: The URL base path must exactly match the prefix specified in the Mapping, including the trailing ‘/’

    You should see something like this in the output

    ⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒
    
    Thank you for using
    
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—  β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—                                             
    β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•”β•β•β•β•β•                                             
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—                                             
    β–ˆβ–ˆβ•”β•β•β•  β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•— β•šβ•β•β•β•β–ˆβ–ˆβ•‘                                             
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘                                             
    β•šβ•β•β•β•β•β•β•β•šβ•β•  β•šβ•β•β•šβ•β•β•β•β•β•β•                                             
    
     β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—    β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—  β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘    β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ•‘ β–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  
    β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘  β•šβ–ˆβ–ˆβ•”β•  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•  β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•  
    β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•‘   β•šβ–ˆβ–ˆβ–ˆβ•”β–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β•šβ•β•  β•šβ•β•β•šβ•β•  β•šβ•β•β•β•   β•šβ•β•    β•šβ•β•β•β•šβ•β•β• β•šβ•β•  β•šβ•β•β•šβ•β•β•β•β•β•β•β•šβ•β•  β•šβ•β•β•šβ•β•β•β•β•β•β•
    
    You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr
    
    For more information check out
    https://anywhere.eks.amazonaws.com
    
    ⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒⬑⬒
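
If the curl test does not return the expected output, a quick way to narrow down the problem is to confirm that the ambassador Service received a load balancer address and that the endpoint variable was populated. A minimal verification sketch:

kubectl get svc ambassador
echo ${EMISSARY_LB_ENDPOINT}

The Service should show an EXTERNAL-IP assigned by kube-vip or MetalLB, and the environment variable should contain that same address or hostname.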
    
    

4.1.4 - Secure connectivity with CNI and Network Policy

How to validate the setup of Cilium CNI and deploy network policies to secure workload connectivity.

EKS Anywhere uses Cilium for pod networking and security.

Cilium is installed by default as a Kubernetes CNI plugin and so is already running in your EKS Anywhere cluster.

This section provides information about:

  • Understanding Cilium components and requirements

  • Validating your Cilium networking setup.

  • Using Cilium to secure workload connectivity with Kubernetes Network Policy.

Cilium Components

The primary Cilium Agent runs as a DaemonSet on each Kubernetes node. Each cluster also includes a Cilium Operator Deployment to handle certain cluster-wide operations. For EKS Anywhere, Cilium is configured to use the Kubernetes API server as the identity store, so no etcd cluster connectivity is required.

In a properly working environment, each Kubernetes node should have a Cilium Agent pod (cilium-WXYZ) in “Running” and ready (1/1) state. By default there will be two Cilium Operator pods (cilium-operator-123456-WXYZ) in “Running” and ready (1/1) state on different Kubernetes nodes for high-availability.

Run the following command to ensure all cilium related pods are in a healthy state.

kubectl get pods -n kube-system | grep cilium

Example output for this command in a 3 node environment is:

kube-system   cilium-fsjmd                                1/1     Running           0          4m
kube-system   cilium-nqpkv                                1/1     Running           0          4m
kube-system   cilium-operator-58ff67b8cd-jd7rf            1/1     Running           0          4m
kube-system   cilium-operator-58ff67b8cd-kn6ss            1/1     Running           0          4m
kube-system   cilium-zz4mt                                1/1     Running           0          4m

Network Connectivity Requirements

To provide pod connectivity within an on-premises environment, the Cilium agent implements an overlay network using the GENEVE tunneling protocol. As a result, UDP port 6081 connectivity MUST be allowed by any firewall running between Kubernetes nodes running the Cilium agent.

Allowing ICMP Ping (type = 8, code = 0) as well as TCP port 4240 is also recommended in order for Cilium Agents to validate node-to-node connectivity as part of internal status reporting.
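
As an illustration only, on a Linux firewall managed with iptables the required openings between node subnets might look like the following. The subnet is reused from the BGP examples earlier in this guide and the tooling is an assumption; substitute your own node network and firewall product.

# Allow the GENEVE overlay between Cilium agents (required for pod-to-pod traffic)
iptables -A INPUT -p udp --dport 6081 -s 10.69.20.0/24 -j ACCEPT
# Allow Cilium node-to-node health checks (recommended)
iptables -A INPUT -p tcp --dport 4240 -s 10.69.20.0/24 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -s 10.69.20.0/24 -j ACCEPT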

Validating Connectivity

Cilium includes a connectivity check YAML that can be deployed into a test namespace in order to validate proper installation and connectivity within a Kubernetes cluster. If the connectivity check passes, all pods created by the YAML manifest will reach “Running” and ready (1/1) state. We recommend running this test only once you have multiple worker nodes in your environment to ensure you are validating cross-node connectivity.

It is important that this test is run in a dedicated namespace, with no existing network policy. For example:

kubectl create ns cilium-test
kubectl apply -n cilium-test -f https://docs.isovalent.com/v1.10/public/connectivity-check-eksa.yaml

Once all pods have started, simply checking the status of pods in this namespace will indicate whether the tests have passed:

kubectl get pods -n cilium-test

Successful test output will show all pods in a “Running” and ready (1/1) state:

NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-d576c5f8b-zlfsk                                   1/1     Running   0          59s
echo-b-787dc99778-sxlcc                                  1/1     Running   0          59s
echo-b-host-675cd8cfff-qvvv8                             1/1     Running   0          59s
host-to-b-multi-node-clusterip-6fd884bcf7-pvj5d          1/1     Running   0          58s
host-to-b-multi-node-headless-79f7df47b9-8mzbp           1/1     Running   0          58s
pod-to-a-57695cc7ff-6tqpv                                1/1     Running   0          59s
pod-to-a-allowed-cnp-7b6d5ff99f-4rhrs                    1/1     Running   0          59s
pod-to-a-denied-cnp-6887b57579-zbs2t                     1/1     Running   0          59s
pod-to-b-intra-node-hostport-7d656d7bb9-6zjrl            1/1     Running   0          57s
pod-to-b-intra-node-nodeport-569d7c647-76gn5             1/1     Running   0          58s
pod-to-b-multi-node-clusterip-fdf45bbbc-8l4zz            1/1     Running   0          59s
pod-to-b-multi-node-headless-64b6cbdd49-9hcqg            1/1     Running   0          59s
pod-to-b-multi-node-hostport-57fc8854f5-9d8m8            1/1     Running   0          58s
pod-to-b-multi-node-nodeport-54446bdbb9-5xhfd            1/1     Running   0          58s
pod-to-external-1111-56548587dc-rmj9f                    1/1     Running   0          59s
pod-to-external-fqdn-allow-google-cnp-5ff4986c89-z4h9j   1/1     Running   0          59s

Afterward, simply delete the namespace to clean-up the connectivity test:

kubectl delete ns cilium-test

Kubernetes Network Policy

By default, all Kubernetes workloads within a cluster can talk to any other workloads in the cluster, as well as any workloads outside the cluster. To enable a stronger security posture, Cilium implements the Kubernetes Network Policy specification to provide identity-aware firewalling / segmentation of Kubernetes workloads.

Network policies are defined as Kubernetes YAML specifications that are applied to a particular namespace to describe which connections should be allowed to or from a given set of pods. These network policies are “identity-aware” in that they describe workloads within the cluster using Kubernetes metadata like namespace and labels, rather than by IP address.

Basic network policies are validated as part of the above Cilium connectivity check test.
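
As a minimal illustration of the Kubernetes Network Policy API that Cilium enforces, the following policy allows pods labeled app: backend to accept ingress traffic only from pods labeled app: frontend in the same namespace. The namespace and labels are hypothetical placeholders, not anything created by EKS Anywhere.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo          # hypothetical namespace used only for this example
spec:
  podSelector:
    matchLabels:
      app: backend         # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # only these pods may connect to the backend pods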

For next steps on leveraging Network Policy, we encourage you to explore the Kubernetes Network Policy documentation and the network policy resources in the Cilium documentation.

Additional Cilium Features

Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including: Hubble observability, DNS-aware and HTTP-Aware Network Policy, Multi-cluster Routing, Transparent Encryption, and Advanced Load-balancing.

Please contact the EKS Anywhere team if you are interested in leveraging these advanced features along with EKS Anywhere.

4.2 - Cluster management

Common tasks for managing clusters.

4.2.1 - Cluster management overview

Overview of tools and interfaces for managing EKS Anywhere clusters

The content in this page describes the tools and interfaces available to an administrator after an EKS Anywhere cluster is up and running. It also describes which administrative actions are done:

  • Directly in Kubernetes itself (such as adding nodes with kubectl)
  • Through the EKS Anywhere API (such as deleting a cluster with eksctl).

Note that making direct changes to OVAs before nodes are deployed is not yet supported. However, we are working on a solution for that issue.

4.2.2 - Verify cluster

How to verify an EKS Anywhere cluster is running properly

To verify that a cluster control plane is up and running, use the kubectl command to show that the control plane pods are all running.

kubectl get po -A -l control-plane=controller-manager
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-57b99f579f-sd85g       2/2     Running   0          47m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-79cdf98fb8-ll498   2/2     Running   0          47m
capi-system                         capi-controller-manager-59f4547955-2ks8t                         2/2     Running   0          47m
capi-webhook-system                 capi-controller-manager-bb4dc9878-2j8mg                          2/2     Running   0          47m
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-6b4cb6f656-qfppd       2/2     Running   0          47m
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-bf7878ffc-rgsm8    2/2     Running   0          47m
capi-webhook-system                 capv-controller-manager-5668dbcd5-v5szb                          2/2     Running   0          47m
capv-system                         capv-controller-manager-584886b7bd-f66hs                         2/2     Running   0          47m

You may also check the status of the cluster control plane resource directly. This can be especially useful to verify clusters with multiple control plane nodes after an upgrade.

kubectl get kubeadmcontrolplanes.controlplane.cluster.x-k8s.io
NAME                       INITIALIZED   API SERVER AVAILABLE   VERSION              REPLICAS   READY   UPDATED   UNAVAILABLE
supportbundletestcluster   true          true                   v1.20.7-eks-1-20-6   1          1       1

To verify that the expected number of cluster worker nodes are up and running, use the kubectl command to show that nodes are Ready. This will confirm that the expected number of worker nodes, named following the format $CLUSTERNAME-md-0, are present.

kubectl get nodes
NAME                                           STATUS   ROLES                  AGE    VERSION
supportbundletestcluster-md-0-55bb5ccd-mrcf9   Ready    <none>                 4m   v1.20.7-eks-1-20-6
supportbundletestcluster-md-0-55bb5ccd-zrh97   Ready    <none>                 4m   v1.20.7-eks-1-20-6
supportbundletestcluster-mdrwf                 Ready    control-plane,master   5m   v1.20.7-eks-1-20-6

To test a workload in your cluster you can try deploying the Hello EKS Anywhere test application.
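
For example, using the same test manifest applied earlier in this guide, you can deploy it and confirm the Deployment becomes available:

kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
kubectl get deployment hello-eks-a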

4.2.3 - Add cluster integrations

How to add integrations to an EKS Anywhere cluster

EKS Anywhere offers AWS support for certain third-party vendor components, namely Ubuntu TLS, Cilium, and Flux. It also provides flexibility for you to integrate with your choice of tools in other areas. Below is a list of example third-party tools for your consideration.

For a full list of partner integration options, please visit Amazon EKS Anywhere Partner page .

Feature                       Example third-party tools
Ingress controller            Emissary-ingress (previously Ambassador)
Service type load balancer    Kube-Vip or MetalLB
Local container repository    Harbor
Monitoring                    Prometheus, Grafana, Datadog, or New Relic
Logging                       Splunk or Fluent Bit
Secret management             HashiCorp Vault
Policy agent                  Open Policy Agent
Service mesh                  Linkerd or Istio
Cost management               KubeCost
Etcd backup and restore       Velero
Storage                       Default storage class, any compatible CSI
NOTE: The solutions listed on this page have not been tested by AWS and are not covered by the EKS Anywhere Support Subscription.

4.2.4 - Connect cluster to console

Connect a cluster to the EKS console

The AWS EKS Connector lets you connect your EKS Anywhere cluster to the AWS EKS console, where you can see your EKS Anywhere cluster, its configuration, workloads, and their status. EKS Connector is a software agent that can be deployed on your EKS Anywhere cluster, enabling the cluster to register with the EKS console.

Visit AWS EKS Connector for details.

4.2.5 - Scale cluster

How to scale your cluster.

When you are scaling your EKS Anywhere cluster, consider the number of nodes you need for your control plane and for your data plane. Each plane can be scaled horizontally (add more nodes) or vertically (provide nodes with more resources). In each case you can scale the cluster manually, semi-automatically, or automatically.

See the Kubernetes Components documentation to learn the differences between the control plane and the data plane (worker nodes).

Manual cluster scaling

Horizontally scaling the cluster is done by increasing the node count for the control plane or worker node groups in the Cluster specification.

NOTE: If etcd is running on your control plane (the default configuration) you should scale your control plane in odd numbers (3, 5, 7…).

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: test-cluster
spec:
  controlPlaneConfiguration:
    count: 1     # increase this number to horizontally scale your control plane
...    
  workerNodeGroupConfigurations:
  - count: 1     # increase this number to horizontally scale your data plane

Vertically scaling your cluster is done by updating the machine config spec for your infrastructure provider. For a vSphere cluster an example is

NOTE: Not all providers can be vertically scaled (e.g. bare metal)

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: test-machine
  namespace: default
spec:
  diskGiB: 25       # increase this number to make the VM disk larger
  numCPUs: 2        # increase this number to add vCPUs to your VM
  memoryMiB: 8192   # increase this number to add memory to your VM

Once you have made configuration updates, you can apply the changes to your cluster. If you are adding or removing nodes, only those nodes are affected; existing nodes are left in place. If you are vertically scaling your nodes, then all nodes will be replaced, one at a time.

eksctl anywhere upgrade cluster -f cluster.yaml

Semi-automatic scaling

Scaling your cluster in a semi-automatic way still requires changing your cluster manifest configuration. In a semi-automatic mode you change your cluster spec and then have automation make the cluster changes.

You can do this by storing your cluster config manifest in git and then having a CI/CD system deploy your changes. Or you can use a GitOps controller to apply the changes. To read more about making changes with the integrated Flux GitOps controller you can read how to Manage a cluster with GitOps .
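
As a sketch, assuming your cluster spec is stored at the default GitOps path described later in this guide (clusters/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml), a semi-automatic scale-out is just an edit and a commit:

# edit workerNodeGroupConfigurations[0].count in the cluster spec, then:
git add clusters/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml
git commit -m 'Scale worker nodes'
git push origin main

The GitOps controller running in the cluster then reconciles the new count; the Manage cluster with GitOps section later in this document walks through the same workflow step by step.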

Automatic scaling

Automatic cluster scaling is designed for worker nodes; it is not advised to automatically scale your control plane. Typically, autoscaling is done with a controller such as the Kubernetes Cluster Autoscaler . This approach has some caveats in an on-premises environment.

Automatic scaling does not work with some providers, such as Docker or bare metal. An EKS Anywhere cluster is currently not intended to be used with the Kubernetes Cluster Autoscaler, so that the autoscaler does not interfere with built-in controllers or cause unexpected machine thrashing.

In future versions of EKS Anywhere we will be adding autoscaling support for specific providers.

4.2.6 - Upgrade cluster

How to perform a cluster version upgrade

EKS Anywhere provides the command upgrade, which allows you to upgrade various aspects of your EKS Anywhere cluster. When you run eksctl anywhere upgrade cluster -f ./cluster.yaml, EKS Anywhere runs a set of preflight checks to ensure your cluster is ready to be upgraded. EKS Anywhere then performs the upgrade, modifying your cluster to match the updated specification. The upgrade command also upgrades core components of EKS Anywhere and lets the user enjoy the latest features, bug fixes and security patches.

Minor Version Upgrades

Kubernetes has minor releases three times per year and EKS Distro follows a similar cadence. EKS Anywhere will add support for new EKS Distro releases as they are released, and you are advised to upgrade your cluster when possible.

Cluster upgrades are not handled automatically and require administrator action to modify the cluster specification and perform an upgrade. You are advised to upgrade your clusters in development environments first and verify your workloads and controllers are compatible with the new version.

Cluster upgrades are performed in place using a rolling process (similar to Kubernetes Deployments). Upgrades can only happen one minor version at a time (e.g. 1.20 -> 1.21). Control plane components will be upgraded before worker nodes.

A new VM is created with the new version and then an old VM is removed. This happens one at a time until all the control plane components have been upgraded.
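
To observe the rolling replacement while an upgrade is in progress, you can watch the nodes from another terminal using the cluster's kubeconfig; during the rollover the VERSION column will show a mix of old and new Kubernetes versions. A quick sketch:

kubectl get nodes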

Core component upgrades

EKS Anywhere upgrade also supports upgrading the following core components:

  • Core CAPI
  • CAPI providers
  • Cert-manager
  • Etcdadm CAPI provider
  • EKS Anywhere controllers and CRDs
  • GitOps controllers (Flux) - this is an optional component, will be upgraded only if specified

The latest versions of these core EKS Anywhere components are embedded into a bundles manifest that the CLI uses to fetch the latest versions and image builds needed for each component upgrade. The command detects both component version changes and new builds of the same versioned component. If there is a new Kubernetes version that is going to get rolled out, the core components get upgraded before the Kubernetes version. Irrespective of a Kubernetes version change, the upgrade command will always upgrade the internal EKS Anywhere components mentioned above to their latest available versions. All upgrade changes are backwards compatible.

Performing a cluster upgrade

To perform a cluster upgrade you can modify your cluster specification kubernetesVersion field to the desired version.

As an example, to upgrade a cluster with version 1.20 to 1.21 you would change your spec

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: dev
spec:
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: "198.18.99.49"
    machineGroupRef:
      kind: VSphereMachineConfig
      name: dev
      ...
  kubernetesVersion: "1.21"
      ...

NOTE: If you have a custom machine image for your nodes you may also need to update your VSphereMachineConfig with a new template.

and then you will run the command

eksctl anywhere upgrade cluster -f cluster.yaml

This will upgrade the cluster specification (if specified), upgrade the core components to the latest available versions and apply the changes using the provisioner controllers.

Example output:

βœ… control plane ready
βœ… worker nodes ready
βœ… nodes ready
βœ… cluster CRDs ready
βœ… cluster object present on workload cluster
βœ… upgrade cluster kubernetes version increment
βœ… validate immutable fields
πŸŽ‰ all cluster upgrade preflight validations passed
Performing provider setup and validations
Pausing EKS-A cluster controller reconcile
Pausing Flux kustomization
GitOps field not specified, pause flux kustomization skipped
Creating bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Moving cluster management from workload to bootstrap cluster
Upgrading workload cluster
Moving cluster management from bootstrap to workload cluster
Applying new EKS-A cluster resource; resuming reconcile
Resuming EKS-A controller reconciliation
Updating Git Repo with new EKS-A cluster spec
GitOps field not specified, update git repo skipped
Forcing reconcile Git repo with latest commit
GitOps not configured, force reconcile flux git repo skipped
Resuming Flux kustomization
GitOps field not specified, resume flux kustomization skipped

Upgradeable Cluster Attributes

EKS Anywhere upgrade supports upgrading more than just the kubernetesVersion, allowing you to upgrade a number of fields simultaneously with the same procedure.

Upgradeable Attributes

Cluster:

  • kubernetesVersion
  • controlPlaneConfiguration.count
  • controlPlaneConfiguration.machineGroupRef.name
  • workerNodeGroupConfigurations.count
  • workerNodeGroupConfigurations.machineGroupRef.name
  • externalEtcdConfiguration.machineGroupRef.name

VSphereMachineConfig:

  • VSphereMachineConfig.datastore
  • VSphereMachineConfig.diskGiB
  • VSphereMachineConfig.folder
  • VSphereMachineConfig.memoryMiB
  • VSphereMachineConfig.numCPUs
  • VSphereMachineConfig.resourcePool
  • VSphereMachineConfig.template

Troubleshooting

Attempting to upgrade a cluster by more than one minor version at a time will result in the following error.

βœ… validate immutable fields
❌ validation failed    {"validation": "Upgrade preflight validations", "error": "validation failed with 1 errors: WARNING: version difference between upgrade version (1.21) and server version (1.19) do not meet the supported version increment of +1", "remediation": ""}
Error: failed to upgrade cluster: validations failed

For more errors you can see the troubleshooting section .

4.2.7 - Manage cluster with GitOps

Use Flux to manage clusters with GitOps

GitOps Support (optional)

EKS Anywhere supports a GitOps workflow for the management of your cluster.

When you create a cluster with GitOps enabled, EKS Anywhere will automatically commit your cluster configuration to the provided GitHub repository and install a GitOps toolkit on your cluster which watches that committed configuration file. You can then manage the scale of the cluster by making changes to the version controlled cluster configuration file and committing the changes. Once a change is detected by the GitOps controller running in your cluster, the scale of the cluster will be adjusted to match the committed configuration file.

If you’d like to learn more about GitOps and the associated best practices, check out this introduction from Weaveworks .

NOTE: Installing a GitOps controller needs to be done during cluster creation. In the event that GitOps installation fails, EKS Anywhere cluster creation will continue.

Supported Cluster Properties

Currently, you can manage a subset of cluster properties with GitOps:

Cluster:

  • Cluster.workerNodeGroupConfigurations[0].count
  • Cluster.workerNodeGroupConfigurations[0].machineGroupRef.name

Worker Nodes:

  • VSphereMachineConfig.datastore
  • VSphereMachineConfig.diskGiB
  • VSphereMachineConfig.folder
  • VSphereMachineConfig.memoryMiB
  • VSphereMachineConfig.numCPUs
  • VSphereMachineConfig.resourcePool

Any other changes to the cluster configuration in the git repository will be ignored. If an immutable field is changed in the Git repository, there are two ways to find the error message:

  1. If a notification webhook is set up, check the error message in the notification channel.
  2. Check the Flux Kustomization Controller log: kubectl logs -f -n flux-system kustomize-controller-****** for an error message containing text similar to Invalid value: 1: field is immutable

Getting Started with EKS Anywhere GitOps

In order to use GitOps to manage cluster scaling, you need a couple of things:

Create a GitHub Personal Access Token

Create a Personal Access Token (PAT) to access your provided github repository. It must be scoped for all repo permissions.

NOTE: GitOps configuration only works with hosted github.com and will not work on self hosted GitHub Enterprise instances.

This PAT should have at least the following permissions:

GitHub PAT permissions

NOTE: The PAT must belong to the owner of the repository or, if using an organization as the owner, the creator of the PAT must have repo permission in that organization.

You need to set your PAT as the environment variable $EKSA_GITHUB_TOKEN to use it during cluster creation:

export EKSA_GITHUB_TOKEN=ghp_MyValidPersonalAccessTokenWithRepoPermissions

Create GitOps configuration repo

If you have an existing repo you can set that as your repository name in the configuration. If you specify a repo in your GitOpsConfig which does not exist EKS Anywhere will create it for you. If you would like to create a new repo you can click here to create a new repo.

If your repository contains multiple cluster specification files, store them in subfolders and specify the configuration path in your cluster specification.
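
As a sketch of what that can look like, assuming your EKS Anywhere release supports the optional clusterConfigPath field on the GitOpsConfig flux.github block (check the GitOps configuration reference for your version):

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-cluster-name
spec:
  flux:
    github:
      personal: true
      repository: mygithubrepository
      owner: mygithubusername
      clusterConfigPath: clusters/my-cluster-name   # assumed field: subfolder in the repo holding this cluster's spec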

In order to accommodate the management cluster feature, the CLI will now structure the repo directory following a new convention:

clusters
└── management-cluster
    β”œβ”€β”€ flux-system
    β”‚   └── ...
    β”œβ”€β”€ management-cluster
    β”‚   └── eksa-system
    β”‚       └── eksa-cluster.yaml
    β”œβ”€β”€ workload-cluster-1
    β”‚   └── eksa-system
    β”‚       └── eksa-cluster.yaml
    └── workload-cluster-2
        └── eksa-system
            └── eksa-cluster.yaml

By default, Flux kustomization reconciles at the management cluster’s root level (./clusters/management-cluster), so both the management cluster and all of the workload clusters it manages are synced.
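
To see what Flux is reconciling and from which path, you can inspect the Flux source and kustomization objects on the management cluster. A quick sketch; the object names are created by EKS Anywhere and may differ in your environment:

kubectl get gitrepositories -n flux-system
kubectl get kustomizations -n flux-system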

Example GitOps cluster configuration

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mynewgitopscluster
spec:
... # collapsed cluster spec fields
# Below added for gitops support
  gitOpsRef:
    kind: GitOpsConfig
    name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-cluster-name
spec:
  flux:
    github:
      personal: true
      repository: mygithubrepository
      owner: mygithubusername

Create a GitOps enabled cluster

Generate your cluster configuration and add the GitOps configuration. For a full spec reference see the Cluster Spec reference .

NOTE: After your cluster is created the cluster configuration will automatically be committed to your git repo.

  1. Create an EKS Anywhere cluster with GitOps enabled.

    CLUSTER_NAME=gitops
    eksctl anywhere create cluster -f ${CLUSTER_NAME}.yaml
    

Test GitOps controller

After your cluster is created you can test the GitOps controller by modifying the cluster specification.

  1. Clone your git repo and modify the cluster specification. The default path for the cluster file is:

    clusters/$CLUSTER_NAME/eksa-system/eksa-cluster.yaml
    
  2. Modify the workerNodeGroupConfigurations[0].count field with your desired changes.

  3. Commit the file to your git repository

    git add eksa-cluster.yaml
    git commit -m 'Scaling nodes for test'
    git push origin main
    
  4. The flux controller will automatically make the required changes.

    If you updated your node count you can use this command to see the current node state.

    kubectl get nodes 
    

4.2.8 - Delete cluster

How to delete an EKS Anywhere cluster

To delete a cluster you will need:

  • cluster name or cluster configuration
  • kubeconfig for your cluster

Run the following commands to delete the cluster:

  1. Set up CLUSTER_NAME and KUBECONFIG environment variables:

    export CLUSTER_NAME=dev
    export KUBECONFIG=${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  2. Run the delete command:

  • If you are running the delete command from the directory which has the cluster folder with ${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.yaml:

    eksctl anywhere delete cluster ${CLUSTER_NAME}
    
  • Otherwise, use this command to manually specify the clusterconfig file path:

    export CONFIG_FILE=<path-to-config-file>
    eksctl anywhere delete cluster -f ${CONFIG_FILE}
    

Example output:

Performing provider setup and validations
Creating management cluster
Installing cluster-api providers on management cluster
Moving cluster management from workload cluster
Deleting workload cluster
Clean up Git Repo
GitOps field not specified, clean up git repo skipped
πŸŽ‰ Cluster deleted!

This will delete all of the VMs that were created in your provider. If your workloads created external resources such as external DNS entries or load balancer endpoints you may need to delete those resources manually.

4.3 - Cluster troubleshooting

Troubleshooting your EKS Anywhere Cluster

4.3.1 - Generating a Support Bundle

Using the Support Bundle with your EKS Anywhere Cluster

This guide covers the use of the EKS Anywhere Support Bundle for troubleshooting and support. This allows you to gather cluster information, save it to your administrative machine, and perform analysis of the results.

EKS Anywhere leverages troubleshoot.sh to collect and analyze kubernetes cluster logs, cluster resource information, and other relevant debugging information.

EKS Anywhere has two Support Bundle commands:

eksctl anywhere generate support-bundle will execute a support bundle on your cluster, collecting relevant information, archiving it locally, and performing analysis of the results.

eksctl anywhere generate support-bundle-config will generate a Support Bundle config yaml file for you to customize.

Do not add personally identifiable information (PII) or other confidential or sensitive information to your support bundle. If you provide the support bundle to get support from AWS, it will be accessible to other AWS services, including AWS Support.

Collecting a Support Bundle and running analyzers

eksctl anywhere generate support-bundle

generate support-bundle will allow you to quickly collect relevant logs and cluster resources and save them locally in an archive file. This archive can then be used to aid in further troubleshooting and debugging.

If you provide a cluster configuration file containing your cluster spec using the -f flag, generate support-bundle will customize the auto-generated support bundle collectors and analyzers to match the state of your cluster.

If you provide a support bundle configuration file using the --bundle-config flag, for example one generated with generate support-bundle-config, generate support-bundle will use the provided configuration when collecting information from your cluster and analyzing the results.

Flags:
      --bundle-config string   Bundle Config file to use when generating support bundle
  -f, --filename string        Filename that contains EKS-A cluster configuration
  -h, --help                   help for support-bundle
      --since string           Collect pod logs in the latest duration like 5s, 2m, or 3h.
      --since-time string      Collect pod logs after a specific datetime(RFC3339) like 2021-06-28T15:04:05Z
  -w, --w-config string        Kubeconfig file to use when creating support bundle for a workload cluster

Collecting and analyzing a bundle

You only need to run a single command to generate a support bundle, collect information and analyze the output: eksctl anywhere generate support-bundle -f myCluster.yaml

This command will collect the information from your cluster and run an analysis of the collected information.

The collected information will be saved to your local disk in an archive which can be used for debugging and obtaining additional in-depth support.

The analysis will be printed to your console.

Collect phase:

$ ./bin/eksctl anywhere generate support-bundle -f ./testcluster100.yaml
 Collecting support bundle cluster-info
 Collecting support bundle cluster-resources
 Collecting support bundle secret
 Collecting support bundle logs
 Analyzing support bundle

Analysis phase:

 Analyze Results
------------
Check PASS
Title: gitopsconfigs.anywhere.eks.amazonaws.com
Message: gitopsconfigs.anywhere.eks.amazonaws.com is present on the cluster

------------
Check PASS
Title: vspheredatacenterconfigs.anywhere.eks.amazonaws.com
Message: vspheredatacenterconfigs.anywhere.eks.amazonaws.com is present on the cluster

------------
Check PASS
Title: vspheremachineconfigs.anywhere.eks.amazonaws.com
Message: vspheremachineconfigs.anywhere.eks.amazonaws.com is present on the cluster

------------
Check PASS
Title: capv-controller-manager Status
Message: capv-controller-manager is running.

------------
Check PASS
Title: capv-controller-manager Status
Message: capv-controller-manager is running.

------------
Check PASS
Title: coredns Status
Message: coredns is running.

------------
Check PASS
Title: cert-manager-webhook Status
Message: cert-manager-webhook is running.

------------
Check PASS
Title: cert-manager-cainjector Status
Message: cert-manager-cainjector is running.

------------
Check PASS
Title: cert-manager Status
Message: cert-manager is running.

------------
Check PASS
Title: capi-kubeadm-control-plane-controller-manager Status
Message: capi-kubeadm-control-plane-controller-manager is running.

------------
Check PASS
Title: capi-kubeadm-bootstrap-controller-manager Status
Message: capi-kubeadm-bootstrap-controller-manager is running.

------------
Check PASS
Title: capi-controller-manager Status
Message: capi-controller-manager is running.

------------
Check PASS
Title: capi-controller-manager Status
Message: capi-controller-manager is running.

------------
Check PASS
Title: capi-kubeadm-control-plane-controller-manager Status
Message: capi-kubeadm-control-plane-controller-manager is running.

------------
Check PASS
Title: capi-kubeadm-control-plane-controller-manager Status
Message: capi-kubeadm-control-plane-controller-manager is running.

------------
Check PASS
Title: capi-kubeadm-bootstrap-controller-manager Status
Message: capi-kubeadm-bootstrap-controller-manager is running.

------------
Check PASS
Title: clusters.anywhere.eks.amazonaws.com
Message: clusters.anywhere.eks.amazonaws.com is present on the cluster

------------
Check PASS
Title: bundles.anywhere.eks.amazonaws.com
Message: bundles.anywhere.eks.amazonaws.com is present on the cluster

------------

Archive phase:

a support bundle has been created in the current directory:	{"path": "support-bundle-2021-09-02T19_29_41.tar.gz"}

Generating a custom Support Bundle configuration for your EKS Anywhere Cluster

EKS Anywhere will automatically generate a support bundle based on your cluster configuration; however, if you’d like to customize the support bundle to collect specific information, you can generate your own support bundle configuration yaml for EKS Anywhere to run on your cluster.

eksctl anywhere generate support-bundle-config will generate a default support bundle configuration and print it as yaml.

eksctl anywhere generate support-bundle-config -f myCluster.yaml will generate a support bundle configuration customized to your cluster and print it as yaml.

To run a customized support bundle configuration yaml file on your cluster, save this output to a file and run the command eksctl anywhere generate support-bundle using the flag --bundle-config.

eksctl anywhere generate support-bundle-config
Flags:
  -f, --filename string   Filename that contains EKS-A cluster configuration
  -h, --help              help for support-bundle-config
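
Putting the two commands together, a typical customization workflow looks like the following (bundle.yaml is an arbitrary local file name):

eksctl anywhere generate support-bundle-config -f myCluster.yaml > bundle.yaml
# edit bundle.yaml to add or remove collectors and analyzers
eksctl anywhere generate support-bundle --bundle-config bundle.yaml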

4.3.2 - Troubleshooting

Troubleshooting EKS Anywhere clusters

This guide covers some generic troubleshooting techniques and then covers more detailed examples. You may want to search this document for a fragment of the error you are seeing.

Increase eksctl anywhere output

If you’re having trouble running eksctl anywhere you may get more verbose output with the -v 6 option. The highest level of verbosity is -v 9 and the default level of logging is equivalent to -v 0.
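
For example, to run a cluster creation with more verbose logging (the file name is a placeholder):

eksctl anywhere create cluster -f cluster.yaml -v 6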

Cannot run docker commands

The EKS Anywhere binary requires access to run docker commands without using sudo. If you’re using a Linux distribution you will need to be using Docker 20.x.x and your user needs to be part of the docker group.

To add your user to the docker group, run:

sudo usermod -a -G docker $USER

Now you need to log out and back in to get the new group permissions.

Minimum requirements for docker version have not been met

Error: failed to validate docker: minimum requirements for docker version have not been met. Install Docker version 20.x.x or above

Ensure you are running Docker 20.x.x for example:

% docker --version
Docker version 20.10.6, build 370c289

ECR access denied

Error: failed to create cluster: unable to initialize executables: failed to setup eks-a dependencies: Error response from daemon: pull access denied for public.ecr.aws/***/cli-tools, repository does not exist or may require 'docker login': denied: Your authorization token has expired. Reauthenticate and try again.

All images needed for EKS Anywhere are public and do not need authentication. Old cached credentials could trigger this error. Remove cached credentials by running:

docker logout public.ecr.aws

EKSA_VSPHERE_USERNAME is not set or is empty

❌ Validation failed	{"validation": "vsphere Provider setup is valid", "error": "failed setup and validations: EKSA_VSPHERE_USERNAME is not set or is empty", "remediation": ""}

Two environment variables need to be set and exported in your environment to create clusters successfully. Be sure to use single quotes around your user name and password to avoid shell manipulation of these values.

export EKSA_VSPHERE_USERNAME='<vSphere-username>'
export EKSA_VSPHERE_PASSWORD='<vSphere-password>'

vSphere authentication failed

❌ Validation failed	{"validation": "vsphere Provider setup is valid", "error": "error validating vCenter setup: vSphere authentication failed: govc: ServerFaultCode: Cannot complete login due to an incorrect user name or password.\n", "remediation": ""}
Error: failed to create cluster: validations failed

Two environment variables need to be set and exported in your environment to create clusters successfully. Be sure to use single quotes around your user name and password to avoid shell manipulation of these values.

export EKSA_VSPHERE_USERNAME='<vSphere-username>'
export EKSA_VSPHERE_PASSWORD='<vSphere-password>'

Error: old cluster config file exists under my-cluster, please use a different clusterName to proceed

Error: old cluster config file exists under my-cluster, please use a different clusterName to proceed

The my-cluster directory already exists in the current directory. Either use a different cluster name or move the directory.

failed to create cluster: node(s) already exist for a cluster with the name

Performing provider setup and validations
Creating new bootstrap cluster
Error create bootstrapcluster	{"error": "error creating bootstrap cluster: error executing create cluster: ERROR: failed to create cluster: node(s) already exist for a cluster with the name \"cluster-name\"\n, try rerunning with --force-cleanup to force delete previously created bootstrap cluster"}
Failed to create cluster	{"error": "error creating bootstrap cluster: error executing create cluster: ERROR: failed to create cluster: node(s) already exist for a cluster with the name \"cluster-name\"\n, try rerunning with --force-cleanup to force delete previously created bootstrap cluster"}

A bootstrap cluster already exists with the same name. If you are sure the cluster is not being used, you may use the --force-cleanup option to eksctl anywhere to delete the cluster or you may delete the cluster with kind delete cluster --name <cluster-name>. If you do not have kind installed, you may use docker stop to stop the docker container running the KinD cluster.

Bootstrap cluster fails to come up

If your bootstrap cluster has problems you may get detailed logs by looking at the files created under the ${CLUSTER_NAME}/logs folder. The capv-controller-manager log file will surface issues with vsphere specific configuration while the capi-controller-manager log file might surface other generic issues with the cluster configuration passed in.

You may also access the logs from your bootstrap cluster directly as below:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
kubectl logs -f -n capv-system -l control-plane="controller-manager" -c manager

It also might be useful to start a shell session in the docker container running the bootstrap cluster by running docker ps to find the container ID and then docker exec -it <container-id> bash to enter the kind container.

Memory or disk resource problem

There are various disk and memory issues that can cause problems. Make sure Docker is configured with enough memory, and that the system-wide Docker memory configuration provides enough RAM for the bootstrap cluster.

Make sure you do not have unneeded KinD clusters running kind get clusters. You may want to delete unneeded clusters with kind delete cluster --name <cluster-name>. If you do not have kind installed, you may install it from https://kind.sigs.k8s.io/ or use docker ps to see the KinD clusters and docker stop to stop the cluster.

Make sure you do not have any unneeded Docker containers running with docker ps. Terminate any unneeded Docker containers.

Make sure Docker isn’t out of disk resources. If you don’t have any other docker containers running you may want to run docker system prune to clean up disk space.

You may want to restart Docker. To restart Docker on Ubuntu sudo systemctl restart docker.

Issues detected with selected template

Issues detected with selected template. Details: - -1:-1:VALUE_ILLEGAL: No supported hardware versions among [vmx-15]; supported: [vmx-04, vmx-07, vmx-08, vmx-09, vmx-10, vmx-11, vmx-12, vmx-13].

Our upstream dependency on CAPV makes it a requirement that you use vSphere 6.7 update 3 or newer. Make sure your ESXi hosts are also up to date.

Waiting for cert-manager to be available… Error: timed out waiting for the condition

Failed to create cluster {"error": "error initializing capi resources in cluster: error executing init: Fetching providers\nInstalling cert-manager Version=\"v1.1.0\"\nWaiting for cert-manager to be available...\nError: timed out waiting for the condition\n"}

This is likely a Memory or disk resource problem . You can also try using techniques from Generic cluster unavailable .

Timed out waiting for the condition on deployments/capv-controller-manager

Failed to create cluster {"error": "error initializing capi in bootstrap cluster: error waiting for capv-controller-manager in namespace capv-system: error executing wait: error: timed out waiting for the condition on deployments/capv-controller-manager\n"}

Debug this problem using techniques from Generic cluster unavailable .

Timed out waiting for the condition on clusters/

Failed to create cluster {"error": "error waiting for workload cluster control plane to be ready: error executing wait: error: timed out waiting for the condition on clusters/test-cluster\n"}

This can be an issue with the number of control plane and worker node replicas defined in your cluster yaml file. Try to start off with a smaller number (3 or 5 is recommended for control plane) in order to bring up the cluster.

This error can also occur because your vCenter server is using self-signed certificates and you have insecure set to true in the generated cluster yaml. To check if this is the case, run the commands below:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
kubectl get machines

If all the machines are in Provisioning phase, this is most likely the issue. To resolve the issue, set insecure to false and thumbprint to the TLS thumbprint of your vCenter server in the cluster yaml and try again.
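
For reference, those two fields live on the VSphereDatacenterConfig object in your cluster yaml. The values below are placeholders, and the thumbprint should be the TLS thumbprint of your vCenter server's certificate; see the vSphere configuration reference later in this document for the full set of fields.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
  name: my-cluster-datacenter
spec:
  datacenter: "<datacenter>"
  server: "<vCenter-server>"
  network: "<network>"
  insecure: false
  thumbprint: "<vCenter-TLS-thumbprint>"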

"msg"="discovered IP address"

The log message above can also appear with the control plane’s address as its value, either in the ${CLUSTER_NAME}/logs/capv-controller-manager.log file or in the capv-controller-manager pod log, which can be extracted with the following command:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig
kubectl logs -f -n capv-system -l control-plane="controller-manager" -c manager

Make sure you are choosing an IP in your network range that does not conflict with other VMs. https://anywhere.eks.amazonaws.com/docs/reference/clusterspec/vsphere/#controlplaneconfigurationendpointhost-required

The connection to the server localhost:8080 was refused

Performing provider setup and validations
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Error initializing capi in bootstrap cluster	{"error": "error waiting for capi-kubeadm-control-plane-controller-manager in namespace capi-kubeadm-control-plane-system: error executing wait: The connection to the server localhost:8080 was refused - did you specify the right host or port?\n"}
Failed to create cluster	{"error": "error waiting for capi-kubeadm-control-plane-controller-manager in namespace capi-kubeadm-control-plane-system: error executing wait: The connection to the server localhost:8080 was refused - did you specify the right host or port?\n"}

This is likely a Memory or disk resource problem .

Generic cluster unavailable

Troubleshoot more by inspecting bootstrap cluster or workload cluster (depending on the stage of failure) using kubectl commands.

kubectl get pods -A --kubeconfig=<kubeconfig>
kubectl get nodes --kubeconfig=<kubeconfig>
kubectl logs <podname> -n <namespace> --kubeconfig=<kubeconfig>
....

Capv troubleshooting guide: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/troubleshooting.md#debugging-issues

Workload VM is created on vSphere but can not power on

A similar issue is the VM does power on but does not show any logs on the console and does not have any IPs assigned.

This issue can occur if the resourcePool that the VM uses does not have enough CPU or memory resources to run a VM. To resolve this issue, increase the CPU and/or memory reservations or limits for the resourcePool.

Workload VMs start but Kubernetes not working properly

If the workload VMs start, but Kubernetes does not start or is not working properly, you may want to log onto the VMs and check the logs there. If Kubernetes is at least partially working, you may use kubectl to get the IPs of the nodes:

kubectl get nodes -o=custom-columns="NAME:.metadata.name,IP:.status.addresses[2].address"

If Kubernetes is not working at all, you can get the IPs of the VMs from vCenter or using govc.

When you get the external IP you can ssh into the nodes using the private ssh key associated with the public ssh key you provided in your cluster configuration:

ssh -i <ssh-private-key> <ssh-username>@<external-IP>

create command stuck on Creating new workload cluster

There can be a few reasons why the create command is stuck on Creating new workload cluster for over 30 minutes. First, check the vSphere UI to see if any workload VMs have been created.

If any VMs are created, check to see if they have any IPv4 IPs assigned to them.

If there are no IPv4 IPs assigned to them, this is most likely because you don’t have a DHCP server configured for the network configured in the cluster config yaml. Ensure that you have DHCP running and run the create command again.

If there are any IPv4 IPs assigned, check if one of the VMs has the control plane IP specified in Cluster.spec.controlPlaneConfiguration.endpoint.host in the clusterconfig yaml. If this IP is not present on any control plane VM, make sure the network has access to the following endpoints:

  • public.ecr.aws
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.github.com (only if GitOps is enabled)

If no VMs are created, check the capi-controller-manager, capv-controller-manager and capi-kubeadm-control-plane-controller-manager logs using the commands mentioned in Generic cluster unavailable section.

Cluster Deletion Fails

If cluster deletion fails, you may need to manually delete the VMs associated with the cluster. The VMs should be named with the cluster name. You can power off and delete from disk using the vCenter web user interface. You may also use govc:

govc find -type VirtualMachine --name '<cluster-name>*'

This will give you a list of virtual machines that should be associated with your cluster. For each of the VMs you want to delete run:

VM_NAME=vm-to-destroy
govc vm.power -off -force $VM_NAME
govc object.destroy $VM_NAME

Troubleshooting GitOps integration

Cluster creation failure leaves outdated cluster configuration in GitHub.com repository

Failed cluster creation can sometimes leave behind cluster configuration files committed to your GitHub.com repository. Make sure to delete these configuration files before you re-try eksctl anywhere create cluster. If these configuration files are not deleted, GitOps installation will fail but cluster creation will continue.

They’ll generally be located under the directory clusters/$CLUSTER_NAME if you used the default path in your flux gitops config. Delete the entire directory named $CLUSTER_NAME.
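
A sketch of the cleanup, assuming the default path and that your local clone tracks the repository's main branch:

git rm -r clusters/$CLUSTER_NAME
git commit -m 'Remove stale EKS Anywhere cluster configuration'
git push origin main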

Cluster creation failure leaves empty GitHub.com repository

Failed cluster creation can sometimes leave behind a completely empty GitHub.com repository. This can cause the GitOps installation to fail if you re-try the creation of a cluster which uses this repository. If cluster creation failure leaves behind an empty github repository, please manually delete the created GitHub.com repository before attempting cluster creation again.

Changes not syncing to cluster

Please remember that the only fields currently supported for GitOps are:

Cluster

  • Cluster.workerNodeGroupConfigurations[0].count
  • Cluster.workerNodeGroupConfigurations[0].machineGroupRef.name

Worker Nodes

  • VsphereMachineConfig.diskGiB
  • VsphereMachineConfig.numCPUs
  • VsphereMachineConfig.memoryMiB
  • VsphereMachineConfig.template
  • VsphereMachineConfig.datastore
  • VsphereMachineConfig.folder
  • VsphereMachineConfig.resourcePool

If you’ve changed these fields and they’re not syncing to the cluster as you’d expect, check out the logs of the pod in the source-controller deployment in the flux-system namespace. If Flux is having a problem connecting to your GitHub repository, the problem will be logged here.

$ kubectl get pods -n flux-system
NAME                                       READY   STATUS    RESTARTS   AGE
helm-controller-7d644b8547-k8wfs           1/1     Running   0          4h15m
kustomize-controller-7cf5875f54-hs2bt      1/1     Running   0          4h15m
notification-controller-776f7d68f4-v22kp   1/1     Running   0          4h15m
source-controller-7c4555748d-7c7zb         1/1     Running   0          4h15m
$ kubectl logs source-controller-7c4555748d-7c7zb -n flux-system

A well behaved flux pod will simply log the ongoing reconciliation process, like so:

{"level":"info","ts":"2021-07-01T19:58:51.076Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 902.725344ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-07-01T19:59:52.012Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 935.016754ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-07-01T20:00:52.982Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 970.03174ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}

If there are issues connecting to GitHub, you’ll instead see exceptions in the source-controller log stream. For example, if the deploy key used by flux has been deleted, you’d see something like this:

{"level":"error","ts":"2021-07-01T20:04:56.335Z","logger":"controller.gitrepository","msg":"Reconciler error","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system","error":"unable to clone 'ssh://git@github.com/youruser/gitops-vsphere-test', error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"}

Other ways to troubleshoot GitOps integration

If you’re still having problems after deleting any empty EKS Anywhere-created GitHub repositories and reviewing the source-controller logs, you can look for additional issues by checking the deployments in the flux-system and eksa-system namespaces and ensuring they’re running and their log streams are free from exceptions.

$ kubectl get deployments -n flux-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
helm-controller           1/1     1            1           4h13m
kustomize-controller      1/1     1            1           4h13m
notification-controller   1/1     1            1           4h13m
source-controller         1/1     1            1           4h13m
$ kubectl get deployments -n eksa-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
eksa-controller-manager   1/1     1            1           4h13m

5 - Reference

Reference documents for EKS Anywhere configuration

5.1 - Config

Config reference for EKS Anywhere clusters

5.1.1 - vSphere configuration

Full EKS Anywhere configuration reference for a VMware vSphere cluster.

This is a generic template with detailed descriptions below for reference

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   clusterNetwork:
      cni: "cilium"
      pods:
         cidrBlocks:
            - 192.168.0.0/16
      services:
         cidrBlocks:
            - 10.96.0.0/12
   controlPlaneConfiguration:
      count: 1
      endpoint:
         host: ""
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-machines
   datacenterRef:
      kind: VSphereDatacenterConfig
      name: my-cluster-datacenter
   externalEtcdConfiguration:
     count: 3
     machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-machines
   kubernetesVersion: "1.21"
   workerNodeGroupConfigurations:
   - count: 1
     machineGroupRef:
       kind: VSphereMachineConfig
       name: my-cluster-machines

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
   name: my-cluster-datacenter
spec:
  datacenter: ""
  server: ""
  network: ""
  insecure:
  thumbprint: ""

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
   name: my-cluster-machines
spec:
  diskGiB:
  datastore: ""
  folder: ""
  numCPUs:
  memoryMiB:
  osFamily: ""
  resourcePool: ""
  storagePolicyName: ""
  template: ""
  users:
  - name: ""
    sshAuthorizedKeys:
    - ""

Cluster Fields

The following fields are used to configure the Cluster object.

name (required)

Name of your cluster (my-cluster-name in this example)

clusterNetwork (required)

Specific network configuration for your Kubernetes cluster.

clusterNetwork.cni (required)

CNI plugin to be installed in the cluster. The only supported value at the moment is cilium.

clusterNetwork.pods.cidrBlocks[0] (required)

Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted.

clusterNetwork.services.cidrBlocks[0] (required)

Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. Please note that at this time only 1 node group is permitted.

workerNodeGroupConfigurations[0].count (required)

Number of worker nodes

workerNodeGroupConfigurations[0].machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

externalEtcdConfiguration.count

Number of etcd members

externalEtcdConfiguration.machineGroupRef

Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See VSphereMachineConfig Fields below.

datacenterRef

Refers to the Kubernetes object with vsphere environment specific configuration. See VSphereDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.20, 1.21

VSphereDatacenterConfig Fields

datacenter (required)

The vSphere datacenter to deploy the EKS Anywhere cluster on. For example SDDC-Datacenter.

network (required)

The VM network to deploy your EKS Anywhere cluster on.

server (required)

The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint must be set or insecure must be set to true.

insecure (optional)

Set insecure to true if the vCenter server does not have a valid certificate. (Default: false)

thumbprint (required if insecure=false)

The SHA1 thumbprint of the vCenter server certificate, which is only required if you have a self-signed certificate.

There are several ways to obtain your vCenter thumbprint. The easiest way is if you have govc installed, you can run:

govc about.cert -thumbprint -k

Another way is from the vCenter web UI: go to Administration/Certificate Management and click to view the details of the machine certificate. Note that the format shown there does not exactly match the required format; you will need to add a : separator between each pair of hexadecimal characters.
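
For example, a hypothetical thumbprint displayed in the UI as B08C9D2A45F17E630B5D91C43EAA10F2586CD701 would be supplied to EKS Anywhere as:

B0:8C:9D:2A:45:F1:7E:63:0B:5D:91:C4:3E:AA:10:F2:58:6C:D7:01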

Another way to get the thumbprint is to use this command with your server's certificate in a file named ca.crt:

openssl x509 -sha1 -fingerprint -in ca.crt -noout

If you specify the wrong thumbprint, an error message will be printed with the expected thumbprint. If no valid certificate is being used, insecure must be set to true.

VSphereMachineConfig Fields

memoryMiB (optional)

Size of RAM, in MiB, on virtual machines (Default: 8192)

numCPUs (optional)

Number of CPUs on virtual machines (Default: 2)

osFamily (optional)

Operating System on virtual machines. Permitted values: ubuntu, bottlerocket (Default: bottlerocket)

diskGiB (optional)

Size of disk, in GiB, on virtual machines if snapshots aren’t included (Default: 25)

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is ec2-user if osFamily=bottlerocket and capv if osFamily=ubuntu.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

template (optional)

The VM template to use for your EKS Anywhere cluster. This template was created when you imported the OVA file into vSphere. This is a required field if you are using Bottlerocket OVAs.

datastore (required)

The vSphere datastore to deploy your EKS Anywhere cluster on.

folder (required)

The VM folder for your EKS anywhere cluster VMs. This allows you to organize your VMs. If the folder does not exist, it will be created for you. If the folder is blank, the VMs will go in the root folder.

resourcePool (required)

The vSphere resource pool for the VMs in your EKS Anywhere cluster. Examples of resource pool values include:

  • If there is no resource pool: /<datacenter>/host/<cluster-name>/Resources
  • If there is a resource pool: /<datacenter>/host/<cluster-name>/Resources/<resource-pool-name>
  • The wildcard option */Resources also often works.
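
If you are unsure of the exact inventory paths, and you have govc installed and configured, one way to list the resource pools and datastores in your environment is (a sketch, not an EKS Anywhere requirement):

govc find / -type p
govc find / -type s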

storagePolicyName (optional)

The storage policy name associated with your VMs.

5.1.2 - etcd configuration

EKS Anywhere cluster yaml etcd specification reference

There are two types of etcd topologies for configuring a Kubernetes cluster:

  • Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
  • Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components

The unstacked etcd topology is recommended for an HA cluster for the following reasons:

  • External etcd topology decouples the control plane components and etcd members. So if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
  • Etcd is resource-intensive, so it is safer to have dedicated nodes for etcd, since it could require more disk space or higher bandwidth. Having a separate etcd cluster for these reasons could ensure a more resilient HA setup.

EKS Anywhere supports both topologies. In order to configure a cluster with the unstacked/external etcd topology, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   clusterNetwork:
      pods:
         cidrBlocks:
            - 192.168.0.0/16
      services:
         cidrBlocks:
            - 10.96.0.0/12
      cni: "cilium"
   controlPlaneConfiguration:
      count: 1
      endpoint:
         host: ""
      machineGroupRef:
         kind: VSphereMachineConfig
         name: my-cluster-name-cp
   datacenterRef:
      kind: VSphereDatacenterConfig
      name: my-cluster-name
   # etcd configuration
   externalEtcdConfiguration:
      count: 3
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name-etcd
   kubernetesVersion: "1.21"
   workerNodeGroupConfigurations:
      - count: 1
        machineGroupRef:
           kind: VSphereMachineConfig
           name: my-cluster-name

externalEtcdConfiguration (under Cluster)

This field holds the configuration for running unstacked (external) etcd.

count (required)

This determines the number of etcd members in the cluster. The recommended number is 3.

machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See VSphereMachineConfig Fields

5.1.3 - OIDC configuration

EKS Anywhere cluster yaml specification OIDC reference

OIDC support (optional)

EKS Anywhere can create clusters that support API server OIDC authentication. In order to add OIDC support, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # OIDC support
   identityProviderRefs:
      - kind: OIDCConfig
        name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
   name: my-cluster-name
spec:
    clientId: ""
    groupsClaim: ""
    groupsPrefix: ""
    issuerUrl: "https://x"
    requiredClaims:
      - claim: ""
        value: ""
    usernameClaim: ""
    usernamePrefix: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. Right now, only 1 provider of type OIDCConfig is supported. This would include a reference to the OIDCConfig object with the configuration below.

clientId (required)

  • Description: ClientId defines the client ID for the OpenID Connect client
  • Type: string

groupsClaim (optional)

  • Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
  • Type: string

groupsPrefix (optional)

  • Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
  • Type: string

issuerUrl (required)

  • Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
  • Type: string

requiredClaims (optional)

List of RequiredClaim objects listed below. Only one is supported at this time.

requiredClaims[0] (optional)

  • Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
    • claim
      • type: string
    • value
      • type: string
  • Type: object

usernameClaim (optional)

  • Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
  • Type: string

usernamePrefix (optional)

  • Description: UsernamePrefix defines a string to be prefixed to all usernames. If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value ‘-’.
  • Type: string

5.1.4 - GitOpsConfig configuration

Configuration reference for GitOps cluster management.

GitOps Support (Optional)

EKS Anywhere can create clusters that support GitOps configuration management with Flux. In order to add GitOps support, you need to configure your cluster by updating the configuration file before creating the cluster. Please note that for the GitOps config to work successfully, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-gitops
    kind: GitOpsConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-gitops
spec:
  flux:
    github:
      personal: true
      repository: myClusterGitopsRepo
      owner: myGithubUsername
      fluxSystemNamespace: ""
      clusterConfigPath: ""
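
Before running eksctl anywhere create cluster with a GitOps config like the one above, export the token mentioned earlier, for example:

export EKSA_GITHUB_TOKEN=<your GitHub personal access token>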

GitOps Configuration Spec Details

flux (required)

  • Description: Flux is our supported GitOps provider, and flux is the only supported value for this key.
  • Type: object

Flux Configuration Spec Details

github (required)

  • Description: github is the only currently supported git provider. This defines your github configuration to be used by EKS Anywhere and flux.
  • Type: object

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where we will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the git repository; either a github username or github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com
  • Default: true
  • Type: boolean

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files.
  • Default: clusters/$MANAGEMENT_CLUSTER_NAME
  • Type: string

fluxSystemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster.
  • Default: flux-system.
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration.
  • Default: main
  • Type: string

5.1.5 - Proxy configuration

EKS Anywhere cluster yaml specification proxy configuration reference

Proxy support (optional)

You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   proxyConfiguration:
      httpProxy: http-proxy-ip:port
      httpsProxy: https-proxy-ip:port
      noProxy:
      - list of no proxy endpoints

Proxy Configuration Spec Details

proxyConfiguration (required)

  • Description: top level key; required to use proxy.
  • Type: object

httpProxy (required)

  • Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpProxy: 192.168.0.1:3218

httpsProxy (required)

  • Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpsProxy: 192.168.0.1:3218

noProxy (optional)

  • Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
  • Type: list of strings
  • Example
  noProxy:
   - localhost
   - 192.168.0.1
   - 192.168.0.0/16
   - .example.com

5.1.6 - Registry Mirror configuration

EKS Anywhere cluster yaml specification for registry mirror configuration

Registry Mirror Support (optional)

You can configure EKS Anywhere to use a private registry as a mirror for pulling the required images.

The following cluster spec shows an example of how to configure registry mirror:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
  registryMirrorConfiguration:
    endpoint: <private registry IP or hostname>
    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----        

Registry Mirror Configuration Spec Details

registryMirrorConfiguration (required)

  • Description: top level key; required to use a private registry.
  • Type: object

endpoint (required)

  • Description: IP address or hostname of the private registry for pulling images
  • Type: string
  • Example: endpoint: 192.168.0.1

caCertContent (optional)

  • Description: Certificate Authority (CA) certificate for the private registry. When using self-signed certificates it is necessary to pass this parameter in the cluster spec.
    It is also possible to configure caCertContent by exporting an environment variable:
    export EKSA_REGISTRY_MIRROR_CA="/path/to/certificate-file"
  • Type: string
  • Example:
    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----  
    

Import images into a private registry

You can use the import-images command to pull images from public.ecr.aws and push them to your private registry.

docker login https://<private registry endpoint>
...
eksctl anywhere import-images -f cluster-spec.yaml

Docker configurations

It is necessary to add the private registry’s CA Certificate to the list of CA certificates on the admin machine if your registry uses self-signed certificates.

For Linux, you can place your certificate here: /etc/docker/certs.d/<private-registry-endpoint>/ca.crt
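
For example, if your private registry endpoint is private-registry.local and its CA certificate is saved locally as ca.crt (both names are illustrative), you could run:

sudo mkdir -p /etc/docker/certs.d/private-registry.local
sudo cp ca.crt /etc/docker/certs.d/private-registry.local/ca.crt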

For Mac, you can follow this guide to add the certificate to your keychain: https://docs.docker.com/desktop/mac/#add-tls-certificates

Registry configurations

Depending on what registry you decide to use, you will need to create the following projects:

bottlerocket
eks-anywhere
eks-distro
isovalent

For example, if a registry is available at private-registry.local, then the following projects will have to be created:

https://private-registry.local/bottlerocket
https://private-registry.local/eks-anywhere
https://private-registry.local/eks-distro
https://private-registry.local/isovalent

5.2 - Frequently Asked Questions

Frequently asked questions about EKS Anywhere

AuthN / AuthZ

How do my applications running on EKS Anywhere authenticate with AWS services using IAM credentials?

You can leverage the IAM Roles for Service Accounts (IRSA) feature and follow the special instructions for DIY Kubernetes to enable AWS authentication. This solution includes the following steps at a high level:

  1. Create a key pair for signing, and host your public key somewhere (e.g. S3).

  2. Configure your EKS Anywhere Kubernetes API server so it can issue and mount projected service account tokens in pods.

  3. Create an IAM role defining access to the target AWS services and annotate a service account with said IAM role.

  4. Finally, configure your pods by using the service account created in the previous step and assume the IAM role.

The key pair solution above requires you to set up and maintain your key hosting (e.g. key rotation). A solution that is currently in development will allow your IAM role credentials to be securely injected to the EKS Anywhere cluster and assumed by the pods without any key pair.
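
For illustration, the service account annotation in step 3 typically uses the standard IRSA annotation key; the namespace, service account name, AWS account ID, and role name below are hypothetical:

kubectl annotate serviceaccount my-app -n default \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/my-app-role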

Does EKS Anywhere support OIDC (including Azure AD and AD FS)?

Yes, EKS Anywhere can create clusters that support API server OIDC authentication. This means you can federate authentication through AD FS locally or through Azure AD, along with other IDPs that support the OIDC standard. In order to add OIDC support to your EKS Anywhere clusters, you need to configure your cluster by updating the configuration file before creating the cluster. Please see the OIDC reference for details.

Does EKS Anywhere support LDAP?

EKS Anywhere does not support LDAP out of the box. However, you can look into the Dex LDAP Connector .

Can I use AWS IAM for Kubernetes resource access control on EKS Anywhere?

Yes, you can install the aws-iam-authenticator on your EKS Anywhere cluster to achieve this.

Miscellaneous

Can I connect my EKS Anywhere cluster to EKS?

Yes, you can install EKS Connector to connect your EKS Anywhere cluster to AWS EKS. EKS Connector is a software agent that you can install on the EKS Anywhere cluster that enables the cluster to communicate back to AWS. Once connected, you can immediately see the EKS Anywhere cluster with workload and cluster configuration information on the EKS console, alongside your EKS clusters.

How does the EKS Connector authenticate with AWS?

During start-up, the EKS Connector generates and stores an RSA key-pair as Kubernetes secrets. It also registers with AWS using the public key and the activation details from the cluster registration configuration file. The EKS Connector needs AWS credentials to receive commands from AWS and to send the response back. Whenever it requires AWS credentials, it uses its private key to sign the request and invokes AWS APIs to request the credentials.

How does the EKS Connector authenticate with my Kubernetes cluster?

The EKS Connector acts as a proxy and forwards the EKS console requests to the Kubernetes API server on your cluster. In the initial release, the connector uses impersonation with its service account secrets to interact with the API server. Therefore, you need to associate the connector’s service account with a ClusterRole, which gives permission to impersonate AWS IAM entities.

How do I enable an AWS user account to view my connected cluster through the EKS console?

For each AWS user or other IAM identity, you should add a cluster role binding to the Kubernetes cluster with the appropriate permissions for that IAM identity. Additionally, each of these IAM entities should be associated with the IAM policy to invoke the EKS Connector on the cluster.

Can I use Amazon Controllers for Kubernetes (ACK) on EKS Anywhere?

Yes, you can leverage AWS services from your EKS Anywhere clusters on-premises through Amazon Controllers for Kubernetes (ACK) .

Can I deploy EKS Anywhere on other clouds?

EKS Anywhere can be installed on any infrastructure with the required VMware vSphere versions. See EKS Anywhere vSphere prerequisite documentation.

Do you support multi-cluster operations?

You can perform cluster life cycle and configuration management at scale through GitOps-based tools. EKS Anywhere offers git-driven cluster management through the integrated Flux Controller. See Manage cluster with GitOps documentation for details.

Can I run EKS Anywhere on ESXi?

No. EKS Anywhere is dependent on the vSphere cluster API provider CAPV and it uses the vCenter API. There would need to be a change to the upstream project to support ESXi.

5.3 - eksctl anywhere CLI reference

Details on the options and parameters for eksctl anywhere CLI

The eksctl CLI, with the EKS Anywhere plugin added, lets you create and manage EKS Anywhere clusters. While a cluster is running, most EKS Anywhere administration can be done using kubectl or other native Kubernetes tools.

Use this page as a reference to useful eksctl anywhere command examples for working with EKS Anywhere clusters. Available eksctl anywhere commands include:

  • create cluster To create an EKS Anywhere cluster
  • delete cluster To delete an EKS Anywhere cluster
  • generate [clusterconfig | support-bundle | support-bundle-config] To generate cluster and support configs
  • help To get help information
  • upgrade To upgrade a workload cluster
  • version To get the EKS Anywhere version

Options used with multiple commands include:

  • -h or --help To get help for a command or subcommand
  • -v int or --verbosity int To set log level verbosity from 0-9
  • -f filename or --filename filename To identify the filename that contains the cluster config
  • --force-cleanup To force deletion of previously created bootstrap cluster
  • -w string or --w-config string To identify the kubeconfig file when needed to create a support bundle or upgrade a cluster

Other available options and arguments are listed with the command examples that follow.

eksctl anywhere generate

With eksctl anywhere generate, you can output sets of cluster resources to create a new cluster or troubleshoot an existing cluster. Here are some examples.

eksctl anywhere generate clusterconfig

Using eksctl anywhere generate clusterconfig you can generate a cluster configuration for a specific provider (-p or --provider provider_name). Here are examples:

Generate a configuration file to create an EKS Anywhere cluster for a vsphere provider:

export CLUSTER_NAME=vsphere01
eksctl anywhere generate clusterconfig ${CLUSTER_NAME} -p vsphere > ${CLUSTER_NAME}.yaml

Generate a configuration file to create an EKS Anywhere cluster for a Docker provider:

export CLUSTER_NAME=docker01
eksctl anywhere generate clusterconfig ${CLUSTER_NAME} -p docker > ${CLUSTER_NAME}.yaml

Once you have generated the yaml configuration file, edit that file to add configuration information before you use the file to create your cluster. See local and production cluster creation procedures for details.

eksctl anywhere generate support-bundle-config

If you would like to customize your support bundle, you can generate a support bundle configuration file (support-bundle-config), edit that file to choose the data you want to gather, then gather the selected data into a support bundle (support-bundle).

Generate a support bundle config file (then edit that file to select the log data you want to gather):

export CLUSTER_NAME=vsphere01
eksctl anywhere generate support-bundle-config > ${CLUSTER_NAME}_bundle_config.yaml 

eksctl anywhere generate support-bundle

Once you have a bundle config file, generate a support bundle from an existing EKS Anywhere cluster. Additional options available for this command include:

  • --bundle-config string To identify the bundle config file to use to generate the support bundle
  • --since string To collect pod logs in the latest duration like 5s, 2m, or 3h.
  • --since-time string To collect pod logs after a specific datetime(RFC3339) like 2021-06-28T15:04:05Z

Here is an example:

export CLUSTER_NAME=vsphere01
eksctl anywhere generate support-bundle --bundle-config ${CLUSTER_NAME}_bundle_config.yaml \
   -w KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig \
   --since 2h -f ${CLUSTER_NAME}_bundle.yaml

The example just shown:

  • Uses ${CLUSTER_NAME}_bundle.yaml as the file to hold the results
  • Collects pod logs for the past two hours (2h)
  • Identifies the bundle config file to use (${CLUSTER_NAME}_bundle_config.yaml)
  • Identifies the .kubeconfig file to use for a workload cluster

To change the command to generate a support bundle that gathers pod logs starting from a specific date (September 8, 2021) and time (1:27 PM):

export CLUSTER_NAME=vsphere01
eksctl anywhere generate support-bundle --bundle-config ${CLUSTER_NAME}_bundle_config.yaml \
   -w KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig \
   --since-time 2021-09-8T13:27:00Z 2h -f ${CLUSTER_NAME}_bundle.yaml

eksctl anywhere create cluster

Create an EKS Anywhere cluster from a cluster configuration file you generated (and modified) earlier. This example sets verbosity to most verbose (-v 9):

export CLUSTER_NAME=vsphere01
eksctl anywhere create cluster -v 9 -f ${CLUSTER_NAME}.yaml

See local and production cluster creation procedures for details.

eksctl anywhere upgrade cluster

Upgrade an existing EKS Anywhere cluster. This example uses maximum verbosity and forces a cleanup of the previously created bootstrap cluster:

export CLUSTER_NAME=vsphere01
eksctl anywhere upgrade cluster -f ${CLUSTER_NAME}.yaml --force-cleanup -v9 \
   -w KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig 

For more information on this and other ways to upgrade a cluster, see Upgrade cluster .

eksctl anywhere delete cluster

Delete an existing EKS Anywhere cluster. This example deletes all VMs and forces the deletion of the previously created bootstrap cluster:

export CLUSTER_NAME=vsphere01
eksctl anywhere delete cluster -f ${CLUSTER_NAME}.yaml \
   --force-cleanup \
   -w KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig 

For more information on deleting a cluster, see Delete cluster .

eksctl anywhere version

View the version of eksctl anywhere:

eksctl anywhere version
v0.5.0

eksctl anywhere help

Use eksctl anywhere help or the -h option to see general options or options specific to a particular set of commands.

View general help information using help:

eksctl anywhere help

Use eksctl anywhere to build your own self-managing cluster on your hardware with the best of Amazon EKS

Usage:
  eksctl anywhere [command]

Available Commands:
  create      Create resources
  delete      Delete resources
  generate    Generate resources
  help        Help about any command
  upgrade     Upgrade resources
  version     Get the eksctl version

Flags:
  -h, --help            help for eksctl
  -v, --verbosity int   Set the log level verbosity

Use "eksctl [command] --help" for more information about a command.
...

Display help options for generating a support bundle:

eksctl anywhere generate support-bundle -h

This command is used to create a support bundle to troubleshoot a cluster

Usage:
  eksctl anywhere generate support-bundle -f my-cluster.yaml [flags]

Flags:
      --bundle-config string   Bundle Config file to use when generating support bundle
  -f, --filename string        Filename that contains EKS-A cluster configuration
  -h, --help                   help for support-bundle
      --since string           Collect pod logs in the latest duration like 5s, 2m, or 3h.
      --since-time string      Collect pod logs after a specific datetime(RFC3339) like 2021-06-28T15:04:05Z
  -w, --w-config string        Kubeconfig file to use when creating support bundle for a workload cluster

Global Flags:
  -v, --verbosity int   Set the log level verbosity

Display options for creating a cluster:

eksctl anywhere create cluster -h
This command is used to create workload clusters

Usage:
  eksctl anywhere create cluster [flags]

Flags:
  -f, --filename string   Filename that contains EKS-A cluster configuration
      --force-cleanup     Force deletion of previously created bootstrap cluster
  -h, --help              help for cluster

Global Flags:
  -v, --verbosity int   Set the log level verbosity

5.4 - VMware vSphere

Preparing a VMware vSphere provider for EKS Anywhere

5.4.1 - Requirements for EKS Anywhere on VMware vSphere

Preparing a VMware vSphere provider for EKS Anywhere

To run EKS Anywhere, you will need:

  • A vSphere 7+ environment running vCenter

  • Capacity to deploy 6-10 VMs

  • DHCP service running in the vSphere environment in the primary VM network for your workload cluster

  • One network in vSphere to use for the cluster. This network must have inbound access into vCenter

  • An OVA imported into vSphere and converted into a template for the workload VMs

  • User credentials to create VMs and attach networks, etc

  • One IP address routable from the cluster but excluded from the DHCP offering

    This IP address is to be used as the Control Plane Endpoint IP or kube-vip VIP address

    Below are some suggestions to ensure that this IP address is never handed out by your DHCP server.

    You may need to contact your network engineer.

    • Pick an IP address reachable from the cluster subnet which is excluded from the DHCP range OR
    • Alter the DHCP ranges to leave out one or more IP addresses at the top and/or the bottom of the range OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent MAC address.

Each VM will require:

  • 2 vCPUs
  • 8GB RAM
  • 25GB Disk

The administrative machine and the target workload environment will need network access to:

  • public.ecr.aws
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.github.com (only if GitOps is enabled)

5.4.2 - User permissions

The permissions needed by the EKS Anywhere vSphere user

The permissions needed by the EKS Anywhere vSphere user are just short of full administrative access. For example:

Virtual machine

Configuration

  • Change configuration
  • Add existing disk
  • Add new disk
  • Add or remove device

Advanced configuration

  • Change CPU count
  • Change memory
  • Change settings
  • Configure raw devices
  • Extend virtual disk
  • Modify device settings
  • Remove disk
  • Create from existing
  • Remove

Interaction

  • Power off
  • Power on

Provisioning

  • Deploy template

Datastore

  • Allocate space
  • List datastore
  • Low level file operations

Network

  • Assign network

Resource Pool

  • Assign a virtual machine to resource pool

5.4.3 - Customize OVAs: Ubuntu

Customizing Imported Ubuntu OVAs

There may be a need to make specific configuration changes on the imported ova template before using it to create/update EKS-A clusters.

Set up SSH Access for Imported OVA

SSH user and key need to be configured in order to allow SSH login to the VM template

Clone template to VM

Create an environment variable to hold the name of modified VM/template

export VM=<vm-name>

Clone the imported OVA template to create VM

govc vm.clone -on=false -vm=<full-path-to-imported-template> -folder=<full-path-to-folder-that-will-contain-the-VM> -ds=<datastore> $VM

Configure VM with cloud-init and the VMX GuestInfo datasource

Create a metadata.yaml file

instance-id: cloud-vm
local-hostname: cloud-vm
network:
  version: 2
  ethernets:
    nics:
      match:
        name: ens*
      dhcp4: yes

Create a userdata.yaml file

#cloud-config

users:
  - default
  - name: <username>
    primary_group: <username>
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, wheel
    ssh_import_id: None
    lock_passwd: true
    ssh_authorized_keys:
    - <user's ssh public key>

Export environment variables containing the cloud-init metadata and userdata

export METADATA=$(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) \
       USERDATA=$(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })

Assign metadata and userdata to VM’s guestinfo

govc vm.change -vm "${VM}" \
  -e guestinfo.metadata="${METADATA}" \
  -e guestinfo.metadata.encoding="gzip+base64" \
  -e guestinfo.userdata="${USERDATA}" \
  -e guestinfo.userdata.encoding="gzip+base64"

Power the VM on

govc vm.power -on "$VM"

Customize VM and Convert to Template

Once the VM is powered on and fetches an IP address, ssh into the VM using your private key corresponding to the public key specified in userdata.yaml

ssh -i <private-key-file> username@<VM-IP>

Make desired config changes on the VM

Power the VM down

govc vm.power -off "$VM"

Take a snapshot of the VM (It is highly recommended that you snapshot the VM. This will reduce the time it takes to provision machines and cluster creation will be faster. If you prefer not to take snapshot, skip this step. If you do not snapshot the VM, you will not be able to customize the disk size on your cluster VMs)

govc snapshot.create -vm "$VM" root

Convert VM to template

govc vm.markastemplate $VM

Tag the template appropriately as described here

Use this customized template to create/upgrade EKS Anywhere clusters

5.4.4 - Import OVAs

Importing EKS Anywhere OVAs to vSphere

If you want to specify an OVF template, you will need to import OVA files into vSphere before you can use them in your EKS Anywhere cluster. This guide was written using VMware Cloud on AWS, but the VMware OVA import guide can be found here.

EKS Anywhere supports the following operating system families

  • Bottlerocket (default)
  • Ubuntu

A list of OVAs for this release can be found on the artifacts page .

Using vCenter Web User Interface

  1. Right click on your Datacenter and select Deploy OVF Template

  2. Select an OVF template using a URL or by selecting a local OVF file and click Next. If you are not able to select an OVF template using a URL, download the file and use the Local file option.

    Note: If you are using Bottlerocket OVAs, please select the local file option.

  3. Select a folder where you want to deploy your OVF package (most of our OVF templates are under the SDDC-Datacenter directory) and click Next. You cannot have an OVF template with the same name in one directory. For workload VM templates, leave the Kubernetes version in the template name for reference. A workload VM template will support at least one prior Kubernetes version.

  4. Select any compute resource (from cluster-1, 10.2.34.5, etc.) to run the deployed VM and click Next

  5. Review the details and click Next.

  6. Accept the agreement and click Next.

  7. Select the appropriate storage (e.g. “WorkloadDatastore”) and click Next.

  8. Select the destination network (e.g. “sddc-cgw-network-1”) and click Next.

  9. Finish.

  10. Snapshot the VM. Right click on the imported VM and select Snapshots -> Take Snapshot… (It is highly recommended that you snapshot the VM. This will reduce the time it takes to provision machines and cluster creation will be faster. If you prefer not to take a snapshot, skip to step 13.)

  11. Name your template (e.g. “root”) and click Create.

  12. Snapshots for the imported VM should now show up under the Snapshots tab for the VM.

  13. Right click on the imported VM and select Template and Convert to Template

Steps to deploy a template using GOVC (CLI)

To deploy a template using govc, you must first ensure that you have govc installed. You need to set and export three environment variables to run govc: GOVC_USERNAME, GOVC_PASSWORD, and GOVC_URL.
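
For example (the values shown are placeholders for your environment):

export GOVC_USERNAME='<vSphere username>'
export GOVC_PASSWORD='<vSphere password>'
export GOVC_URL='<vCenter server FQDN or IP>'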

  1. Import the template to a content library in vCenter using URL or selecting a local OVA file

    Using URL:

    govc library.import -k -pull <library name> <URL for the OVA file>
    

    Using a file from the local machine:

    govc library.import <library name> <path to OVA file on local machine>
    
  2. Deploy the template

    govc library.deploy -pool <resource pool> -folder <folder location to deploy template> /<library name>/<template name> <name of new VM>
    

    2a. If using Bottlerocket template, resize disk 2 to 20G

    govc vm.disk.change -vm <template name> -disk.label "Hard disk 2" -size 20G
    
  3. Take a snapshot of the VM (It is highly recommended that you snapshot the VM. This will reduce the time it takes to provision machines and cluster creation will be faster. If you prefer not to take snapshot, skip this step)

    govc snapshot.create -vm ubuntu-2004-kube-v1.19.6 root
    
  4. Mark the new VM as a template

    govc vm.markastemplate <name of new VM>
    

Important Additional Steps to Tag the OVA

Using vCenter UI

Tag to indicate OS family

  1. Select the template that was newly created in the steps above and navigate to Summary -> Tags.
  2. Click Assign -> Add Tag to create a new tag and attach it
  3. Name the tag os:ubuntu or os:bottlerocket

Tag to indicate eksd release

  1. Select the template that was newly created in the steps above and navigate to Summary -> Tags.
  2. Click Assign -> Add Tag to create a new tag and attach it
  3. Name the tag eksdRelease:{eksd release for the selected ova}, for example eksdRelease:kubernetes-1-20-eks-5 for the 1.20 ova. You can find the rest of eksd releases in the previous section. If it’s the first time you add an eksdRelease tag, you would need to create the category first. Click on “Create New Category” and name it eksdRelease.

Using govc

Tag to indicate OS family

  1. Create tag category
govc tags.category.create -t VirtualMachine os
  2. Create tags os:ubuntu and os:bottlerocket
govc tags.create -c os os:bottlerocket
govc tags.create -c os os:ubuntu
  3. Attach the newly created tag to the template
govc tags.attach os:bottlerocket <Template Path>
govc tags.attach os:ubuntu <Template Path>
  4. Verify the tag is attached to the template
govc tags.ls <Template Path>

Tag to indicate eksd release

  1. Create tag category
govc tags.category.create -t VirtualMachine eksdRelease
  2. Create the proper eksd release Tag, depending on your template. You can find the eksd releases in the previous section. For example eksdRelease:kubernetes-1-20-eks-5 for the 1.20 template.
govc tags.create -c eksdRelease eksdRelease:kubernetes-1-20-eks-2
  3. Attach the newly created tag to the template
govc tags.attach eksdRelease:kubernetes-1-20-eks-2 <Template Path>
  4. Verify the tag is attached to the template
govc tags.ls <Template Path>

After you are done you can use the template for your workload cluster.

5.5 - Security best practices

Using security best practices with your EKS Anywhere deployments

If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our vulnerability reporting page . Please do not create a public GitHub issue for security problems.

This guide provides advice about best practices for EKS Anywhere specific security concerns. For a more complete treatment of Kubernetes security generally please refer to the official Kubernetes documentation on Securing a Cluster and the Amazon EKS Best Practices Guide for Security .

The Shared Responsibility Model and EKS-A

AWS Cloud Services follow the Shared Responsibility Model, where AWS is responsible for security “of” the cloud, while the customer is responsible for security “in” the cloud. However, EKS Anywhere is an open-source tool and the distribution of responsibility differs from that of a managed cloud service like EKS.

AWS Responsibilities

AWS is responsible for building and delivering a secure tool. This tool will provision an initially secure Kubernetes cluster.

AWS is responsible for vetting and securely sourcing the services and tools packaged with EKS Anywhere and the cluster it creates (such as CoreDNS, Cilium, Flux, CAPI, and govc).

The EKS Anywhere build and delivery infrastructure, or supply chain, is secured to the standard of any AWS service and AWS takes responsibility for the secure and reliable delivery of a quality product which provisions a secure and stable Kubernetes cluster. When the eksctl anywhere plugin is executed, EKS Anywhere components are automatically downloaded from AWS. eksctl will then perform checksum verification on the components to ensure their authenticity.

AWS is responsible for the secure development and testing of the EKS Anywhere controller and associated custom resource definitions.

AWS is responsible for the secure development and testing of the EKS Anywhere CLI, and ensuring it handles sensitive data and cluster resources securely.

End user responsibilities

The end user is responsible for the entire EKS Anywhere cluster after it has been provisioned. AWS provides a mechanism to upgrade the cluster in-place, but it is the responsibility of the end user to perform that upgrade using the provided tools. End users are responsible for operating their clusters in accordance with Kubernetes security best practices, and for the ongoing security of the cluster after it has been provisioned. This includes but is not limited to:

  • creation or modification of RBAC roles and bindings
  • creation or modification of namespaces
  • modification of the default container network interface plugin
  • configuration of network ingress and load balancing
  • use and configuration of container storage interfaces
  • the inclusion of add-ons and other services

End users are also responsible for:

  • The hardware and software which make up the infrastructure layer (such as vSphere, ESXi, physical servers, and physical network infrastructure).

  • The ongoing maintenance of the cluster nodes, including the underlying guest operating systems. Additionally, while EKS Anywhere provides a streamlined process for upgrading a cluster to a new Kubernetes version, it is the responsibility of the user to perform the upgrade as necessary.

  • Any applications which run “on” the cluster, including their secure operation, least privilege, and use of well-known and vetted container images.

EKS Anywhere Security Best Practices

This section captures EKS Anywhere specific security best practices. Please read this section carefully and follow any guidance to ensure the ongoing security and reliability of your EKS Anywhere cluster.

Critical Namespaces

EKS Anywhere creates and uses resources in several critical namespaces. All of the EKS Anywhere managed namespaces should be treated as sensitive and access should be limited to only the most trusted users and processes. Allowing additional access or modifying the existing RBAC resources could potentially allow a subject to access the namespace and the resources that it contains. This could lead to the exposure of secrets or the failure of your cluster due to modification of critical resources. Here are rules you should follow when dealing with critical namespaces:

  • Avoid creating Roles in these namespaces or providing users access to them with ClusterRoles. For more information about creating limited roles for day-to-day administration and development, please see the official introduction to Role Based Access Control (RBAC).

  • Do not modify existing Roles in these namespaces, bind existing roles to additional subjects, or create new Roles in the namespace.

  • Do not modify existing ClusterRoles or bind them to additional subjects.

  • Avoid using the cluster-admin role, as it grants permissions over all namespaces.

  • No subjects except for the most trusted administrators should be permitted to perform ANY action in the critical namespaces.

The critical namespaces include:

  • eksa-system
  • capv-system
  • flux-system
  • capi-system
  • capi-webhook-system
  • capi-kubeadm-control-plane-system
  • capi-kubeadm-bootstrap-system
  • cert-manager
  • kube-system (as with any Kubernetes cluster, this namespace is critical to the functioning of your cluster and should be treated with the highest level of sensitivity.)
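
One way to review who currently has access to a critical namespace is to inspect its role bindings and the cluster-wide bindings, for example:

kubectl get rolebindings -n eksa-system -o wide
kubectl get clusterrolebindings -o wide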

Secrets

EKS Anywhere stores sensitive information, like the vSphere credentials and GitHub Personal Access Token, in the cluster as native Kubernetes secrets. These secret objects are namespaced, for example in the eksa-system and flux-system namespaces, and limiting access to the sensitive namespaces will ensure that these secrets will not be exposed. Additionally, limit access to the underlying node. Access to the node could allow access to the secret content.
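
For example, the following commands list the secret objects held in two of these sensitive namespaces; the ability to run them (and especially to read the secret contents) should itself be tightly restricted:

kubectl get secrets -n eksa-system
kubectl get secrets -n flux-system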

EKS Anywhere does not currently support encryption-at-rest for Kubernetes secrets. EKS Anywhere support for Key Management Services (KMS) is planned.

The EKS Anywhere kubeconfig file

eksctl anywhere create cluster creates an EKS Anywhere-based Kubernetes cluster and outputs a kubeconfig file with administrative privileges to the $PWD/$CLUSTER_NAME directory.
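
For example, to point kubectl at the generated file from the directory where you ran the create command:

export CLUSTER_NAME=my-cluster-name
kubectl get nodes --kubeconfig ${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig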

By default, this kubeconfig file uses certificate-based authentication and contains the user certificate data for the administrative user.

The kubeconfig file grants administrative privileges over your cluster to the bearer and the certificate key should be treated as you would any other private key or administrative password.

The EKS Anywhere-generated kubeconfig file should only be used for interacting with the cluster via eksctl anywhere commands, such as upgrade, and for the most privileged administrative tasks. For more information about creating limited roles for day-to-day administration and development, please see the official introduction to Role Based Access Control (RBAC).

GitOps

GitOps enabled EKS Anywhere clusters maintain a copy of their cluster configuration in the user provided Git repository. This configuration acts as the source of truth for the cluster. Changes made to this configuration will be reflected in the cluster configuration.

AWS recommends that you gate any changes to this repository with mandatory pull request reviews. Carefully review pull requests for changes which could impact the availability of the cluster (such as scaling nodes to 0 and deleting the cluster object) or contain secrets.

GitHub Personal Access Token

Treat the GitHub PAT used with EKS Anywhere as you would any highly privileged secret, as it could potentially be used to make changes to your cluster by modifying the contents of the cluster configuration file through the GitHub.com API.

  • Never commit the PAT to a Git repository
  • Never share the PAT via untrusted channels
  • Never grant non-administrative subjects access to the flux-system namespace where the PAT is stored as a native Kubernetes secret.

Executing EKS Anywhere

Ensure that you execute eksctl anywhere create cluster on a trusted workstation in order to protect the values of sensitive environment variables and the EKS Anywhere generated kubeconfig file.

SSH Access to Cluster Nodes and ETCD Nodes

EKS Anywhere provides the option to configure an ssh authorized key for access to underlying nodes in a cluster, via vsphereMachineConfig.Users.sshAuthorizedKeys. This grants the associated private key the ability to connect to the cluster via ssh as the user capv with sudo permissions. The associated private key should be treated as extremely sensitive, as sudo access to the cluster and ETCD nodes can permit access to secret object data and potentially confer arbitrary control over the cluster.

VMWare OVAs

Only download OVAs for cluster nodes from official sources, and do not allow untrusted users or processes to modify the templates used by EKS Anywhere for provisioning nodes.

Benchmark tests for cluster hardening

EKS Anywhere creates clusters with server hardening configurations out of the box, via the use of security flags and opinionated default templates. You can verify the security posture of your EKS Anywhere cluster by using a tool called kube-bench, which checks whether Kubernetes is deployed securely.

kube-bench runs checks documented in the CIS Benchmark for Kubernetes, such as pod specification file permissions, disabling insecure arguments, and so on.

Refer to the EKS Anywhere CIS Self-Assessment Guide for more information on how to evaluate the security configurations of your EKS Anywhere cluster.

5.5.1 - CIS Self-Assessment Guide

CIS Benchmark Self-Assessment Guide for EKS Anywhere clusters

The CIS Benchmark self-assessment guide serves to help EKS Anywhere users evaluate the level of security of the hardened cluster configuration against Kubernetes benchmark controls from the Center for Information Security (CIS). This guide will walk through the various controls and provide updated example commands to audit compliance in EKS Anywhere clusters.

You can verify the security posture of your EKS Anywhere cluster by using a tool called kube-bench. The ideal way to run the benchmark tests on your EKS Anywhere cluster is to apply the kube-bench Job YAMLs to the cluster. This runs the kube-bench tests on a Pod on the cluster, and the logs of the Pod provide the test results.
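
A minimal sketch of that workflow, assuming you have downloaded a kube-bench Job manifest as job.yaml and that it creates a Job named kube-bench:

kubectl apply -f job.yaml
kubectl wait --for=condition=complete job/kube-bench
kubectl logs job/kube-bench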

Kube-bench currently does not support unstacked etcd topology (which is the default for EKS Anywhere), so the following checks are skipped in the default kube-bench Job YAML. If you created your EKS Anywhere cluster with stacked etcd configuration, you can apply the stacked etcd Job YAML instead.

Check number Check description
1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive
1.1.8 Ensure that the etcd pod specification file ownership is set to root:root
1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive
1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd

The following tests are also skipped, because they are not applicable or enforce settings that might make the cluster unstable.

Check number Check description Reason for skipping
Control plane node configuration
1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers
1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set Enabling Pod Security Policy can cause applications to unexpectedly fail
1.2.32 Ensure that the --encryption-provider-config argument is set as appropriate Enabling encryption changes how data can be recovered as data is encrypted
1.2.33 Ensure that encryption providers are appropriately configured Enabling encryption changes how data can be recovered as data is encrypted
Worker node configuration
4.2.6 Ensure that the --protect-kernel-defaults argument is set to true System level configurations are required before provisioning the cluster in order for this argument to be set to true
4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers

5.6 - Support

Support for EKS Anywhere

EKS Anywhere support licenses are available to AWS customers who pay for enterprise support. If you would like business support for your EKS Anywhere clusters please contact your Technical Account Manager (TAM) for details.

EKS Anywhere is an open source project and it is supported by the community. If you have a problem, open an issue and someone will get back to you as soon as possible. If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our vulnerability reporting page . Please do not create a public GitHub issue for security problems.

5.7 - Ports and protocols

Ports used with an EKS Anywhere cluster

EKS Anywhere requires that various ports on control plane and worker nodes be open. Some Kubernetes-specific ports need open access only from other Kubernetes nodes, while others are exposed externally. Beyond Kubernetes ports, someone managing an EKS Anywhere cluster must also have external access to ports on the underlying EKS Anywhere provider (such as VMware) and to external tooling (such as Jenkins).

If you are responsible for network firewall rules between nodes on your EKS Anywhere clusters, the following tables describe both Kubernetes and EKS Anywhere-specific ports you should be aware of.

Kubernetes control plane

The following table represents the ports published by the Kubernetes project that must be accessible on any Kubernetes control plane.

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 6443 | Kubernetes API server | All
TCP | Inbound | 10250 | Kubelet API | Self, Control plane
TCP | Inbound | 10259 | kube-scheduler | Self
TCP | Inbound | 10257 | kube-controller-manager | Self

Although the etcd ports are included in the control plane section, you can also host your own etcd cluster externally or on custom ports.

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd

Use the following to access the SSH service on the control plane and etcd nodes:

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 22 | SSHD server | SSH clients
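As a quick way to sanity-check firewall rules (a sketch only; the address below is a hypothetical control plane node IP), you can probe these control plane, etcd, and SSH ports from another node or from the admin machine:

CP_NODE=10.0.0.10   # hypothetical control plane node address
for port in 22 2379 2380 6443 10250 10257 10259; do
  nc -zv -w 3 "$CP_NODE" "$port"
done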

Kubernetes worker nodes

The following table represents the ports published by the Kubernetes project that must be accessible on any Kubernetes worker node.

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 10250 | Kubelet API | Self, Control plane
TCP | Inbound | 30000-32767 | NodePort Services | All

The API server port is sometimes switched to 443. Alternatively, the default port is kept as is and the API server is put behind a load balancer that listens on port 443 and routes requests to the API server on the default port.
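For example (a sketch with hypothetical cluster and load balancer names), the kubeconfig entry in that setup points kubectl at the load balancer on port 443 rather than directly at port 6443:

kubectl config set-cluster my-eks-a-cluster --server=https://api-lb.example.internal:443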

Use the following to access the SSH service on the worker nodes:

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 22 | SSHD server | SSH clients

VMware provider

The following table displays ports that need to be accessible from the VMware provider running EKS Anywhere:

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 443 | vCenter Server | vCenter API endpoint
TCP | Inbound | 6443 | Kubernetes API server | Kubernetes API endpoint
TCP | Inbound | 2379 | Manager | Etcd API endpoint
TCP | Inbound | 2380 | Manager | Etcd API endpoint
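A quick way to confirm that the vCenter endpoint is reachable on port 443 from the admin machine (a sketch; the server address and credentials below are hypothetical placeholders):

export GOVC_URL='https://vcenter.example.internal'
export GOVC_USERNAME='<vSphere username>'
export GOVC_PASSWORD='<vSphere password>'
export GOVC_INSECURE=true   # only if vCenter uses a self-signed certificate
govc about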

Control plane management tools

A variety of control plane management tools are available to use with EKS Anywhere. One example is Jenkins.

Protocol | Direction | Port Range | Purpose | Used By
TCP | Inbound | 8080 | Jenkins Server | HTTP Jenkins endpoint
TCP | Inbound | 8443 | Jenkins Server | HTTPS Jenkins endpoint

5.8 - Troubleshooting

Troubleshooting reference for your EKS Anywhere Cluster

Read more about troubleshooting in the tasks section.

5.9 - What's New?

v0.6.0 - 2021-10-29

Added

  • Support to create and manage workload clusters #94
  • Support for upgrading eks-anywhere components #93 (cluster upgrades)
  • k8s CIS compliance #193
  • Support bundle improvements #92
  • Ability to upgrade control plane nodes before worker nodes #100
  • Ability to use your own container registry #98
  • Make namespace configurable for anywhere resources #177

Fixed

  • Fix ova auto-import issue for multi-datacenter environments #437
  • OVA import via EKS-A CLI sometimes fails #254
  • Add proxy configuration to etcd nodes for bottlerocket #195

Removed

  • overrideClusterSpecFile field in cluster config

v0.5.0

Added

  • Initial release of EKS-A

5.10 - Artifacts

Artifacts associated with this release: OVAs and container images.

OVAs

Bottlerocket

Bottlerocket vends its VMware variant OVAs using a secure distribution tool called tuftool. Please follow the instructions below to download the Bottlerocket OVA.

  1. Install Rust and Cargo
curl https://sh.rustup.rs -sSf | sh
  2. Install tuftool using Cargo
CARGO_NET_GIT_FETCH_WITH_CLI=true cargo install --force tuftool
  3. Download the root role tuftool will use to download the OVA
curl -O "https://cache.bottlerocket.aws/root.json"
sha512sum -c <<<"e9b1ea5f9b4f95c9b55edada4238bf00b12845aa98bdd2d3edb63ff82a03ada19444546337ec6d6806cbf329027cf49f7fde31f54d551c5e02acbed7efe75785  root.json"
  4. Export the desired Kubernetes version. EKS Anywhere currently supports 1.21 and 1.20
export KUBEVERSION="1.21"
  5. Download the OVA
OVA="bottlerocket-vmware-k8s-${KUBEVERSION}-x86_64-v1.3.0.ova"
tuftool download . --target-name "${OVA}" \
   --root ./root.json \
   --metadata-url "https://updates.bottlerocket.aws/2020-07-07/vmware-k8s-${KUBEVERSION}/x86_64/" \
   --targets-url "https://updates.bottlerocket.aws/targets/"
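tuftool verifies the downloaded OVA against the signed TUF metadata as part of the fetch. As one possible next step (a sketch only, not the official import procedure; the datastore, resource pool, and template name are placeholders), the OVA can be uploaded to vSphere with govc:

govc import.ova -ds="<datastore>" -pool="<resource pool>" -name="bottlerocket-kubernetes-${KUBEVERSION}" "./${OVA}"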

Bottlerocket Tags

OS Family - os:bottlerocket

EKS-D Release

1.21 - eksdRelease:kubernetes-1-21-eks-6

1.20 - eksdRelease:kubernetes-1-20-eks-8
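As a sketch of how these tags end up on the imported template (assuming the os and eksdRelease tag categories and their tags already exist in vCenter, and using a hypothetical inventory path):

TEMPLATE='/dc1/vm/Templates/bottlerocket-vmware-k8s-1.21'   # hypothetical inventory path
govc tags.attach -c os bottlerocket "$TEMPLATE"
govc tags.attach -c eksdRelease kubernetes-1-21-eks-6 "$TEMPLATE"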

Ubuntu with Kubernetes 1.21

Ubuntu with Kubernetes 1.20

Building your own Ubuntu OVA

The EKS Anywhere project OVA building process leverages the upstream image-builder repository. If you want to build an OVA with a custom Ubuntu base image to use for an EKS Anywhere cluster, please follow the instructions below.

Having access to a vSphere environment and docker running locally are prerequisites for building your own images.

Required vSphere Permissions

Virtual machine

Inventory:

  • Create new

Configuration:

  • Change configuration
  • Add new disk
  • Add or remove device
  • Change memory
  • Change settings
  • Set annotation

Interaction:

  • Power on
  • Power off
  • Console interaction
  • Configure CD media
  • Device connection

Snapshot management:

  • Create snapshot

Provisioning

  • Mark as template

Resource Pool

  • Assign vm to resource pool

Datastore

  • Allocate space
  • Browse data
  • Low level file operations

Network

  • Assign network to vm

Steps to build an OVA

  1. Spin up a builder-base docker container and exec into it. Please use the most recent tag for the image on its repository here
docker run -it public.ecr.aws/eks-distro-build-tooling/builder-base:latest bash
  2. Clone the eks-anywhere-build-tooling repo.
git clone https://github.com/aws/eks-anywhere-build-tooling.git
  3. Navigate to the image-builder directory.
cd eks-anywhere-build-tooling/projects/kubernetes-sigs/image-builder
  4. Get the vSphere connection details and create a json file named vsphere.json with the following template.
{
    "cluster": "<vSphere cluster name>",
    "datacenter": "<datacenter name on vSphere>",
    "datastore": "<datastore to be used on vSphere>",
    "folder": "<folder path to use for building ova>",
    "network": "<dhcp enabled network name>",
    "resource_pool": "<vSphere resource pool to use>",
    "vcenter_server": "<vSphere server URL>",
    "username": "<vSphere username>",
    "password": "<vSphere password>",
    "template": "",
    "insecure_connection": "false",
    "linked_clone": "false",
    "convert_to_template": "false",
    "create_snapshot": "true"
}

  5. Export the vSphere connection data file, escaping all the quotes
export VSPHERE_CONNECTION_DATA=\"$(cat vsphere.json | jq -c . | sed 's/"/\\"/g')\"
  6. Download the most recent release bundle manifest and get the latest URLs for etcdadm and crictl for the intended Kubernetes version.
wget https://anywhere-assets.eks.amazonaws.com/bundle-release.yaml
  7. Export the CRICTL_URL and ETCDADM_HTTP_SOURCE environment variables with the URLs from the previous step (see the sketch after these steps for one way to locate them in the manifest).
export CRICTL_URL=<crictl url>
export ETCDADM_HTTP_SOURCE=<etcdadm url>
  8. Create a library on vSphere for image-builder.
govc library.create "CodeBuild"
  9. Update the Ubuntu configuration file with the new custom ISO URL and its checksum at image-builder/images/capi/packer/ova/ubuntu-2004.json
  10. Set up image-builder and run the OVA build for the Kubernetes version.
make release-ova-ubuntu-2004-1-21
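Referring back to steps 6 and 7, one way to locate candidate crictl and etcdadm URLs in the downloaded manifest is a simple text search (a sketch; the exact field layout of bundle-release.yaml is an assumption here):

grep -E 'uri:.*(crictl|etcdadm)' bundle-release.yaml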

Images

Images for EKS Anywhere can be found in the EKS Anywhere ECR repository, and images for EKS Distro can be found in the EKS Distro ECR repository.

6 - Community

Guidelines for community contribution

We work hard to provide a high-quality Kubernetes installer for EKS, and we greatly value feedback and contributions from our community. Please review the contribution guidelines before submitting any issues or pull requests to ensure we have all the necessary information to respond to your bug report or contribution effectively. If you have a concern about a security vulnerability, please review our reporting a vulnerability policy.

6.1 - Contributing Guidelines

How to best contribute to the project

Thank you for your interest in contributing to our project. Whether it’s a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.

General Guidelines

Pull Requests

Make sure to keep Pull Requests small and functional to make them easier to review, understand, and look up in commit history. This repository uses “Squash and Commit” to keep our history clean and make it easier to revert changes on a per-PR basis.

Adding the appropriate documentation, unit tests and e2e tests as part of a feature is the responsibility of the feature owner, whether it is done in the same Pull Request or not.

Pull Requests should follow the “subject: message” format, where the subject describes what part of the code is being modified.
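For example, an illustrative (made-up) Pull Request title following that format:

docs: fix broken links in the troubleshooting section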

Refer to the template for more information on what goes into a PR description.

Design Docs

A contributor proposes a design with a PR on the repository to allow for revisions and discussions. If a design needs to be discussed before formulating a document for it, make use of GitHub Discussions to involve the community in the discussion.

GitHub Discussions

GitHub Discussions are used for feature requests (that don’t have actionable items/issues), questions, and anything else the community would like to share.

Categories:

  • Q/A - Questions
  • Proposals - Feature requests and other suggestions
  • Show and tell - Anything that the community would like to share
  • General - Everything else (possibly announcements as well)

GitHub Issues

GitHub Issues are used to file bugs, work items, and feature requests with actionable items/issues (Please refer to the “Reporting Bugs/Feature Requests” section below for more information).

Labels:

  • “<area>” - area of project that issue is related to (create, upgrade, flux, test, etc.)
  • “priority/p<n>” - priority of task based on following numbers
    • p0: need to do right away
    • p1: don’t have a set time but need to do
    • p2: not currently being tracked (backlog)
  • “status/<status>” - status of the issue (notstarted, implementation, etc.)
  • “kind/<kind>” - type of issue (bug, feature, enhancement, docs, etc.)

Refer to the template for more information on what goes into an issue description.

GitHub Milestones

GitHub Milestones are used to plan work that is currently being tracked.

  • next: changes for next release
  • next+1: won’t make next release but the following
  • techdebt: used to keep track of techdebt items, separate ongoing effort from release action items
  • oncall: used to keep track of issues needing active follow-up
  • backlog: items that don’t have a home in the others

GitHub Projects (or tasks within a GitHub Issue)

GitHub Projects are used to keep track of bigger features that are made up of a collection of issues. Certain features can also have a tracking issue that contains a checklist of tasks that link to other issues.

Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features that have actionable items/issues (as opposed to introducing a feature request on GitHub Discussions).

When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn’t already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

  • A reproducible test case or series of steps
  • The version of the code being used
  • Any modifications you’ve made relevant to the bug
  • Anything unusual about your environment or deployment

Contributing via Pull Requests

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

  1. You are working against the latest source on the main branch.
  2. You check existing open, and recently merged, pull requests to make sure someone else hasn’t addressed the problem already.
  3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

  1. Fork the repository.
  2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
  3. Ensure local tests pass.
  4. Commit to your fork using clear commit messages.
  5. Send us a pull request, answering any default questions in the pull request interface.
  6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on forking a repository and creating a pull request.

Finding contributions to work on

Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ‘help wanted’ and ‘good first issue’ issues is a great place to start.

Code of Conduct

This project has adopted the Amazon Open Source Code of Conduct . For more information see the Code of Conduct FAQ or contact opensource-codeofconduct@amazon.com with any additional questions or comments.

Security issue notifications

If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our vulnerability reporting page . Please do not create a public GitHub issue.

Licensing

See the LICENSE file for our project’s licensing. We will ask you to confirm the licensing of your contribution.

6.2 - Code of Conduct

Details on the project code of conduct

This project has adopted the Amazon Open Source Code of Conduct . For more information, see the Code of Conduct FAQ or contact opensource-codeofconduct@amazon.com with any additional questions or comments.

6.3 - Project governance

Roles and responsibilities of the project

This document lays out the guidelines under which the EKS Anywhere project will be governed. The goal is to make sure that the roles and responsibilities are well-defined and clarify how decisions are made.

Roles

In the context of EKS Anywhere, we consider the following roles:

  • Users … everyone using EKS Anywhere, typically willing to provide feedback on EKS Anywhere by proposing features and/or filing issues.
  • Contributors … everyone contributing code, documentation, examples, testing infra, and participating in feature proposals as well as design discussions.
  • Maintainers … are responsible for engaging with and assisting contributors to iterate on their contributions until they reach acceptable quality. Maintainers can decide whether the contributions can be accepted into the project or rejected.

Communication

The primary mechanism for communication will be via the #eks channel on the Kubernetes Slack community. All features and bug fixes will be tracked as issues in GitHub. All decisions will be documented in GitHub issues.

In the future, we may consider using a public mailing list, which can be better archived.

Release Management

The release process will be governed by AWS and will coincide with the release of EKS.

Roadmap Planning

Maintainers will share roadmap and release versions as milestones in GitHub.