Optional Configuration

Optional Config references for EKS Anywhere clusters such as etcd, OS, CNI, IRSA, proxy, and registry mirror

The configuration pages below describe optional features that you can add to your EKS Anywhere provider’s clusterspec file. See each provider’s installation section for details on which optional features are supported.

1 - etcd

EKS Anywhere cluster yaml etcd specification reference

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

There are two types of etcd topologies for configuring a Kubernetes cluster:

  • Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
  • Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components

The unstacked etcd topology is recommended for an HA cluster for the following reasons:

  • External etcd topology decouples the control plane components and etcd member. For example, if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
  • etcd is resource intensive, so it is safer to give it dedicated machines, since it may need more disk space or higher bandwidth than other components. A separate etcd cluster therefore makes for a more resilient HA setup.

EKS Anywhere supports both topologies. In order to configure a cluster with the unstacked/external etcd topology, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   clusterNetwork:
      pods:
         cidrBlocks:
            - 192.168.0.0/16
      services:
         cidrBlocks:
            - 10.96.0.0/12
      cniConfig:
         cilium: {}
   controlPlaneConfiguration:
      count: 1
      endpoint:
         host: ""
      machineGroupRef:
         kind: VSphereMachineConfig
         name: my-cluster-name-cp
   datacenterRef:
      kind: VSphereDatacenterConfig
      name: my-cluster-name
   # etcd configuration
   externalEtcdConfiguration:
      count: 3
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name-etcd
   kubernetesVersion: "1.31"
   workerNodeGroupConfigurations:
      - count: 1
        machineGroupRef:
           kind: VSphereMachineConfig
           name: my-cluster-name
        name: md-0

externalEtcdConfiguration (under Cluster)

External etcd configuration for your Kubernetes cluster.

count (required)

This determines the number of etcd members in the cluster. The recommended number is 3.

machineGroupRef (required)

Refers to the Kubernetes object with provider-specific configuration for your nodes. The sketch below shows what such an object can look like.
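
For reference, the machine group referenced by the etcd template above could be a VSphereMachineConfig like the following sketch. The field values are illustrative assumptions, not recommendations; see the vSphere configuration reference for the full set of required fields.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
   name: my-cluster-name-etcd
spec:
   datastore: "/Datacenter/datastore/MyDatastore"        # illustrative vSphere inventory paths
   resourcePool: "/Datacenter/host/Cluster/Resources"
   diskGiB: 25
   memoryMiB: 8192
   numCPUs: 2
   osFamily: bottlerocket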

2 - Encrypting Confidential Data at Rest

EKS Anywhere cluster specification for encryption of etcd data at-rest

You can configure EKS Anywhere clusters to encrypt confidential API resource data, such as secrets, at-rest in etcd using a KMS encryption provider. EKS Anywhere supports a hybrid model for configuring etcd encryption where cluster admins are responsible for deploying and maintaining the KMS provider on the cluster and EKS Anywhere handles configuring kube-apiserver with the KMS properties.

Because of this model, etcd encryption can only be enabled on cluster upgrades after the KMS provider has been deployed on the cluster.

Before you begin

Before enabling etcd encryption, make sure you have deployed and configured a KMS provider on the cluster (for example, the AWS Encryption Provider DaemonSet shown below).

Example etcd encryption configuration

The following cluster spec enables etcd encryption configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  ...
  etcdEncryption:
  - providers:
    - kms:
        cachesize: 1000
        name: example-kms-config
        socketListenAddress: unix:///var/run/kmsplugin/socket.sock
        timeout: 3s
    resources:
    - secrets

Description of etcd encryption fields

etcdEncryption

Key used to specify etcd encryption configuration for a cluster. This field is only supported on cluster upgrades.

  • providers

    Key used to specify which encryption provider to use. Currently, only one provider can be configured.

    • kms

      Key used to configure KMS encryption provider.

      • name

        Key used to set the name of the KMS plugin. This cannot be changed once set.

      • socketListenAddress

        Key used to specify the listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket, as in the example above.

      • cachesize

        Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap. If cachesize isn’t specified, a default of 1000 is used.

      • timeout

        How long should kube-apiserver wait for kms-plugin to respond before returning an error. If a timeout isn’t specified, a default timeout of 3s is used.

  • resources

    Key used to specify a list of resources that should be encrypted using the corresponding encryption provider. These can be native Kubernetes resources such as secrets and configmaps or custom resource definitions such as clusters.anywhere.eks.amazonaws.com.

Example AWS Encryption Provider DaemonSet

Here’s a sample AWS encryption provider daemonset configuration.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: aws-encryption-provider
  name: aws-encryption-provider
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: aws-encryption-provider
  template:
    metadata:
      labels:
        app: aws-encryption-provider
    spec:
      containers:
      - image: <AWS_ENCRYPTION_PROVIDER_IMAGE>    # Specify the AWS KMS encryption provider image 
        name: aws-encryption-provider
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        command:
        - /aws-encryption-provider
        - --key=<KEY_ARN>                         # Specify the arn of KMS key to be used for encryption/decryption
        - --region=<AWS_REGION>                   # Specify the region in which the KMS key exists
        - --listen=<KMS_SOCKET_LISTEN_ADDRESS>    # Specify a socket listen address for the KMS provider. Example: /var/run/kmsplugin/socket.sock
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        volumeMounts:
          - mountPath: /var/run/kmsplugin
            name: var-run-kmsplugin
          - mountPath: /root/.aws
            name: aws-credentials
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/control-plane"
        effect: "NoSchedule"
      volumes:
      - hostPath:
          path: /var/run/kmsplugin
          type: DirectoryOrCreate
        name: var-run-kmsplugin
      - hostPath:
          path: /etc/kubernetes/aws
          type: DirectoryOrCreate
        name: aws-credentials

3 - Operating system

EKS Anywhere cluster yaml specification for host OS configuration

Host OS Configuration

You can configure certain host OS settings through EKS Anywhere.

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

The following cluster spec shows an example of how to configure host OS settings:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig        # Replace "VSphereMachineConfig" with "TinkerbellMachineConfig" for Tinkerbell clusters
metadata:
  name: machine-config
spec:
  ...
  hostOSConfiguration:
    ntpConfiguration:
      servers:
        - time-a.ntp.local
        - time-b.ntp.local
    certBundles:
    - name: "bundle_1"
      data: |
        -----BEGIN CERTIFICATE-----
        MIIF1DCCA...
        ...
        es6RXmsCj...
        -----END CERTIFICATE-----

        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----        
    bottlerocketConfiguration:
      kubernetes:
        allowedUnsafeSysctls:
          - "net.core.somaxconn"
          - "net.ipv4.ip_local_port_range"
        clusterDNSIPs:
          - 10.96.0.10
        maxPods: 100
      kernel:
        sysctlSettings:
          net.core.wmem_max: "8388608"
          net.core.rmem_max: "8388608"
          ...
      boot:
        bootKernelParameters:
          slub_debug:
          - "options,slabs"
          ...

Host OS Configuration Spec Details

hostOSConfiguration

Top level object used for host OS configurations.

  • ntpConfiguration

    Key used for configuring NTP servers on your EKS Anywhere cluster nodes.

    • servers
      Servers is a list of NTP servers that should be configured on EKS Anywhere cluster nodes.
  • certBundles

    Key used for configuring custom trusted CA certs on your EKS Anywhere cluster nodes. Multiple cert bundles can be configured.

    • name

    Name of the cert bundle that should be configured on EKS Anywhere cluster nodes. This must be a unique name for each entry.

    • data

    Data of the cert bundle that should be configured on EKS Anywhere cluster nodes. This takes in a PEM formatted cert bundle and can contain more than one CA cert per entry.


  • bottlerocketConfiguration

    Key used for configuring Bottlerocket-specific settings on EKS Anywhere cluster nodes. These settings are only valid for Bottlerocket.

    • kubernetes - DEPRECATED

      Key used for configuring Bottlerocket Kubernetes settings.

      • allowedUnsafeSysctls

        List of unsafe sysctls that should be enabled on the node.

      • clusterDNSIPs

        List of IPs of DNS service(s) running in the kubernetes cluster.

      • maxPods

        Maximum number of pods that can be scheduled on each node.

    • kernel

      Key used for configuring Bottlerocket Kernel settings.

      • sysctlSettings
        Map of kernel sysctl settings that should be enabled on the node.
    • boot

      Key used for configuring Bottlerocket Boot settings.

      • bootKernelParameters
        Map of Boot Kernel parameters Bottlerocket should configure.

4 - Container Networking Interface

EKS Anywhere cluster yaml cni plugin specification reference

Specifying CNI Plugin in EKS Anywhere cluster spec

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

EKS Anywhere currently supports two CNI plugins: Cilium and Kindnetd. Only one of them can be selected for a cluster, and the plugin cannot be changed once the cluster is created. Up until the 0.7.x releases, the plugin had to be specified using the cni field on the cluster spec. Starting with release 0.8, the plugin should be specified using the new cniConfig field as follows:

  • For selecting Cilium as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          cilium: {}
    

    EKS Anywhere selects this as the default plugin when generating a cluster config.

  • Or for selecting Kindnetd as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          kindnetd: {}
    

NOTE: EKS Anywhere allows specifying only 1 plugin for a cluster and does not allow switching the plugins after the cluster is created.

Policy Configuration options for Cilium plugin

Cilium accepts a policy enforcement mode from the user to determine the allowed traffic between pods. The allowed values for this mode are default, always, and never. Please refer to the official Cilium documentation for more details on how each mode affects communication within the cluster, and choose a mode accordingly. You can leave this field unset, in which case Cilium launches with the default mode. Starting with release 0.8, Cilium’s policy enforcement mode can be set through the cluster spec as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        policyEnforcementMode: "always"

Please note that if the always mode is selected, all communication between pods is blocked unless NetworkPolicy objects allowing communication are created. To ensure that the cluster gets created successfully, EKS Anywhere creates the required NetworkPolicy objects for all of its core components, but it is up to the user to create the NetworkPolicy objects needed for user workloads once the cluster is created. A minimal example for a workload namespace is shown below.
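
For example, a minimal policy for a hypothetical workload namespace (my-app is an assumed name) that lets pods in the namespace talk to each other could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: my-app              # hypothetical workload namespace
spec:
  podSelector: {}                # select all pods in the namespace
  ingress:
  - from:
    - podSelector: {}            # allow ingress only from pods in this namespace
  policyTypes:
  - Ingress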

Network policies created by EKS Anywhere for “always” mode

As mentioned above, if Cilium is configured with policyEnforcementMode set to always, EKS Anywhere creates NetworkPolicy objects to enable communication between its core components. EKS Anywhere will create NetworkPolicy resources in the following namespaces allowing all ingress/egress traffic by default:

  • kube-system
  • eksa-system
  • All core Cluster API namespaces:
    • capi-system
    • capi-kubeadm-bootstrap-system
    • capi-kubeadm-control-plane-system
    • etcdadm-bootstrap-provider-system
    • etcdadm-controller-system
    • cert-manager
  • Infrastructure provider’s namespace (for instance, capd-system OR capv-system)
  • If Gitops is enabled, then the gitops namespace (flux-system by default)

This is the NetworkPolicy that will be created in these namespaces for the cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-egress
  namespace: test
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress

Switching the Cilium policy enforcement mode

The policy enforcement mode for Cilium can be changed as part of a cluster upgrade through the CLI upgrade command.

  1. Switching to always mode: When switching from default/never to always mode, EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above). This will ensure that the cluster gets upgraded successfully. But it is up to the user to create the NetworkPolicy objects required for the user workloads.

  2. Switching from always mode: When switching from always to default mode, EKS Anywhere will not delete any of the existing NetworkPolicy objects, including the ones required for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.
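
If you later need to audit or remove the allow-all policies EKS Anywhere created, standard kubectl commands are enough. The sketch below uses the policy name shown above; the namespace in the delete command is just one of the namespaces listed earlier:

# List the allow-all policies across namespaces
kubectl get networkpolicy -A | grep allow-all-ingress-egress

# Delete one that is no longer needed, e.g. in kube-system
kubectl delete networkpolicy allow-all-ingress-egress -n kube-system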

EgressMasqueradeInterfaces option for Cilium plugin

Cilium accepts the EgressMasqueradeInterfaces option from users to limit which interfaces masquerading is performed on. The allowed values for this mode are an interface name such as eth0 or an interface prefix such as eth+. Please refer to the official Cilium documentation for more details on how this option affects masquerading traffic.

By default, masquerading will be performed on all traffic leaving on a non-Cilium network device. This only has an effect on traffic egressing from a node to an external destination not part of the cluster and does not affect routing decisions.

This field can be set as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        egressMasqueradeInterfaces: "eth0"

RoutingMode option for Cilium plugin

By default, all traffic is sent by Cilium over Geneve tunneling on the network. The routingMode option allows users to switch to native routing instead.

This field can be set as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        routingMode: "direct"

Use a custom CNI

EKS Anywhere can be configured to skip EKS Anywhere’s default Cilium CNI upgrades via the skipUpgrade field. skipUpgrade can be true or false. When not set, it defaults to false.

When creating a new cluster with skipUpgrade enabled, EKS Anywhere Cilium will be installed as it is required to successfully provision an EKS Anywhere cluster. When the cluster successfully provisions, EKS Anywhere Cilium may be uninstalled and replaced with a different CNI. Subsequent upgrades to the cluster will not attempt to upgrade or re-install EKS Anywhere Cilium.

Once enabled, skipUpgrade cannot be disabled.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        skipUpgrade: true

The Cilium CLI can be used to uninstall EKS Anywhere Cilium via cilium uninstall. See the replacing Cilium task for a walkthrough on how to successfully replace EKS Anywhere Cilium.

Node IPs configuration option

Starting with release v0.10, the node-cidr-mask-size flag for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster spec. Because clusterNetwork.nodes is an optional field, it is not generated by the generate clusterconfig command; the nodes block needs to be added manually to the cluster spec under the clusterNetwork section:

  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: {}
    nodes:
      cidrMaskSize: 24

If the user does not specify the clusterNetwork.nodes field in the cluster yaml spec, the value for this flag defaults to 24 for IPv4. Please note that this mask size needs to be greater than the pods CIDR mask size. In the above spec, the pod CIDR mask size is 16 and the node CIDR mask size is 24, which gives the cluster 256 /24 node subnets. For example, node1 will get 192.168.0.0/24, node2 will get 192.168.1.0/24, node3 will get 192.168.2.0/24, and so on.

To support more than 256 nodes, the cluster CIDR block needs to be large enough, and the node CIDR mask size small enough, to provide that many subnets. For instance, to support 1024 nodes, a user can do either of the following:

  • Set the pods cidr blocks to 192.168.0.0/16 and node cidr mask size to 26
  • Set the pods cidr blocks to 192.168.0.0/15 and node cidr mask size to 25

Please note that the node-cidr-mask-size needs to be large enough to accommodate the number of pods you want to run on each node: a size of 24 gives enough IP addresses for about 250 pods per node, whereas a size of 26 only gives about 60. This is an immutable field, and the value can’t be updated once the cluster has been created.
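
As a quick sanity check of the sizing math above, the counts can be reproduced with shell arithmetic (pure arithmetic; no EKS Anywhere tooling involved):

# node subnets available = 2^(node mask size - pods CIDR mask size)
echo $(( 2 ** (26 - 16) ))   # => 1024 node subnets for a /16 pods CIDR with a /26 node mask

# addresses per node subnet = 2^(32 - node mask size)
echo $(( 2 ** (32 - 26) ))   # => 64 addresses per /26 node subnet (roughly 60 usable for pods)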

5 - IAM Roles for Service Accounts configuration

EKS Anywhere cluster spec for IAM Roles for Service Accounts (IRSA)

IAM Role for Service Account on EKS Anywhere clusters with self-hosted signing keys

IAM Roles for Service Accounts (IRSA) enables applications running in clusters to authenticate with AWS services using IAM roles. The current solution for leveraging this in EKS Anywhere involves creating your own OIDC provider for the cluster and hosting your cluster’s public service account signing key. The public keys, along with the OIDC discovery document, should be hosted somewhere that AWS STS can discover them.

The steps below are based on the guide for configuring IRSA for DIY Kubernetes, with modifications specific to EKS Anywhere’s cluster provisioning workflow. The main modification is the process of generating the keys.json document. As per the original guide, the user creates the service account signing keys first and then uses them to create the keys.json document prior to cluster creation. This order is reversed for EKS Anywhere clusters: you create the cluster first, then retrieve the service account signing key generated by the cluster and use it to create the keys.json document. The sections below show how to do this in detail.

Create an OIDC provider and make its discovery document publicly accessible

You must use a single OIDC provider per EKS Anywhere cluster; this is the best practice to prevent a token from one cluster being used with another cluster. These steps describe the process of using an S3 bucket to host the OIDC discovery.json and keys.json documents.

  1. Create an S3 bucket to host the public signing keys and OIDC discovery document for your cluster. Make a note of the $HOSTNAME and $ISSUER_HOSTPATH (see the example exports below).
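
    For example (the bucket name and region here are illustrative assumptions):

    export S3_BUCKET=my-irsa-oidc-bucket
    export HOSTNAME=$S3_BUCKET.s3.us-west-2.amazonaws.com
    export ISSUER_HOSTPATH=$HOSTNAME        # append a key prefix if you host the documents under one
    aws s3 mb s3://$S3_BUCKET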

  2. Create the OIDC discovery document as follows:

    cat <<EOF > discovery.json
    {
        "issuer": "https://$ISSUER_HOSTPATH",
        "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
        "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
        "response_types_supported": [
            "id_token"
        ],
        "subject_types_supported": [
            "public"
        ],
        "id_token_signing_alg_values_supported": [
            "RS256"
        ],
        "claims_supported": [
            "sub",
            "iss"
        ]
    }
    EOF
    
  3. Upload the discovery.json file to the S3 bucket:

    aws s3 cp ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
    
  4. Create an OIDC provider for your cluster. Set the Provider URL to https://$ISSUER_HOSTPATH and Audience to sts.amazonaws.com. This can be done in the IAM console or with the AWS CLI, as shown below.
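
    If you prefer the AWS CLI to the console, the provider can be created with the following command; $CA_THUMBPRINT is a placeholder for the SHA-1 thumbprint of the issuer’s root CA certificate:

    aws iam create-open-id-connect-provider \
        --url https://$ISSUER_HOSTPATH \
        --client-id-list sts.amazonaws.com \
        --thumbprint-list $CA_THUMBPRINT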

  5. Make a note of the Provider field of OIDC provider after it is created.

  6. Assign an IAM role to the OIDC provider.

    1. Navigate to the AWS IAM Console.

    2. Click on the OIDC provider.

    3. Click Assign role.

    4. Select Create a new role.

    5. Select Web identity as the trusted entity.

    6. In the Web identity section:

      • If your Identity provider is not auto selected, select it.
      • Select sts.amazonaws.com as the Audience.
    7. Click Next.

    8. Configure your desired Permissions policies.

    9. Below is a sample trust policy for the IAM role used by your pods. Replace ACCOUNT_ID, ISSUER_HOSTPATH, NAMESPACE and SERVICE_ACCOUNT. Example: scoped to a service account

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/ISSUER_HOSTPATH"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "ISSUER_HOSTPATH:sub": "system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT"
                      }
                  }
              }
          ]
      }
      
    10. Create the IAM Role and make a note of the Role name.

    11. After the cluster is created you can grant service accounts access to the role by modifying the trust relationship. See How to use trust policies with IAM Roles for more information on trust policies. Refer to Configure the trust relationship for the OIDC provider’s IAM Role for a working example.

Create (or upgrade) the EKS Anywhere cluster

When creating (or upgrading) the EKS Anywhere cluster, you need to configure the kube-apiserver’s service-account-issuer flag so it can issue and mount projected service account tokens in pods. For this, use the value obtained in the first section for $ISSUER_HOSTPATH as the service-account-issuer. Configure the kube-apiserver by setting this value through the EKS Anywhere cluster spec:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
    name: my-cluster-name
spec:
    podIamConfig:
        serviceAccountIssuer: https://$ISSUER_HOSTPATH

Set the remaining fields in cluster spec as required and create the cluster.

Generate keys.json and make it publicly accessible

  1. The cluster provisioning workflow generates a pair of service account signing keys. Retrieve the public signing key from the cluster and create a keys.json document with the content.

    git clone https://github.com/aws/amazon-eks-pod-identity-webhook
    cd amazon-eks-pod-identity-webhook
    kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
    go run ./hack/self-hosted/main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json
    
  2. Upload the keys.json document to the S3 bucket.

    aws s3 cp ./keys.json s3://$S3_BUCKET/keys.json
    
  3. Use a bucket policy to grant public read access to the discovery.json and keys.json documents:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::$S3_BUCKET/.well-known/openid-configuration",
        "arn:aws:s3:::$S3_BUCKET/keys.json"
      ]
    }
  ]
}

Deploy pod identity webhook

The Amazon Pod Identity Webhook configures pods with the necessary environment variables and tokens (via file mounts) to interact with AWS services. The webhook will configure any pod associated with a service account that has an eks.amazonaws.com/role-arn annotation.

  1. Clone amazon-eks-pod-identity-webhook.

  2. Set the $KUBECONFIG environment variable to the path of the EKS Anywhere cluster.

  3. Apply the manifests for the amazon-eks-pod-identity-webhook. The image used here will be pulled from docker.io. Optionally, the image can be imported into (or proxied through) your private registry; change the IMAGE argument here to your private registry if needed.

    make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest
    
  4. Create a service account with an eks.amazonaws.com/role-arn annotation set to the IAM Role created for the OIDC provider.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-serviceaccount
      namespace: default
      annotations:
        # set this with value of OIDC_IAM_ROLE
        eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/s3-reader"
    
        # optional: Defaults to "sts.amazonaws.com" if not set
        eks.amazonaws.com/audience: "sts.amazonaws.com"
    
        # optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
        #   to containers
        eks.amazonaws.com/sts-regional-endpoints: "true"
    
        # optional: Defaults to 86400 for expirationSeconds if not set
        #   Note: This value can be overwritten if specified in the pod
        #         annotation as shown in the next step.
        eks.amazonaws.com/token-expiration: "86400"
    
  5. Finally, apply the my-service-account.yaml file to create your service account.

    kubectl apply -f my-service-account.yaml
    
  6. You can validate IRSA by following IRSA setup and test, or with the minimal test pod sketched below. Ensure the awscli pod is deployed in the same namespace as the pod-identity-webhook ServiceAccount.
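
    As a rough sketch (names are illustrative, reusing my-serviceaccount from step 4), a test pod could look like the following; if IRSA is working, aws sts get-caller-identity run inside it returns the assumed role ARN:

    apiVersion: v1
    kind: Pod
    metadata:
      name: awscli
      namespace: default
    spec:
      serviceAccountName: my-serviceaccount   # the annotated service account from step 4
      containers:
      - name: awscli
        image: amazon/aws-cli:latest
        command: ["sleep", "infinity"]        # keep the pod running; overrides the image entrypoint

    kubectl apply -f awscli-pod.yaml
    kubectl exec awscli -- aws sts get-caller-identity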

Configure the trust relationship for the OIDC provider’s IAM Role

In order to grant certain service accounts access to the desired AWS resources, edit the trust relationship for the OIDC provider’s IAM Role (OIDC_IAM_ROLE) created in the first section, and add in the desired service accounts.

  1. Choose the role in the console to open it for editing.

  2. Choose the Trust relationships tab, and then choose Edit trust relationship.

  3. Find the line that looks similar to the following:

    "$ISSUER_HOSTPATH:aud": "sts.amazonaws.com"
    
  4. Add another condition after that line which looks like the following line. Replace KUBERNETES_SERVICE_ACCOUNT_NAMESPACE and KUBERNETES_SERVICE_ACCOUNT_NAME with the name of your Kubernetes service account and the Kubernetes namespace that the account exists in.

    "$ISSUER_HOSTPATH:sub": "system:serviceaccount:KUBERNETES_SERVICE_ACCOUNT_NAMESPACE:KUBERNETES_SERVICE_ACCOUNT_NAME"
    

    The allow list example below grants access to the my-serviceaccount service account in the default namespace and to all service accounts in the amazon-cloudwatch namespace for the us-west-2 region. Remember to replace Account_ID and S3_BUCKET with the required values.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::$Account_ID:oidc-provider/s3.us-west-2.amazonaws.com/$S3_BUCKET"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringLike": {
                        "s3.us-west-2.amazonaws.com/$S3_BUCKET:aud": "sts.amazonaws.com",
                        "s3.us-west-2.amazonaws.com/$S3_BUCKET:sub": [
                            "system:serviceaccount:default:my-serviceaccount",
                            "system:serviceaccount:amazon-cloudwatch:*"
                        ]
                    }
                }
            }
        ]
    }
    
  5. Refer to this doc for different ways of configuring one or multiple service accounts through the condition operators in the trust relationship.

  6. Choose Update Trust Policy to finish.

6 - IAM Authentication

EKS Anywhere cluster yaml specification AWS IAM Authenticator reference

AWS IAM Authenticator support (optional)

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

EKS Anywhere can create clusters that support AWS IAM Authenticator-based API server authentication. In order to add IAM Authenticator support, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # IAM Authenticator support
   identityProviderRefs:
      - kind: AWSIamConfig
        name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
   name: aws-iam-auth-config
spec:
    awsRegion: ""
    backendMode:
        - ""
    mapRoles:
        - roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
          username: myKubernetesUsername
          groups:
          - ""
    mapUsers:
        - userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
          username: myKubernetesUsername
          groups:
          - ""
    partition: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the AWSIamConfig object with the configuration below.

awsRegion (required)

  • Description: awsRegion can be any region in the aws partition that the IAM roles exist in.
  • Type: string

backendMode (required)

  • Description: backendMode configures the IAM Authenticator server’s backend mode (i.e. where to source mappings from). EKS Anywhere supports the EKSConfigMap and CRD modes of AWS IAM Authenticator; for more details refer to backendMode
  • Type: string

mapRoles, mapUsers (optional)

  • Description: When using EKSConfigMap backendMode, we recommend providing either mapRoles or mapUsers to set the IAM role mappings at the time of creation. This input is added to an EKS-style ConfigMap. For more details refer to EKS IAM
  • Type: list object

    roleARN, userARN (required)

    • Description: IAM ARN to authenticate to the cluster. roleARN specifies an IAM role and userARN specifies an IAM user.
    • Type: string

    username (required)

    • Description: The Kubernetes username the IAM ARN is mapped to in the cluster. The ARN gets mapped to the Kubernetes cluster permissions associated with the username.
    • Type: string

    groups

    • Description: List of Kubernetes user groups that the mapped IAM ARN is given permissions to.
    • Type: list string

partition

  • Description: This field is used to set the aws partition that the IAM roles are present in. Default value is aws.
  • Type: string
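
Once the cluster is up, a client kubeconfig can authenticate through aws-iam-authenticator using a mapped IAM identity. The snippet below is a rough sketch of the user entry in such a kubeconfig, assuming the cluster name is used as the cluster ID:

users:
- name: my-iam-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-cluster-name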

7 - OIDC

EKS Anywhere cluster yaml specification OIDC reference

OIDC support (optional)

EKS Anywhere can create clusters that support API server OIDC authentication.

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

In order to add OIDC support, you need to configure your cluster by updating the configuration file to include the details below. The OIDC configuration can be added at cluster creation time, or introduced via a cluster upgrade in VMware and CloudStack.

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # OIDC support
   identityProviderRefs:
      - kind: OIDCConfig
        name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
   name: my-cluster-name
spec:
    clientId: ""
    groupsClaim: ""
    groupsPrefix: ""
    issuerUrl: "https://x"
    requiredClaims:
      - claim: ""
        value: ""
    usernameClaim: ""
    usernamePrefix: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the OIDCConfig object with the configuration below.

clientId (required)

  • Description: ClientId defines the client ID for the OpenID Connect client
  • Type: string

groupsClaim (optional)

  • Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
  • Type: string

groupsPrefix (optional)

  • Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
  • Type: string

issuerUrl (required)

  • Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
  • Type: string

requiredClaims (optional)

List of RequiredClaim objects listed below. Only one is supported at this time.

requiredClaims[0] (optional)

  • Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
    • claim
      • type: string
    • value
      • type: string
  • Type: object

usernameClaim (optional)

  • Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
  • Type: string

usernamePrefix (optional)

  • Description: UsernamePrefix defines a string to be prefixed to all usernames. If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value ‘-’.
  • Type: string
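
Mapped OIDC identities still need RBAC permissions in the cluster. As an illustration, assuming groupsPrefix is set to "oidc:" and your identity provider emits a cluster-admins group in the groups claim (both assumed values), a binding could look like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: "oidc:cluster-admins"     # groupsPrefix + group name from the groups claim
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io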

8 - Proxy

EKS Anywhere cluster yaml specification proxy configuration reference

Proxy support (optional)

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   proxyConfiguration:
      httpProxy: http-proxy-ip:port
      httpsProxy: https-proxy-ip:port
      noProxy:
      - list of no proxy endpoints

Configuring Docker daemon

Given the above configuration file, EKS Anywhere will route its traffic through the proxy. However, to successfully use EKS Anywhere you will also need to ensure your Docker daemon is configured to use the proxy.

This generally means updating your daemon to launch with the HTTPS_PROXY, HTTP_PROXY, and NO_PROXY environment variables.

For an example of how to do this with systemd, please see Docker’s documentation here.
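
As an illustration of Docker’s documented systemd approach (paths and addresses here are examples, not EKS Anywhere requirements), a drop-in unit might look like:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.0.1:3218"
Environment="HTTPS_PROXY=http://192.168.0.1:3218"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"

Reload systemd and restart Docker for the change to take effect:

sudo systemctl daemon-reload
sudo systemctl restart docker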

Configuring EKS Anywhere proxy without config file

For commands using a cluster config file, EKS Anywhere will derive its proxy config from the cluster configuration file.

However, for commands that do not utilize a cluster config file, you can set the following environment variables:

export HTTPS_PROXY=https-proxy-ip:port
export HTTP_PROXY=http-proxy-ip:port
export NO_PROXY=no-proxy-domain.com,another-domain.com,localhost

Proxy Configuration Spec Details

proxyConfiguration (required)

  • Description: top level key; required to use proxy.
  • Type: object

httpProxy (required)

  • Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpProxy: 192.168.0.1:3218

httpsProxy (required)

  • Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpsProxy: 192.168.0.1:3218

noProxy (optional)

  • Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
  • Type: list of strings
  • Example
  noProxy:
   - localhost
   - 192.168.0.1
   - 192.168.0.0/16
   - .example.com

9 - KubeletConfiguration

EKS Anywhere cluster yaml specification for Kubelet Configuration

Kubelet Configuration Support

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow across Ubuntu 20.04, Ubuntu 22.04, Bottlerocket, RHEL 8.x, and RHEL 9.x is not rendered in this view; see each provider’s installation section for support status.)

Starting with v0.20.0, you can specify Kubelet settings for control plane and/or worker nodes through EKS Anywhere using kubeletConfiguration. The following cluster spec shows an example of how to configure kubeletConfiguration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   controlPlaneConfiguration:        # Kubelet configuration for control plane nodes
      kubeletConfiguration:
         kind: KubeletConfiguration
         maxPods: 80
   ...
   workerNodeGroupConfigurations:    # Kubelet configuration for worker nodes
      - count: 1
        kubeletConfiguration:
           kind: KubeletConfiguration
           maxPods: 85
   ...

kubeletConfiguration should contain the configuration to be used by the kubelet while creating or updating a node. It must contain the kind key with the value KubeletConfiguration for EKS Anywhere to process the settings. Only use valid settings, as a misconfiguration may cause unexpected kubelet behavior. EKS Anywhere performs a limited set of data type validations on the Kubelet Configuration, but it is ultimately the user’s responsibility to make sure the configuration is valid.

More details on the Kubelet Configuration object and its supported fields can be found here. EKS Anywhere only supports the latest Kubernetes version’s KubeletConfiguration.

Bottlerocket Support

The only provider that supports kubeletConfiguration with Bottlerocket is vSphere. The list of settings that can be configured for Bottlerocket can be found here; that page also describes other settings such as Kubelet Options. Settings supported by Bottlerocket carry information specific to the Kubelet Configuration keyword. Refer to that documentation for the supported fields and their data types, as they may vary from the upstream object’s data types.

Note that this is the preferred and supported way to specify Kubelet settings from release v0.20.0 onwards. Previously the hostOSConfiguration.bottlerocketConfiguration.kubernetes field was used to specify Bottlerocket Kubernetes settings; that field has been deprecated as of v0.20.0.

Here’s the list of Kubelet Configuration fields supported by Bottlerocket:

  • allowedUnsafeSysctls
  • clusterDNSIPs
  • clusterDomain
  • containerLogMaxFiles
  • containerLogMaxSize
  • cpuCFSQuota
  • cpuManagerPolicy
  • cpuManagerPolicyOptions
  • cpuManagerReconcilePeriod
  • eventBurst
  • eventRecordQPS
  • evictionHard
  • evictionMaxPodGracePeriod
  • evictionSoft
  • evictionSoftGracePeriod
  • imageGCHighThresholdPercent
  • imageGCLowThresholdPercent
  • kubeAPIBurst
  • kubeAPIQPS
  • kubeReserved
  • maxPods
  • memoryManagerPolicy
  • podPidsLimit
  • providerID
  • registryBurst
  • registryPullQPS
  • shutdownGracePeriod
  • shutdownGracePeriodCriticalPods
  • systemReserved
  • topologyManagerPolicy
  • topologyManagerScope

Special fields

Duplicate fields

The clusterNetwork.dns.resolvConf field is the path to a file containing a custom DNS resolver configuration. This can now also be provided in the Kubelet Configuration using the resolvConf field. Note that if both fields are set, the Kubelet Configuration’s field takes precedence and overrides the value from clusterNetwork.dns.resolvConf.
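
For instance, a control plane Kubelet Configuration pointing at a custom resolver file (the path is a hypothetical example) would look like:

   controlPlaneConfiguration:
      kubeletConfiguration:
         kind: KubeletConfiguration
         resolvConf: "/etc/custom-resolv.conf"   # takes precedence over clusterNetwork.dns.resolvConf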

Blocked fields

Fields like providerID or cloudProvider are set by EKS Anywhere and can’t be set by users. This is to maintain seamless support for all providers.

Node Rollouts

Adding, updating, or deleting the Kubelet Configuration will cause node rollouts for the nodes the configuration affects. This is especially important to consider with providers like Bare Metal, since the node rollouts caused by Kubelet config changes could require extra hardware to be provisioned, depending on your rollout strategy.

10 - MachineHealthCheck

EKS Anywhere cluster yaml specification for MachineHealthCheck configuration

MachineHealthCheck Support

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

You can configure EKS Anywhere to specify timeouts and maxUnhealthy values for machine health checks.

A MachineHealthCheck (MHC) is a resource in Cluster API which allows users to define conditions under which Machines within a Cluster should be considered unhealthy. A MachineHealthCheck is defined on a management cluster and scoped to a particular workload cluster.

Note: Even though the MachineHealthCheck configuration in the EKS-A spec is optional, MachineHealthChecks are still installed for all clusters using the default values mentioned below.

EKS Anywhere allows users to have granular control over MachineHealthChecks in their cluster configuration, with default values (derived from Cluster API) being applied if the MHC is not configured in the spec. The top-level machineHealthCheck field governs the global MachineHealthCheck settings for all Machines (control-plane and worker). These global settings can be overridden through the nested machineHealthCheck field in the control plane configuration and each worker node configuration. If the nested MHC fields are not configured, then the top-level settings are applied to the respective Machines.

The following cluster spec shows an example of how to configure health check timeouts and maxUnhealthy:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   machineHealthCheck:              # Top-level MachineHealthCheck configuration
      maxUnhealthy: "60%"
      nodeStartupTimeout: "10m0s"
      unhealthyMachineTimeout: "5m0s"
   ...
   controlPlaneConfiguration:       # MachineHealthCheck configuration for Control plane
      machineHealthCheck:
         maxUnhealthy: 100%
         nodeStartupTimeout: "15m0s"
         unhealthyMachineTimeout: 10m
   ...
   workerNodeGroupConfigurations:
      - count: 1
        name: md-0
        machineHealthCheck:         # MachineHealthCheck configuration for Worker Node Group 0
           maxUnhealthy: 100%
           nodeStartupTimeout: "10m0s"
           unhealthyMachineTimeout: 20m
      - count: 1
        name: md-1
        machineHealthCheck:         # MachineHealthCheck configuration for Worker Node Group 1
           maxUnhealthy: 100%
           nodeStartupTimeout: "10m0s"
           unhealthyMachineTimeout: 20m
   ...

MachineHealthCheck Spec Details

machineHealthCheck (optional)

  • Description: top-level key; required to configure global MachineHealthCheck timeouts and maxUnhealthy.
  • Type: object

machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: 100% for control plane machines, 40% for worker nodes (Cluster API defaults).
  • Type: integer (count) or string (percentage)

machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a Node to join the cluster, before considering a Machine unhealthy.
  • Default: 20m0s for Tinkerbell provider, 10m0s for all other providers.
  • Minimum Value (If configured): 30s
  • Type: string

machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy Node conditions (e.g., Ready=False, Ready=Unknown) should be matched for, before considering a Machine unhealthy.
  • Default: 5m0s
  • Type: string

controlPlaneConfiguration.machineHealthCheck (optional)

  • Description: Control plane level configuration for MachineHealthCheck timeouts and maxUnhealthy values.
  • Type: object

controlPlaneConfiguration.machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy control plane Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: Top-level MHC maxUnhealthy if set or 100% otherwise.
  • Type: integer (count) or string (percentage)

controlPlaneConfiguration.machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a control plane Node to join the cluster, before considering the Machine unhealthy.
  • Default: Top-level MHC nodeStartupTimeout if set or 20m0s for Tinkerbell provider, 10m0s for all other providers otherwise.
  • Minimum Value (if configured): 30s
  • Type: string

controlPlaneConfiguration.machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy conditions (e.g., Ready=False, Ready=Unknown) should be matched for a control plane Node, before considering the Machine unhealthy.
  • Default: Top-level MHC unhealthyMachineTimeout if set or 5m0s otherwise.
  • Type: string

workerNodeGroupConfigurations.machineHealthCheck (optional)

  • Description: Worker node level configuration for MachineHealthCheck timeouts and maxUnhealthy values.
  • Type: object

workerNodeGroupConfigurations.machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy worker Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: Top-level MHC maxUnhealthy if set or 40% otherwise.
  • Type: integer (count) or string (percentage)

workerNodeGroupConfigurations.machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a worker Node to join the cluster, before considering the Machine unhealthy.
  • Default: Top-level MHC nodeStartupTimeout if set or 20m0s for Tinkerbell provider, 10m0s for all other providers otherwise.
  • Minimum Value (if configured): 30s
  • Type: string

workerNodeGroupConfigurations.machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy conditions (e.g., Ready=False, Ready=Unknown) should be matched for a worker Node, before considering the Machine unhealthy.
  • Default: Top-level MHC unhealthyMachineTimeout if set or 5m0s otherwise.
  • Type: string

11 - Registry Mirror

EKS Anywhere cluster specification for registry mirror configuration

Registry Mirror Support (optional)

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

You can configure EKS Anywhere to use a local registry mirror for its dependencies. When a registry mirror is configured in the EKS Anywhere cluster specification, EKS Anywhere will use it instead of defaulting to Amazon ECR for its dependencies. For details on how to configure your local registry mirror for EKS Anywhere, see the Configure local registry mirror section.

See the airgapped documentation page for instructions on downloading and importing EKS Anywhere dependencies to a local registry mirror.

Registry Mirror Cluster Spec

The following cluster spec shows an example of how to configure registry mirror:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  ...
  registryMirrorConfiguration:
    endpoint: <private registry IP or hostname>
    port: <private registry port>
    ociNamespaces:
      - registry: <upstream registry IP or hostname>
        namespace: <namespace in private registry>
      ...
    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----        

Registry Mirror Cluster Spec Details

registryMirrorConfiguration (optional)

  • Description: top level key; required to use a private registry.
  • Type: object

endpoint (required)

  • Description: IP address or hostname of the private registry for pulling images
  • Type: string
  • Example: endpoint: 192.168.0.1

port (optional)

  • Description: port for the private registry. This is an optional field. If a port is not specified, the default HTTPS port 443 is used
  • Type: string
  • Example: port: 443

ociNamespaces (optional)

  • Description: when you need to mirror multiple registries, you can map each upstream registry to a “namespace” in the mirror. The namespace is appended to the endpoint (<endpoint>/<namespace>) to set up the mirror for the specified registry. Note that when using ociNamespaces, you need to specify all the registries that need to be mirrored, including an entry for the public.ecr.aws registry to pull EKS Anywhere images from.

  • Type: array

  • Example:

    ociNamespaces:
      - registry: "public.ecr.aws"
        namespace: ""
      - registry: "783794618700.dkr.ecr.us-west-2.amazonaws.com"
        namespace: "curated-packages"
    

caCertContent (optional)

  • Description: Certificate Authority (CA) certificate for the private registry. When using self-signed certificates it is necessary to pass this parameter in the cluster spec. This must be the individual public CA cert used to sign the registry certificate. This will be added to the cluster nodes so that they are able to pull images from the private registry.

    It is also possible to configure caCertContent by exporting an environment variable:
    export EKSA_REGISTRY_MIRROR_CA="/path/to/certificate-file"

  • Type: string

  • Example:

    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----  
    

authenticate (optional)

  • Description: optional field to authenticate with a private registry. When using private registries that require authentication, it is necessary to set this parameter to true in the cluster spec.
  • Type: boolean

When this value is set to true, the following environment variables need to be set:

export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

insecureSkipVerify (optional)

  • Description: optional field to skip the registry certificate verification. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. Currently only supported for Ubuntu and RHEL OS.
  • Type: boolean

Configure local registry mirror

Project configuration

The following projects must be created in your registry before importing the EKS Anywhere images:

bottlerocket
eks-anywhere
eks-distro
isovalent
cilium-chart

For example, if a registry is available at private-registry.local, then the following projects must be created.

https://private-registry.local/bottlerocket
https://private-registry.local/eks-anywhere
https://private-registry.local/eks-distro
https://private-registry.local/isovalent
https://private-registry.local/cilium-chart

Admin machine configuration

You must configure the Admin machine with the information it needs to communicate with your registry.

Add the registry’s CA certificate to the list of CA certificates on the Admin machine if your registry uses self-signed certificates.
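
For example, on an Ubuntu-based Admin machine this is typically done with update-ca-certificates (the certificate file name below is illustrative):

sudo cp registry-ca.crt /usr/local/share/ca-certificates/registry-ca.crt
sudo update-ca-certificates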

If your registry uses authentication, the following environment variables must be set on the Admin machine before running the eksctl anywhere import images command.

export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

12 - Autoscaling configuration

EKS Anywhere cluster yaml autoscaling specification reference

EKS Anywhere supports autoscaling worker node groups using the Kubernetes Cluster Autoscaler. The Kubernetes Cluster Autoscaler Curated Package is an image and Helm chart installed via the Curated Packages Controller.

The helm chart utilizes the Cluster Autoscaler clusterapi mode to scale resources.

Configure an EKS Anywhere worker node group to be picked up by a Cluster Autoscaler deployment by adding an autoscalingConfiguration block to the workerNodeGroupConfiguration:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      workerNodeGroupConfigurations:
        - name: md-0
          autoscalingConfiguration:
            minCount: 1
            maxCount: 5
          machineGroupRef:
            kind: VSphereMachineConfig
            name: worker-machine-a
        - name: md-1
          autoscalingConfiguration:
            minCount: 1
            maxCount: 3
          machineGroupRef:
            kind: VSphereMachineConfig
            name: worker-machine-b

Note that if count is specified for the worker node group, its value will be ignored during cluster creation as well as cluster upgrade. If only one of minCount or maxCount is specified, then the other will have a default value of 0 and count will have a default value of minCount.

EKS Anywhere automatically applies the following annotations to your MachineDeployment objects for worker node groups with autoscaling enabled. The Cluster Autoscaler component uses these annotations to identify which node groups to autoscale. If a node group is not autoscaling as expected, check for these annotations on the MachineDeployment to troubleshoot.

cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: <minCount>
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: <maxCount>
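
To inspect the annotations, you can query the MachineDeployment for a node group with kubectl on the management cluster; the object name below assumes the cluster and node group names from the example above, and the eksa-system namespace is where EKS Anywhere typically keeps these objects:

kubectl get machinedeployment my-cluster-name-md-0 -n eksa-system -o jsonpath='{.metadata.annotations}'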

13 - Skipping validations configuration

EKS Anywhere cluster annotations to skip validations

EKS Anywhere runs a set of validations while performing cluster operations. You can choose to skip some of these validations.

One such validation checks whether the cluster’s control plane IP is already in use.

  • If a cluster is being created using the EKS Anywhere CLI, this validation can be skipped by using the --skip-ip-check flag or by setting the annotation below on the Cluster object.
  • If a workload cluster is being created using tools like kubectl or GitOps, the validation can only be skipped by adding the annotation below.

Configure an EKS Anywhere cluster to skip the validation for the uniqueness of the control plane IP by setting the anywhere.eks.amazonaws.com/skip-ip-check annotation to true, as shown below.

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      annotations:
        anywhere.eks.amazonaws.com/skip-ip-check: "true"
      name: my-cluster-name
    spec:
    .
    .
    .

Note that this annotation is also set automatically if you use the --skip-ip-check flag while running the EKS Anywhere create cluster command, shown below for reference.
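
For reference, the equivalent CLI invocation looks like this (the file name is illustrative):

    eksctl anywhere create cluster -f my-cluster.yaml --skip-ip-check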

14 - GitOps

Configuration reference for GitOps cluster management.

GitOps Support (Optional)

Provider support details

(Support matrix for vSphere, Bare Metal, Nutanix, CloudStack, and Snow is not rendered in this view; see each provider’s installation section for support status.)

EKS Anywhere can create clusters that support GitOps configuration management with Flux. In order to add GitOps support, you need to configure your cluster by specifying the gitOpsRef field in the configuration file when creating or upgrading the cluster. Two types of configurations are currently supported: FluxConfig and GitOpsConfig.

Flux Configuration

The flux configuration spec has three optional fields, regardless of the chosen git provider.

Flux Configuration Spec Details

systemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster. Defaults to flux-system
  • Type: string

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files. Defaults to the cluster name
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration. Defaults to main
  • Type: string

EKS Anywhere currently supports two git providers for FluxConfig: Github and Git.

Github provider

Please note that for the Flux config to work successfully with the Github provider, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT.
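
For example, you might export the token before running the create or upgrade command (the value shown is a placeholder):

export EKSA_GITHUB_TOKEN=<your-github-pat>

This is a generic template with detailed descriptions below for reference: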

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-github-flux-provider
    kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-github-flux-provider
  namespace: default
spec:
  systemNamespace: "my-alternative-flux-system-namespace"
  clusterConfigPath: "path-to-my-clusters-config"
  branch: "main"
  github:
    personal: true
    repository: myClusterGitopsRepo
    owner: myGithubUsername

---

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organization repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com
  • Default: true
  • Type: boolean

Git provider

Before you create a cluster using the Git provider, you will need to set and export the EKSA_GIT_KNOWN_HOSTS and EKSA_GIT_PRIVATE_KEY environment variables.

EKSA_GIT_KNOWN_HOSTS

EKS Anywhere uses the provided known hosts file to verify the identity of the git provider when connecting to it with SSH. The EKSA_GIT_KNOWN_HOSTS environment variable should be a path to a known hosts file containing entries for the git server to which you’ll be connecting.

For example, if you wanted to provide a known hosts file which allows you to connect to and verify the identity of github.com using a private key based on the key algorithm ecdsa, you can use the OpenSSH utility ssh-keyscan to obtain the known host entry used by github.com for the ecdsa key type. EKS Anywhere supports ecdsa, rsa, and ed25519 key types, which can be specified via the sshKeyAlgorithm field of the git provider config.

ssh-keyscan -t ecdsa github.com >> my_eksa_known_hosts

This will produce a file which contains known-hosts entries for the ecdsa key type supported by github.com, mapping the host to the key-type and public key.

github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=

EKS Anywhere will use the content of the file at the path EKSA_GIT_KNOWN_HOSTS to verify the identity of the remote git server, and the provided known hosts file must contain an entry for the remote host and key type.

EKSA_GIT_PRIVATE_KEY

The EKSA_GIT_PRIVATE_KEY environment variable should be a path to the private key file associated with a valid SSH public key registered with your Git provider. This key must have permission to both read from and write to your repository. The key can use the key algorithms rsa, ecdsa, and ed25519.

This key file must have restricted file permissions, allowing only the owner to read and write, such as octal permissions 600.

If your private key file is passphrase protected, you must also set EKSA_GIT_SSH_KEY_PASSPHRASE with that value.
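
Putting these together, a typical setup before creating the cluster might look like the following sketch (the key path is illustrative; the key must already be registered with your git provider):

# Collect the host key entry for github.com
ssh-keyscan -t ecdsa github.com >> my_eksa_known_hosts
export EKSA_GIT_KNOWN_HOSTS=./my_eksa_known_hosts

# Restrict permissions on the private key and export its path
chmod 600 ~/.ssh/my_eksa_git_key
export EKSA_GIT_PRIVATE_KEY=~/.ssh/my_eksa_git_key

# Only needed if the key is passphrase protected
export EKSA_GIT_SSH_KEY_PASSPHRASE='<your-passphrase>'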

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-git-flux-provider
    kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-git-flux-provider
  namespace: default
spec:
  systemNamespace: "my-alternative-flux-system-namespace"
  clusterConfigPath: "path-to-my-clusters-config"
  branch: "main"
  git:
    repositoryUrl: ssh://git@github.com/myAccount/myClusterGitopsRepo.git
    sshKeyAlgorithm: ecdsa
---

git Configuration Spec Details

repositoryUrl (required)

  • Description: The URL of an existing repository where EKS Anywhere will store your cluster configuration and sync it to the cluster. For private repositories, the SSH URL will be of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git
  • Type: string
  • Value: A common repositoryUrl value can be of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git. This may differ from the default SSH URL given by your provider. Consider these differences between GitHub and CodeCommit URLs (see the example after this list):
    • The github.com user interface provides an SSH URL containing a : before the repository owner, rather than a /. Make sure to replace this : with a /, if present.
    • The CodeCommit SSH URL must include the SSH key ID, in the format ssh://<SSH-Key-ID>@git-codecommit.<region>.amazonaws.com/v1/repos/<repository>.
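
For example, converting the SSH URL shown by the github.com interface into the expected format is a one-character change (the owner and repository names are illustrative):

# URL as shown by the github.com UI
git@github.com:myAccount/myClusterGitopsRepo.git
# URL as expected by repositoryUrl
ssh://git@github.com/myAccount/myClusterGitopsRepo.git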

sshKeyAlgorithm (optional)

  • Description: The SSH key algorithm of the private key specified via EKSA_GIT_PRIVATE_KEY. Defaults to ecdsa
  • Type: string

Supported SSH key algorithm types are ecdsa, rsa, and ed25519.

Be sure that this SSH key algorithm matches the private key file provided via EKSA_GIT_PRIVATE_KEY and that the known hosts entry for the key type is present in EKSA_GIT_KNOWN_HOSTS.

GitOps Configuration

Please note that for the GitOps config to work successfully the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-gitops
    kind: GitOpsConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-gitops
  namespace: default
spec:
  flux:
    github:
      personal: true
      repository: myClusterGitopsRepo
      owner: myGithubUsername
      fluxSystemNamespace: ""
      clusterConfigPath: ""

GitOps Configuration Spec Details

flux (required)

  • Description: Our supported GitOps provider is flux. This is the only supported value.
  • Type: object

Flux Configuration Spec Details

github (required)

  • Description: github is the only currently supported git provider. This defines your github configuration to be used by EKS Anywhere and flux.
  • Type: object

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organization repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com
  • Default: true
  • Type: boolean

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files.
  • Default: clusters/$MANAGEMENT_CLUSTER_NAME
  • Type: string

fluxSystemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster.
  • Default: flux-system
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration.
  • Default: main
  • Type: string

15 - Package controller

EKS Anywhere cluster yaml specification for package controller configuration

Package Controller Configuration (optional)

You can specify EKS Anywhere package controller configurations. For more on the curated packages and the package controller, visit the package management documentation.

The following cluster spec shows an example of how to configure the package controller:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   packages:
      disable: false
      controller:
         resources:
            requests:
               cpu: 100m
               memory: 50Mi
            limits:
               cpu: 750m
               memory: 450Mi


Package Controller Configuration Spec Details

packages (optional)

  • Description: Top level key for package controller configuration.
  • Type: object

packages.disable (optional)

  • Description: Disable the package controller.
  • Type: bool
  • Example: disable: true

packages.controller (optional)

  • Description: Configuration for the package controller.
  • Type: object

packages.controller.resources (optional)

  • Description: Resources for the package controller.
  • Type: object

packages.controller.resources.limits (optional)

  • Description: Resource limits.
  • Type: object

packages.controller.resources.limits.cpu (optional)

  • Description: CPU limit.
  • Type: string

packages.controller.resources.limits.memory (optional)

  • Description: Memory limit.
  • Type: string

packages.controller.resources.requests (optional)

  • Description: Requested resources.
  • Type: object

packages.controller.resources.requests.cpu (optional)

  • Description: Requested cpu.
  • Type: string

packages.controller.resources.requests.memory (optional)

  • Description: Requested memory.
  • Type: string

packages.cronjob (optional)

  • Description: Configuration for the package controller cron job.
  • Type: object

packages.cronjob.disable (optional)

  • Description: Disable the cron job.
  • Type: bool
  • Example: disable: true

packages.cronjob.resources (optional)

  • Description: Resources for the cron job.
  • Type: object

packages.cronjob.resources.limits (optional)

  • Description: Resource limits.
  • Type: object

packages.cronjob.resources.limits.cpu (optional)

  • Description: CPU limit.
  • Type: string

packages.cronjob.resources.limits.memory (optional)

  • Description: Memory limit.
  • Type: string

packages.cronjob.resources.requests (optional)

  • Description: Requested resources.
  • Type: object

packages.cronjob.resources.requests.cpu (optional)

  • Description: Requested cpu.
  • Type: string

packages.cronjob.resources.requests.memory (optional)

  • Description: Requested memory.
  • Type: string
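
Putting the cron job fields together, a spec that disables the cron job while leaving the package controller enabled might look like this minimal sketch:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   packages:
      disable: false
      cronjob:
         disable: true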

16 - API Server Extra Args

EKS Anywhere cluster yaml specification for Kubernetes API Server Extra Args reference

API Server Extra Args support (optional)

As of EKS Anywhere version v0.20.0, you can pass additional flags to configure the Kubernetes API server in your EKS Anywhere clusters.

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

In order to configure a cluster with API Server extra args, update the cluster configuration file to include the details below. The feature flag API_SERVER_EXTRA_ARGS_ENABLED=true must also be set as an environment variable.
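
For example, when creating the cluster with the CLI you might export the feature flag first (the file name is illustrative):

export API_SERVER_EXTRA_ARGS_ENABLED=true
eksctl anywhere create cluster -f my-cluster.yaml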

This is a generic template with some example API Server extra args configuration below for reference:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
       name: my-cluster-name
    spec:
       ...
       controlPlaneConfiguration:
          apiServerExtraArgs:
             ...
             disable-admission-plugins: "DefaultStorageClass,DefaultTolerationSeconds"
             enable-admission-plugins: "NamespaceAutoProvision,NamespaceExists"

The above example configures the disable-admission-plugins and enable-admission-plugins options of the API Server to enable additional admission plugins or disable some of the default ones. You can configure any of the API Server options using the above template.

controlPlaneConfiguration.apiServerExtraArgs (optional)

Reference the Kubernetes documentation for the list of flags that can be configured for the Kubernetes API server in EKS Anywhere.