## Workers

Consider the following `spec.topology.workers` from a sample Cluster resource:

```yaml
/// cluster.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
spec:
  clusterNetwork:
    services:
      cidrBlocks: ['10.128.0.0/12']
    pods:
      cidrBlocks: ['192.168.0.0/16']
    serviceDomain: 'cluster.local'
  topology:
    class: hetzner-apalla-1-34-v6
    version: v1.34.6
    controlPlane:
      replicas: 3
    workers: // [!code focus:23]
      machineDeployments:
        - class: workeramd64hcloud
          name: md-0
          replicas: 5
          failureDomain: nbg1
          metadata:
            labels:
              node-role.kubernetes.io/backend: "true"
          variables:
            overrides:
              - name: workerMachineTypeHcloud
                value: cpx42
              - name: workerMachinePlacementGroupNameHcloud
                value: md-0
        - class: workeramd64hcloud
          name: md-1
          replicas: 3
          failureDomain: nbg1
          variables:
            overrides:
              - name: workerMachineTypeHcloud
                value: cx53
```

A machine deployment has a single machine type, so you must create a separate machine deployment for each machine type you want to use. In the example above, we have five replicas of cpx42 machines and three replicas of cx53 machines.

You can define as many machine deployments as you like, and scale them independently by setting the number of `replicas`. It is also possible to use an autoscaler. If you are interested in doing so, please read [How to Use Cluster Autoscaler](/docs/hetzner/apalla/how-to-guides/cluster-autoscaler).

Each machine deployment requires a unique name. This name is reflected in the machine names in Hetzner.

If you want to assign a label to all nodes of a machine deployment, you can do so in `metadata.labels`:

```yaml
metadata:
  labels:
    node.cluster.x-k8s.io/foo: "bar"
    node.cluster.x-k8s.io/baz: "qux"
```

## Controlplanes

To set the number of controlplane nodes, change the value of `spec.topology.controlPlane.replicas` in your Cluster resource.
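For example, to run a highly available controlplane, you could set the replica count in the Cluster resource shown earlier (a minimal fragment; odd replica counts are preferred so etcd can keep quorum):

```yaml
spec:
  topology:
    controlPlane:
      replicas: 3 # odd numbers (1, 3, 5) let etcd maintain quorum
```

You can then apply the updated manifest, e.g. with `kubectl apply -f cluster.yaml`, and the controlplane scales to the requested size.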
### Machine types

We support all VM types with at least 4 GB RAM, with shared or dedicated vCPUs, including [**arm64** machines](/docs/hetzner/apalla/how-to-guides/server-management/using-arm-servers).

For control plane nodes, we recommend at least 4 vCPUs and 8 GB of RAM, using the CPX (Regular Performance) line. This corresponds to a CPX32 instance, though for most clusters a CPX42 offers better performance and headroom.

**Regular Performance series:**

| Type  | vCPUs  | RAM   | SSD    |
| ----- | ------ | ----- | ------ |
| CPX22 | 2 AMD  | 4 GB  | 80 GB  |
| CPX32 | 4 AMD  | 8 GB  | 160 GB |
| CPX42 | 8 AMD  | 16 GB | 320 GB |
| CPX52 | 12 AMD | 24 GB | 480 GB |
| CPX62 | 16 AMD | 32 GB | 640 GB |

**Dedicated series:**

| Type  | vCPUs  | RAM    | SSD    |
| ----- | ------ | ------ | ------ |
| CCX13 | 2 AMD  | 8 GB   | 80 GB  |
| CCX23 | 4 AMD  | 16 GB  | 160 GB |
| CCX33 | 8 AMD  | 32 GB  | 240 GB |
| CCX43 | 16 AMD | 64 GB  | 360 GB |
| CCX53 | 32 AMD | 128 GB | 600 GB |
| CCX63 | 48 AMD | 192 GB | 960 GB |

**Cost-Optimized series:**

| Type  | vCPUs        | RAM   | SSD    |
| ----- | ------------ | ----- | ------ |
| CX33  | 4 Intel/AMD  | 8 GB  | 80 GB  |
| CX43  | 8 Intel/AMD  | 16 GB | 160 GB |
| CX53  | 16 Intel/AMD | 32 GB | 320 GB |
| CAX21 | 4 Ampere     | 8 GB  | 80 GB  |
| CAX31 | 8 Ampere     | 16 GB | 160 GB |
| CAX41 | 16 Ampere    | 32 GB | 320 GB |

{% callout %}
The CX line is focused on cost efficiency and as such runs on older hardware, with limited availability and non-standard performance. Because of that, we only recommend the CX server types for autoscaling or variable workloads that can tolerate changing CPU performance.
{% /callout %}

### Note on regions

We only support controlplane machines within a single region. Load Balancers at Hetzner are always located in a single region, so there is no real benefit to spreading nodes across multiple regions: the zone of your Load Balancer remains a single point of failure.
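A machine deployment selects one of these machine types through the `workerMachineTypeHcloud` variable override shown in the Workers example. As a sketch, assuming the `workeramd64hcloud` class from that example, a hypothetical `md-dedicated` deployment could use the Dedicated CCX33 type:

```yaml
# Entry under spec.topology.workers.machineDeployments;
# "md-dedicated" is an illustrative name, choose your own.
- class: workeramd64hcloud
  name: md-dedicated
  replicas: 3
  failureDomain: nbg1
  variables:
    overrides:
      - name: workerMachineTypeHcloud
        value: ccx33 # dedicated 8 vCPU / 32 GB, lowercase as in the example above
```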
In the event of an outage, even if your nodes are distributed, access to your cluster will be halted if your Load Balancer's region is affected.

Keeping your nodes in a single region also improves latency. This matters because etcd is latency sensitive, so the lower latency outweighs any benefit you might get from a multi-region cluster.