# Adding HCloud servers

## Workers

Consider the following `spec.topology.workers` from a sample Cluster resource:

```yaml
workers:
  machineDeployments:
    - class: workeramd64hcloud
      name: md-0
      replicas: 5
      failureDomain: nbg1
      metadata:
        labels:
          node-role.kubernetes.io/backend: "true"
      variables:
        overrides:
          - name: workerMachineTypeHcloud
            value: cpx31
          - name: workerMachinePlacementGroupNameHcloud
            value: md-0
    - class: workeramd64hcloud
      name: md-1
      replicas: 3
      failureDomain: nbg1
      variables:
        overrides:
          - name: workerMachineTypeHcloud
            value: cx41
```

A machine deployment has a single machine type. Therefore, you must create a separate machine deployment for each machine type you want to use. In the example above, there are five replicas of cpx31 machines and three replicas of cx41 machines.

You can define as many machine deployments as you like and scale them independently by setting their number of `replicas`. It is also possible to use an autoscaler; if you are interested in doing so, please read How to Use Cluster Autoscaler.
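
If you do enable the autoscaler, the usual Cluster API approach is to annotate the machine deployment with minimum and maximum node-group sizes and leave `replicas` unset so the autoscaler owns the count. A minimal sketch, assuming the upstream Cluster Autoscaler annotation keys for its `clusterapi` provider:

```yaml
- class: workeramd64hcloud
  name: md-0
  failureDomain: nbg1
  metadata:
    annotations:
      # Upstream Cluster Autoscaler (clusterapi provider) node-group bounds
      cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "3"
      cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
```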

Each machine deployment requires a unique name. This name will be reflected in the machine names in Hetzner.

If you want to assign a label to all nodes of a machine deployment, you can do so in `metadata.labels`.
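
For example, the `md-0` deployment above labels its nodes like this:

```yaml
metadata:
  labels:
    node-role.kubernetes.io/backend: "true"
```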

For placement groups, set them in `variables.overrides`:

```yaml
- name: workerMachinePlacementGroupNameHcloud
  value: placementGroupName
```
> **Note:** Placement Groups are used to control the distribution of virtual servers in Hetzner datacenters. If you want to know more, you can read the Hetzner documentation.

The name should match an existing placement group, created in your Cluster resource under `spec.topology.variables`:

```yaml
- name: hcloudPlacementGroups
  value:
    - name: placementGroupName
      type: spread
```
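
Putting the pieces together, the placement group variable sits next to the other topology variables of the Cluster resource. A sketch for orientation (the cluster name, class, and version below are illustrative, not prescribed):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster            # illustrative name
spec:
  topology:
    class: my-cluster-class   # illustrative; use your ClusterClass name
    version: v1.28.0          # illustrative Kubernetes version
    variables:
      - name: hcloudPlacementGroups
        value:
          - name: placementGroupName
            type: spread
    workers:
      machineDeployments:
        - class: workeramd64hcloud
          name: md-0
          replicas: 5
          failureDomain: nbg1
          variables:
            overrides:
              - name: workerMachinePlacementGroupNameHcloud
                value: placementGroupName
```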

## Controlplanes

To set the number of controlplane nodes, change the value of `spec.topology.controlPlane.replicas` in your Cluster resource.
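
For example, a three-node controlplane (an odd replica count keeps etcd quorum):

```yaml
spec:
  topology:
    controlPlane:
      replicas: 3
```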

## Machine types

We support all x86 VM types, with shared or dedicated vCPUs, as well as arm64 (Ampere) machines.

| Type  | vCPUs     | RAM    | SSD    |
| ----- | --------- | ------ | ------ |
| CPX11 | 2 AMD     | 2 GB   | 40 GB  |
| CX22  | 2 Intel   | 4 GB   | 40 GB  |
| CPX21 | 3 AMD     | 4 GB   | 80 GB  |
| CX32  | 4 Intel   | 8 GB   | 80 GB  |
| CPX31 | 4 AMD     | 8 GB   | 160 GB |
| CX42  | 8 Intel   | 16 GB  | 160 GB |
| CPX41 | 8 AMD     | 16 GB  | 240 GB |
| CX52  | 16 Intel  | 32 GB  | 320 GB |
| CPX51 | 16 AMD    | 32 GB  | 360 GB |
| CAX11 | 2 Ampere  | 4 GB   | 40 GB  |
| CAX21 | 4 Ampere  | 8 GB   | 80 GB  |
| CAX31 | 8 Ampere  | 16 GB  | 160 GB |
| CAX41 | 16 Ampere | 32 GB  | 320 GB |
| CCX13 | 2 AMD     | 8 GB   | 80 GB  |
| CCX23 | 4 AMD     | 16 GB  | 160 GB |
| CCX33 | 8 AMD     | 32 GB  | 240 GB |
| CCX43 | 16 AMD    | 64 GB  | 360 GB |
| CCX53 | 32 AMD    | 128 GB | 600 GB |
| CCX63 | 48 AMD    | 192 GB | 960 GB |
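
To run a machine deployment on arm64, override the machine type with a CAX type. The worker class must be arm64-capable; the `workerarm64hcloud` class name below is an assumption, analogous to the `workeramd64hcloud` class used above, so check your ClusterClass for the exact name:

```yaml
- class: workerarm64hcloud   # assumed arm64 worker class; verify in your ClusterClass
  name: md-arm
  replicas: 3
  failureDomain: nbg1
  variables:
    overrides:
      - name: workerMachineTypeHcloud
        value: cax21
```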

For controlplane nodes, we recommend a minimum of 4 vCPU cores and 8 GB of RAM; that corresponds to a CX32 machine.
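
If your ClusterClass exposes a variable for the controlplane machine type, you can set it in `spec.topology.variables`. The variable name `controlPlaneMachineTypeHcloud` below is an assumption, analogous to `workerMachineTypeHcloud`; check your ClusterClass for the exact name:

```yaml
spec:
  topology:
    variables:
      - name: controlPlaneMachineTypeHcloud   # assumed name; verify against your ClusterClass
        value: cx32
```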

## Note on regions

We only support controlplane machines in the same region.

Load Balancers at Hetzner always live in a single region, so there is no real benefit to having nodes in multiple regions: the zone of your Load Balancer remains a single point of failure. In the event of an outage there, access to your cluster is halted even if your nodes are distributed.

Having your nodes in a single region also improves latency. This is relevant because etcd is latency-sensitive, so the reduced latency outweighs any benefit you might get from a multi-region cluster.
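
In practice, this means using the same `failureDomain` for all machine deployments, as the example at the top does with nbg1:

```yaml
- class: workeramd64hcloud
  name: md-0
  failureDomain: nbg1   # same region for every machine deployment
- class: workeramd64hcloud
  name: md-1
  failureDomain: nbg1
```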