## Introduction

Syself is an [SCS-compatible](https://docs.scs.community/standards/certification/overview/#scs-compatible-kaas) Kubernetes-as-a-Service provider. The [SCS (Sovereign Cloud Stack)](https://sovereigncloudstack.org/en/) is a European initiative for an open, transparent, and vendor-neutral cloud ecosystem that guarantees sovereignty. To be conformant, the platform needs to fulfill all requirements of the SCS test suite. One of these requirements is the distribution of nodes across physical hosts for enhanced reliability.

This guide explains how to make your own clusters SCS-compatible by using [placement groups](/docs/hetzner/apalla/how-to-guides/server-management/placement-groups) or [bare metal control planes](/docs/hetzner/apalla/how-to-guides/cluster-configuration/baremetal-control-planes).

## Prerequisites

We suggest going through the Getting started section of the docs before reading this guide. This guide assumes you have already completed the [Prerequisites](/docs/hetzner/apalla/getting-started/prerequisites) described there, have [access to the management cluster](/docs/hetzner/apalla/getting-started/accessing-the-management-cluster), and have done the [account preparation](/docs/hetzner/apalla/getting-started/hetzner-account-preparation). For creating clusters, you also need to [apply the ClusterStacks](/docs/hetzner/apalla/getting-started/creating-clusters#step-1-applying-the-clusterstack).

## How to make your cluster SCS-compatible

The SCS guidelines require the control plane nodes of your cluster to be distributed across different physical hosts. This can be achieved in two ways: using placement groups or bare metal control planes.

### Using placement groups

The example `cluster.yaml` below is SCS-compatible.
The important parts, which are related to control plane replica count and placement groups, are highlighted:

```yaml /// cluster.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example
spec:
  clusterNetwork:
    services:
      cidrBlocks: ['10.128.0.0/12']
    pods:
      cidrBlocks: ['192.168.0.0/16']
    serviceDomain: 'cluster.local'
  topology:
    class: hetzner-apalla-1-34-v6
    version: v1.34.6
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: workeramd64hcloud
          name: md-0
          replicas: 1
          failureDomain: nbg1
          variables:
            overrides:
              - name: workerMachineTypeHcloud
                value: cpx42
    variables:
      - name: region
        value: nbg1
      - name: controlPlaneMachineTypeHcloud
        value: cpx42
      - name: hcloudPlacementGroups // [!code ++]
        value: // [!code ++]
          - name: controlPlaneSpread // [!code ++]
            type: spread // [!code ++]
      - name: controlPlanePlacementGroupNameHcloud // [!code ++]
        value: controlPlaneSpread // [!code ++]
```

Apply it to the management cluster with:

{% terminal height="6rem" steps="[{\"command\":\"kubectl apply -f cluster.yaml\",\"delay\":200,\"output\":\"cluster.cluster.x-k8s.io/example created\"}]" /%}

Cluster creation will take a few minutes. You can monitor the process by looking at the Machine objects:

{% terminal height="13rem" steps="[{\"command\":\"kubectl get machines\",\"delay\":500,\"output\":\"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION\\nexample-jndgf-r4k7v example example-hb2wb-pb6d4 hcloud://12345678 Running 10m v1.34.6\\nexample-jndgf-sb6b8 example example-hb2wb-2g9wq hcloud://87654321 Running 8m v1.34.6\\nexample-jndgf-vhbsb example example-hb2wb-bnkjh hcloud://12348765 Running 5m v1.34.6\\nexample-md-0-zw8ln-pxztl-j4stb example example-md-0-bp4v2-7r2sv hcloud://43215678 Running 4m v1.34.6\"}]" /%}

These represent actual machines in your Hetzner project. Once all of them are in the `Running` phase, your cluster is ready!
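A spread placement group instructs Hetzner Cloud to run each member server on a distinct physical host, which is exactly what the SCS requirement asks for; note that a single spread group can hold at most 10 servers. Stripped of the surrounding topology, the wiring reduces to two variables: `hcloudPlacementGroups` defines the groups, and `controlPlanePlacementGroupNameHcloud` assigns one of them to the control plane. The fragment below is a sketch following the variable schema shown above; the second group, `workerSpread`, is purely illustrative and not required for SCS compatibility:

```yaml
# Sketch of the placement-group variables only, following the schema above.
# Each entry under hcloudPlacementGroups creates one placement group;
# type "spread" keeps its servers on distinct physical hosts
# (Hetzner Cloud limits a spread group to 10 servers).
- name: hcloudPlacementGroups
  value:
    - name: controlPlaneSpread # referenced by the control plane below
      type: spread
    - name: workerSpread # illustrative extra group, not required for SCS
      type: spread
- name: controlPlanePlacementGroupNameHcloud
  value: controlPlaneSpread # must match a group defined above
```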
### Using bare metal control planes

When using bare metal control planes, every node is tied to its own physical bare metal host, so such clusters are also SCS-compatible.

The example `cluster.yaml` below represents a cluster using bare metal control planes. The important parts, which are related to control plane replica count and class, are highlighted:

```yaml /// cluster.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example
spec:
  clusterNetwork:
    services:
      cidrBlocks: ['10.128.0.0/12']
    pods:
      cidrBlocks: ['192.168.0.0/16']
    serviceDomain: 'cluster.local'
  topology:
    class: hetzner-apalla-1-34-v6
    version: v1.34.6
    controlPlane:
      class: hetznerbaremetal // [!code ++]
      replicas: 3 // [!code ++]
      variables: // [!code ++]
        overrides: // [!code ++]
          - name: controlPlaneHostSelectorBareMetal // [!code ++]
            value: // [!code ++]
              matchLabels: // [!code ++]
                role: controlplane // [!code ++]
                cluster: example // [!code ++]
    workers:
      machineDeployments:
        - class: workeramd64hcloud
          name: md-0
          replicas: 1
          failureDomain: nbg1
          variables:
            overrides:
              - name: workerMachineTypeHcloud
                value: cpx42
    variables:
      - name: region
        value: nbg1
```

Before creating a cluster with a bare metal control plane, ensure your [HetznerBareMetalHost](/docs/hetzner/apalla/how-to-guides/server-management/adding-baremetal-servers-to-your-cluster) resources are registered and labeled appropriately. For more information on using bare metal control planes, refer to [this](/docs/hetzner/apalla/how-to-guides/cluster-configuration/baremetal-control-planes) page.

You can apply the manifest to the management cluster with:

{% terminal height="6rem" steps="[{\"command\":\"kubectl apply -f cluster.yaml\",\"delay\":200,\"output\":\"cluster.cluster.x-k8s.io/example created\"}]" /%}

Cluster creation will take a few minutes.
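As mentioned above, hosts are only picked up if their labels satisfy the `controlPlaneHostSelectorBareMetal` selector. For reference, a matching host could look like the sketch below. This is a hypothetical example, not a complete manifest: the resource name and `serverID` are placeholders, and the full set of required fields is documented on the [adding bare metal servers](/docs/hetzner/apalla/how-to-guides/server-management/adding-baremetal-servers-to-your-cluster) page.

```yaml
# Hypothetical sketch of a HetznerBareMetalHost whose labels match the
# controlPlaneHostSelectorBareMetal selector used in cluster.yaml.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HetznerBareMetalHost
metadata:
  name: baremetal-1 # placeholder name
  labels:
    role: controlplane # matched by the selector in cluster.yaml
    cluster: example # matched by the selector in cluster.yaml
spec:
  serverID: 1234567 # placeholder: your Hetzner Robot server number
```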
You can monitor the process by looking at the Machine objects:

{% terminal height="13rem" steps="[{\"command\":\"kubectl get machines\",\"delay\":500,\"output\":\"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION\\nexample-jndgf-r4k7v example bm-example-12345678 hcloud://bm-12345678 Running 10m v1.34.6\\nexample-jndgf-sb6b8 example bm-example-87654321 hcloud://bm-87654321 Running 8m v1.34.6\\nexample-jndgf-vhbsb example bm-example-12348765 hcloud://bm-12348765 Running 5m v1.34.6\\nexample-md-0-zw8ln-pxztl-j4stb example example-md-0-bp4v2-7r2sv hcloud://43215678 Running 4m v1.34.6\"}]" /%}

These represent actual machines and bare metal servers in your Hetzner account. Once all of them are in the `Running` phase, your cluster is ready!

## Accessing your cluster

To get the kubeconfig of your workload cluster, use the command:

{% terminal height="8rem" steps="[\"kubectl get secrets example-kubeconfig -o=jsonpath='{.data.value}' \\\\\\n| base64 -d \\\\\\n> example-kubeconfig.yaml\"]" /%}

Now, you can point the `KUBECONFIG` environment variable at your new cluster:

{% terminal height="5rem" steps="[{\"command\":\"export KUBECONFIG=example-kubeconfig.yaml\",\"delay\":200,\"output\":\"\"}]" /%}

If you want more information about workload and management clusters, check the [Management and Workload Clusters](/docs/hetzner/apalla/concepts/mgt-and-wl-clusters) section.

When accessing your cluster, you may see some pods still pending. This is normal; you just have to wait for the cluster to finish initializing. When all pods are running and all four nodes are in the `Ready` state, your cluster is completely provisioned.

And that's it! You now have a production-ready, SCS-compatible Kubernetes cluster managed by Syself Autopilot.