Using local storage on bare metal

Introduction

Local storage offers faster access than network-attached alternatives because data is stored directly on the node. Combined with the price-to-performance of bare-metal machines, this makes for very cost-effective storage.

Normally, this would mean a complex setup with higher maintenance costs. But with Syself Autopilot, we take that burden off your shoulders by simplifying every step. We enable you to persist data through cluster updates or even when you need to re-provision the machines, making it ideal for storage-intensive workloads such as databases.

This guide walks you through configuring your cluster and machines to use local storage with TopoLVM. This is a one-time process that lets you efficiently use storage attached directly to your servers while relying on Autopilot for lifecycle automation.

warning

Most of the procedure described in this guide will be automated in the upcoming release of Syself Autopilot. Please stick to the configuration options described here to guarantee compatibility.

1. Deploy cert-manager

You need to have cert-manager version v1.7.0 or higher installed on your cluster as a dependency of TopoLVM. Install it with the following command:
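One way to do this, if you have Helm available, is to render the upstream jetstack chart and apply the manifests directly (the chart reference and values below are illustrative; any cert-manager release from v1.7.0 onwards works):

```bash
# Add the upstream cert-manager chart repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Render the manifests and apply them directly instead of using `helm install`
kubectl create namespace cert-manager
helm template cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --set installCRDs=true \
  | kubectl apply -f -
```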

You can follow the official guide on installing helm if you don't have it.

warning

To guarantee compatibility with future Autopilot releases, don't use helm install, as our automation will create the resources directly instead of installing charts.

2. Deploy TopoLVM

We use the TopoLVM CSI driver for local storage on bare metal. You can follow the steps below to deploy it to your workload cluster.

  1. Add the Syself helm repository:
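    The repository name and URL below are placeholders; substitute the values from the Syself documentation:

    ```bash
    # Placeholder URL: replace it with the Helm repository URL provided by Syself
    helm repo add syself <syself-helm-repo-url>
    helm repo update
    ```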

  2. Template the TopoLVM chart and apply it to the cluster:

    warning

    Do not use helm install. This chart will be added as a base feature in later versions of Syself Autopilot, so stick to the installation steps shown here to guarantee compatibility. Use the command below to install it with helm template and kubectl apply to the kube-system namespace.
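    The chart and release names below are assumptions; use the chart name published in the Syself repository you added in the previous step:

    ```bash
    # Render the chart locally and apply the resources to the kube-system namespace
    helm template topolvm syself/topolvm --namespace kube-system \
      | kubectl apply --namespace kube-system -f -
    ```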

    Now the storage space on your bare-metal server is exposed to your cluster via the local-nvme Storage Class:
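    You can verify this with kubectl (the output below is illustrative):

    ```bash
    kubectl get storageclass
    # NAME         PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
    # local-nvme   topolvm.io    Delete          WaitForFirstConsumer   1m
    ```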

3. Configure your servers

In this step, you'll define the physical volumes and volume groups on your disks that TopoLVM will use.

note

We support all three types of disks: HDD, SATA SSD, and NVMe SSD. For example, if your server only has NVMe disks, follow only the NVMe steps; you won't be able to use the local-ssd or local-hdd storage classes.

If your server has NVMe, SSD, and HDD, you can use all three storage classes.

  1. Access your server via ssh:
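    For example, replacing the placeholder with your server's IP address or hostname:

    ```bash
    ssh root@<server-ip>
    ```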

  2. List your disks with lsblk:
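    The output will look roughly like this (disk names and sizes are illustrative):

    ```
    $ lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
    sda           8:0    0  10.9T  0 disk
    sdb           8:16   0   3.8T  0 disk
    nvme0n1     259:0    0 476.9G  0 disk
    ├─nvme0n1p1 259:1    0   256M  0 part /boot/efi
    └─nvme0n1p2 259:2    0 476.7G  0 part /
    nvme1n1     259:3    0   1.8T  0 disk
    ```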

  3. Identify whether each disk you want to use is an HDD, a SATA SSD, or an NVMe SSD.

    warning

    Don't use your OS disk (nvme0n1 in the above output), as this can lead to data loss.

    tip

    To identify the type of disk you have, look at the NAME column and at the rotational flag, which you can show with lsblk -d -o NAME,ROTA:

    • An NVMe disk will have nvme at the beginning of its name; otherwise:
    • A SATA SSD will have the value 0 in the ROTA column.
    • An HDD will have the value 1 in the ROTA column.
  4. Create a physical volume on every disk in your server where you want to store data with pvcreate /dev/[disk-name]. For example:
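    Using the illustrative disks from the lsblk output above:

    ```bash
    pvcreate /dev/nvme1n1
    pvcreate /dev/sdb
    pvcreate /dev/sda
    ```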

  5. Map the disks to the appropriate volume group type with vgcreate vg-[type] /dev/[disk-name] /dev/[other-disk]. For example:
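    Using the same illustrative disks, with one volume group per disk type:

    ```bash
    vgcreate vg-nvme /dev/nvme1n1
    vgcreate vg-ssd /dev/sdb
    vgcreate vg-hdd /dev/sda
    ```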

    If you missed a disk and want to add it later, extend the volume group with vgextend vg-[type] /dev/[new-disk]. For example:
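    For instance, to add a hypothetical second NVMe disk to the NVMe volume group:

    ```bash
    vgextend vg-nvme /dev/nvme2n1
    ```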

    Available volume group types
    • For NVMe disks you use: vg-nvme
    • For SATA SSD disks you use: vg-ssd
    • For HDD disks you use: vg-hdd
  6. Repeat the previous steps for every disk you want to use.

  7. After adding all disks to their respective volume groups, create a thin-provisioned logical volume for each volume group (once per volume group, not per disk) with lvcreate --thinpool pool-[type] --extents 100%FREE vg-[type]. For example:
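    For example, assuming all three volume groups were created above (run only the lines matching the volume groups you have):

    ```bash
    lvcreate --thinpool pool-nvme --extents 100%FREE vg-nvme
    lvcreate --thinpool pool-ssd --extents 100%FREE vg-ssd
    lvcreate --thinpool pool-hdd --extents 100%FREE vg-hdd
    ```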

4. Use it!

You can use the newly created Storage Classes in the same way you would use any other.

Available storage classes

By default, you have Storage Classes for all three disk types available in your cluster:

• For NVMe disks: local-nvme
• For SATA SSD disks: local-ssd
• For HDD disks: local-hdd

If you use a Storage Class for a disk type that isn't available in your machine's volume groups, your workload will be stuck in provisioning. We include all three to make your cluster ready for any new disks you might add in the future.

  1. For a simple test, create a storage-test.yaml file with the following content:

    storage-test.yaml

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pv-claim
    spec:
      storageClassName: local-nvme
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pv-pod
    spec:
      volumes:
        - name: pv-storage
          persistentVolumeClaim:
            claimName: pv-claim
      containers:
        - name: pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: http-server
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: pv-storage
    ```
  2. Apply it with kubectl apply -f storage-test.yaml.

    Now, all the data stored in the container under /usr/share/nginx/html is persisted on a logical volume that TopoLVM created in the vg-nvme volume group on your machine.

    note

    In this example, we used the local-nvme class, but you can also use local-ssd or local-hdd if your servers have disks of those types attached.

  3. Create a test file in your pod:
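    For example (the file name and contents are arbitrary):

    ```bash
    kubectl exec pv-pod -- sh -c 'echo "hello from local storage" > /usr/share/nginx/html/test.txt'
    ```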

  4. Now ssh into your server again and check that the data is backed by a logical volume:
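    A quick check, assuming the volume was provisioned from vg-nvme as in the example above, is to list the logical volumes; a new thin volume should appear alongside the pool:

    ```bash
    ssh root@<server-ip>
    lvs vg-nvme
    ```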
