Local storage offers faster access than network-attached alternatives because data is stored directly on the node. Combined with the benefits of bare-metal machines, it can also make your storage surprisingly cost-effective.
Normally, this would mean a complex setup with higher maintenance costs. But with Syself Autopilot, we take that burden off your shoulders by simplifying every step. We enable you to persist data through cluster updates or even when you need to re-provision the machines, making it ideal for storage-intensive workloads such as databases.
This guide will walk you through the process of configuring your cluster and machines to use local storage with TopoLVM, a one-time process that allows you to efficiently use storage attached directly to your servers and rely on Autopilot for lifecycle automation.
Most of the procedure described in this guide will be automated in the upcoming release of Syself Autopilot. Please stick to the configuration options described here to guarantee compatibility.
You need to have cert-manager version v1.7.0 or higher installed on your cluster as a dependency of TopoLVM. Install it with the following command:
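A minimal sketch of one way to do this with helm template and kubectl apply rather than helm install, using the upstream jetstack chart (the repository, version pin, and installCRDs flag are upstream conventions, not Syself-specific settings):

```bash
# Add the upstream cert-manager chart repository, render the chart,
# then apply the manifests directly instead of installing the chart.
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl create namespace cert-manager
helm template cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.7.0 \
  --set installCRDs=true \
  | kubectl apply -f -
```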
You can follow the official guide on installing helm if you don't have it.
To guarantee compatibility with future Autopilot releases, don't use helm install, as our automation will create the resources directly instead of installing charts.
We use the TopoLVM CSI driver for local storage on bare-metal servers. Follow the steps below to deploy it to your workload cluster.
Add the Syself helm repository:
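The repository URL below is a placeholder; use the URL from Syself's documentation:

```bash
# Replace the placeholder with the chart repository URL provided by Syself.
helm repo add syself <syself-chart-repository-url>
helm repo update
```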
Template the TopoLVM chart and apply it to the cluster:
Do not use helm install. This chart will be added as a base feature in later versions of Syself Autopilot, so stick to the installation steps shown here to guarantee compatibility: render the chart with helm template and apply it with kubectl apply to the kube-system namespace, as shown below.
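As a sketch, assuming the chart is published as syself/topolvm (the exact chart name may differ in the Syself repository):

```bash
# Render the chart and apply the resulting manifests to the kube-system namespace.
helm template topolvm syself/topolvm --namespace kube-system \
  | kubectl apply -n kube-system -f -
```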
Now the storage space on your bare-metal server is exposed to your cluster via the local-nvme Storage Class.
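You can verify this by listing the Storage Classes in your cluster:

```bash
# The local-* classes created by the chart should appear in the list.
kubectl get storageclass
```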
In this step, you'll define the physical volumes and volume groups on your disks to be used by TopoLVM.
We support all three types of disks: HDD, SATA SSD, and NVMe SSD. If your server only has NVMe disks, for example, you should only follow the steps for NVMe and cannot use the local-ssd or local-hdd storage classes.
If your server has NVMe, SSD, and HDD, you can use all three storage classes.
Access your server via ssh:
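For example (user and address are placeholders for your own server):

```bash
# Replace with your server's user and IP address or hostname.
ssh root@<server-ip>
```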
List your disks with lsblk:
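For example, with illustrative output (your disk names and sizes will differ; here nvme0n1 is the OS disk):

```bash
lsblk
# NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS     <- illustrative output
# nvme0n1     259:0    0   476G  0 disk                 <- OS disk, do not use
# ├─nvme0n1p1 259:1    0   512M  0 part /boot
# └─nvme0n1p2 259:2    0 475.4G  0 part /
# nvme1n1     259:3    0   3.5T  0 disk                 <- unused NVMe data disk
# sda           8:0    0   7.3T  0 disk                 <- unused SATA data disk
```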
Identify whether the disks you want to use are HDDs, SATA SSDs, or NVMe SSDs.
Don't use your OS disk (nvme0n1 in the above output), as this can lead to data loss.
To identify the type of disk you have, look at the first column, NAME, and the third column, RM, of the lsblk output.
NVMe SSDs have nvme at the beginning of their name; for the remaining disks, the RM column distinguishes SATA SSDs from HDDs.

Create a physical volume (pointing to every disk in your server where you want to store data) with pvcreate /dev/[disk-name]. For example:
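The device names below are placeholders; point to your own data disks, never the OS disk:

```bash
# Create an LVM physical volume on each data disk.
pvcreate /dev/nvme1n1
pvcreate /dev/sda
```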
Map the disks to the appropriate volume group type with vgcreate vg-[type] /dev/[disk-name] /dev/[other-disk]. For example:
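Again with placeholder device names, assuming /dev/nvme1n1 is an NVMe SSD and /dev/sda is an HDD:

```bash
# One volume group per disk type present in the server.
vgcreate vg-nvme /dev/nvme1n1
vgcreate vg-hdd /dev/sda
```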
If you missed a disk and want to add it later, extend the volume group with vgextend vg-[type] /dev/[new-disk]. For example:
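With a placeholder name for a disk added later:

```bash
# Add a newly attached NVMe disk to the existing volume group.
vgextend vg-nvme /dev/nvme2n1
```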
The volume group name must match the disk type: vg-nvme for NVMe SSDs, vg-ssd for SATA SSDs, and vg-hdd for HDDs.
Repeat the previous steps for every disk you want to use.
After adding all disks to their respective volume groups, create a thin-provisioned logical volume for each volume group with lvcreate --thinpool pool-[type] --extents 100%FREE vg-[type]. This step is done once per volume group, not per disk. For example:
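Shown here for vg-nvme only; repeat for vg-ssd and vg-hdd if you created them:

```bash
# Create a thin pool that uses all free space in the volume group.
lvcreate --thinpool pool-nvme --extents 100%FREE vg-nvme
```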
You can use the newly created Storage Classes in the same way you would use any other.
By default, you have Storage Classes for all three disk types available in your cluster:
local-nvme
local-ssd
local-hdd
If you use a Storage Class for a disk type that isn't available in your machine's volume groups, your workload will be stuck provisioning. We include all three to make your cluster ready for any new disks you might add in the future.
For a simple test, create a storage-test.yaml file with the following content:
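A minimal example (resource names and the 1Gi request are placeholders): a PersistentVolumeClaim using the local-nvme class and an nginx Pod mounting it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-nvme
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-test
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: storage-test-pvc
```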
And apply it with kubectl apply -f storage-test.yaml.
Now, all the data stored in the container under /usr/share/nginx/html will be in /mnt/data on your machine.
In this example, we used the local-nvme class, but you can also use local-hdd and local-ssd if your servers have disks of those types attached.
Create a test file in your pod:
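For example, assuming the Pod name storage-test from the manifest above:

```bash
# Write a test file into the volume mounted inside the container.
kubectl exec storage-test -- sh -c 'echo "hello from local storage" > /usr/share/nginx/html/test.txt'
```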
Now SSH into your server again and check that the data is there:
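For example (the host path depends on your setup; /mnt/data follows the note above, and lvs confirms that a logical volume was provisioned for the claim):

```bash
# A thin logical volume carved from pool-nvme should now exist for the PVC.
lvs vg-nvme
# Look for the test file on the host.
cat /mnt/data/test.txt
```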