
An Automated GPU Kubernetes Solution Fulfilling Highest Compliance Requirements for LanguageTool

Introduction

LanguageTooler GmbH is a Germany-based SaaS company maintaining the open-source project LanguageTool. LanguageTool is an AI-powered writing assistant that checks grammar, spelling, style, and punctuation in multiple languages to help improve the quality of written content. Millions of texts get improved by LanguageTool on a daily basis.

The Challenge

As LanguageTool grew, so did their infrastructure. They used multiple cloud providers, but the largest share of their infrastructure ran on self-managed Hetzner bare metal servers.

They used MicroK8s to run Kubernetes clusters on some of these servers, and Ansible to manage workloads on servers outside the clusters. However, this approach lacked scalability and reliability, and their 250 GPU-equipped Hetzner bare metal servers created significant management overhead that was difficult to handle in-house.

Additionally, they faced the natural consequence of a self-managed system: they were responsible for all maintenance, bug fixing, and firefighting. This proved to be a significant distraction from their core business. Instead of building the product, they had to maintain infrastructure.

LanguageTool is used by millions of people every day, which demands an extremely powerful, reliable, and scalable system. They therefore searched for a solution that would meet their high demands on cloud infrastructure while reducing the maintenance work done in-house.

The Solution

After evaluating multiple offerings across different public clouds, they chose Syself Autopilot. The main reason was Syself's software-based approach, which automates all relevant tasks involved in maintaining Kubernetes clusters.

Furthermore, Syself Autopilot could be connected to LanguageTool's existing Hetzner infrastructure. While benefiting from Hetzner's affordable servers, they got a Kubernetes solution on par with the main players in the market.

Syself ensured that all GPU servers were tightly integrated into the natural lifecycle of the cluster. Thanks to automation and optimized operations, LanguageTool no longer has to invest much time in maintaining its infrastructure.

When needed, experts are available for any issues that arise. This lifts pressure off their internal DevOps team, allowing them to focus on their many application-related tasks.

LanguageTool has extremely high compliance demands. This is why they chose a dedicated management platform that runs on their own infrastructure: they share it with no one and fully control all access to it. Syself ships regular updates to keep the platform up to date.

Results

LanguageTool's Kubernetes clusters now run on a highly automated and reliable platform. Their clusters comprise 250 GPU-equipped Hetzner bare metal servers that serve millions of requests a day, with peak traffic of around 20,000 requests per second.

They have significantly reduced the overhead of maintaining their infrastructure and can now fully concentrate on their core business.

