Add tutorial: Deploy Libre WebUI with Ollama on a Hetzner GPU Server#1369

Open
rob-x-ai wants to merge 2 commits into hetzneronline:master from rob-x-ai:deploy-libre-webui-ollama-docker-gpu
Conversation

@rob-x-ai
New Tutorial

Title: Deploy a Private AI Chat Interface with Libre WebUI and Ollama on a GPU Server

Summary:
Step-by-step guide for deploying Libre WebUI with Ollama on Hetzner dedicated GPU servers (GEX44 / GEX131) using Docker and the NVIDIA Container Toolkit for CUDA acceleration.

Topics covered:

  • NVIDIA driver and CUDA setup on Ubuntu 24.04
  • Docker Engine and NVIDIA Container Toolkit installation
  • Docker Compose deployment with GPU passthrough
  • Model selection and VRAM/RAM offloading
  • Optional HTTPS with Caddy reverse proxy
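
The Docker Compose deployment with GPU passthrough covered above could look roughly like the sketch below. This is not the tutorial's actual file: the Libre WebUI image name, its port, and the `OLLAMA_BASE_URL` variable are assumptions to illustrate the layout; only `ollama/ollama` and the Compose GPU reservation syntax are standard.

```yaml
# docker-compose.yml — minimal sketch, not the tutorial's exact file.
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama:/root/.ollama          # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all              # expose every GPU to the container
              capabilities: [gpu]

  libre-webui:
    image: librewebui/libre-webui:latest   # hypothetical image name — check the project docs
    ports:
      - "3000:3000"                        # assumed default port
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # assumed variable name
    depends_on:
      - ollama

volumes:
  ollama:
```

The `deploy.resources.reservations.devices` block is the documented Compose way to request NVIDIA GPUs; it requires the NVIDIA Container Toolkit to be installed on the host.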

Why this tutorial?
Libre WebUI is an Apache 2.0 licensed, privacy-first AI chat interface with AES-256-GCM encrypted storage and zero telemetry. Combined with Hetzner's EU-based GPU servers, it provides a fully GDPR-compliant self-hosted AI solution.
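
The optional HTTPS step from the topics list (Caddy as a reverse proxy) can be sketched with a minimal Caddyfile; Caddy obtains and renews Let's Encrypt certificates automatically. The domain and upstream port here are placeholders, not values from the tutorial.

```
# Caddyfile — sketch only; replace the domain and port with your own values.
chat.example.com {
    # Caddy provisions and renews the TLS certificate automatically.
    reverse_proxy localhost:3000   # assumed Libre WebUI port
}
```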

All commands have been tested and verified on a local Docker + NVIDIA GPU setup.

@svenja11 added the review wanted label on Feb 3, 2026
2 participants