Centralized Routing for Distributed Services With Traefik
As my homelab evolved from a single VM to a distributed cluster, I hit a problem: how do you manage routing for services spread across multiple hosts while keeping the simplicity of Docker labels?
The solution turned out to be a hub-and-spoke architecture with a centralized Traefik instance that dynamically provisions routes using Redis. This approach lets me keep critical services like Home Assistant isolated on their own VMs while maintaining the convenience of Docker-based configuration for everything else.
The Problem
When you’re running everything on a single host, Traefik’s Docker provider is perfect. Labels on your containers automatically create routes, and everything “just works.” But as soon as you distribute services across multiple VMs—whether for resource isolation, hardware requirements, or uptime concerns—you lose that seamless integration.
You could run separate Traefik instances on each host, but then you’re managing multiple entry points and certificates. Not ideal.
The Solution: Centralized Traefik with Redis
The architecture is straightforward: one central Traefik instance handles all incoming traffic and routes it to services across your infrastructure. Static routes (like that Home Assistant VM that doesn’t run Docker) are configured via file provider, while dynamic routes are managed through Redis using the same Docker labels you’re already familiar with.
Here’s how it breaks down:
- Central Traefik VM: Handles all incoming traffic, terminates SSL, manages certificates
- Redis: Acts as the dynamic configuration store
- Service VMs: Run your applications with standard Docker labels
- traefik-kop: Reads Docker labels and writes them to Redis
Setting Up the Central Traefik
First, create your central Traefik instance. I keep this in a `~/docker-compose` directory:
```yaml
services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    user: "99:65534"
    ports:
      - 80:80/tcp
      - 443:443/tcp
    volumes:
      - ./config:/etc/traefik
    environment:
      CF_DNS_API_TOKEN: "<Cloudflare Token>"

  redis:
    image: redis:latest
    restart: unless-stopped
    ports:
      - 6379:6379
```
The Redis instance is crucial here. It’s what enables the dynamic configuration from your distributed services.
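One operational note worth adding: as written, this Redis keeps the routing keys only in memory, so they disappear if the container restarts and stay gone until traefik-kop pushes them again. If you would rather have the routing table survive restarts, a minimal sketch with append-only persistence (my own addition, not required for the setup to work) might look like:

```yaml
  redis:
    image: redis:latest
    restart: unless-stopped
    # Optional: persist routing keys across restarts via an append-only file,
    # so the central Traefik isn't empty-handed until kop republishes.
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - ./redis-data:/data
    ports:
      - 6379:6379
```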
Static Configuration
Your main Traefik config goes in `./config/traefik.yaml`. This is standard Traefik configuration with two providers: file and Redis.
```yaml
serversTransport:
  insecureSkipVerify: true

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  https:
    address: ":443"
    http:
      tls:
        certResolver: letsencrypt
        domains:
          - main: '<DOMAIN.tld>'
            sans:
              - '*.<DOMAIN.tld>'

accessLog: {}

log:
  level: INFO

certificatesResolvers:
  letsencrypt:
    acme:
      email: <EMAIL>
      storage: /etc/traefik/acme.json
      dnsChallenge:
        provider: cloudflare
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"

providers:
  file:
    filename: /etc/traefik/fileConfig.yaml
    watch: true
  redis:
    endpoints:
      - "redis:6379"

api:
  dashboard: true
  insecure: true
```
File-Based Configuration
For services that don’t use Docker (or where you want explicit control), use the file provider in `./config/fileConfig.yaml`:
```yaml
http:
  # Define middlewares here
  middlewares:
    local-ip-whitelist:
      ipAllowList:
        sourceRange:
          - 10.128.128.0/24
          - 10.128.10.0/24
          - 10.130.0.0/16

  # Define non-docker hosts here
  routers:
    homeassistant:
      entryPoints:
        - https
      rule: "Host(`ha.<DOMAIN.tld>`)"
      service: homeassistant
      priority: 6

  # Define respective services here
  services:
    homeassistant:
      loadBalancer:
        servers:
          - url: http://10.128.10.125:8123
```
This is where you’ll define shared middlewares (like IP whitelisting) and any static routes for non-Docker services.
Bring it up with `docker compose up -d` and your central Traefik is ready.
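Because the static config enables `api.insecure: true`, you can sanity-check the setup through Traefik's dashboard, but only if you also publish its internal port 8080 (the compose file above doesn't; add it only on a trusted network). A hypothetical addition to the `traefik` service:

```yaml
    ports:
      - 80:80/tcp
      - 443:443/tcp
      - 8080:8080/tcp   # dashboard/API in insecure mode; keep this LAN-only
```

With that in place, `http://<CENTRAL_TRAEFIK>:8080/dashboard/` lists every router and service Traefik currently knows about, which is handy once routes start arriving via Redis.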
Connecting Distributed Services
Installing traefik-kop
The magic happens with traefik-kop, a lightweight service that reads Docker labels and writes them to Redis. Install this on each VM that runs services you want to route through the central Traefik.
```yaml
services:
  traefik-kop:
    image: "ghcr.io/jittering/traefik-kop:latest"
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      REDIS_ADDR: "<CENTRAL_TRAEFIK>:6379"
      BIND_IP: "<SELF_IP>"
```
Replace `<CENTRAL_TRAEFIK>` with your central Traefik’s IP and `<SELF_IP>` with this host’s IP. `BIND_IP` is important: it tells traefik-kop what IP to use when creating service definitions in Redis.
Configuring Services
Here’s where it gets elegant: your service configuration doesn’t change. Use the same Traefik labels you’d use with a local Docker provider.
```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    # ... other configuration ...
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.popesco.io`)
      - traefik.http.routers.jellyfin.middlewares=local-ip-whitelist@file
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
```
The key difference is that instead of a local Traefik instance reading these labels, traefik-kop reads them and creates corresponding entries in Redis. The central Traefik then picks up these Redis entries and creates the routes.
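Concretely, the Redis provider consumes Traefik's standard key-value layout, so for the Jellyfin example above traefik-kop ends up writing keys along these lines (illustrative values, assuming a service host IP of 10.128.10.50):

```
traefik/http/routers/jellyfin/rule = Host(`jellyfin.popesco.io`)
traefik/http/routers/jellyfin/middlewares/0 = local-ip-whitelist@file
traefik/http/routers/jellyfin/service = jellyfin
traefik/http/services/jellyfin/loadBalancer/servers/0/url = http://10.128.10.50:8096
```

When a route isn't showing up, inspecting these keys with `redis-cli -h <CENTRAL_TRAEFIK> keys 'traefik/*'` is a quick way to tell whether the problem is on the kop side or the Traefik side.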
Important Considerations
Network Connectivity: Your central Traefik needs direct access to service ports on distributed hosts. Unlike a local setup where Traefik can use Docker’s internal networking, here it needs to reach the actual exposed ports.
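In practice that means each routed container has to publish its port on the host; a service reachable only on an internal Docker network is invisible to the central Traefik. For the Jellyfin example, something like:

```yaml
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - 8096:8096   # must be reachable from the central Traefik VM
```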
Middleware References: Middlewares defined in your file provider can be referenced from Redis-provisioned routes using the `@file` suffix, as shown in the Jellyfin example above.
Service Naming: Be mindful of service names to avoid conflicts between different hosts. Consider prefixing with the hostname if needed.
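For instance, if two hosts each ran their own Grafana (a hypothetical case), prefixing the router and service names with the hostname keeps their Redis keys from colliding:

```yaml
    labels:
      - traefik.enable=true
      - traefik.http.routers.media01-grafana.rule=Host(`grafana-media.<DOMAIN.tld>`)
      - traefik.http.services.media01-grafana.loadbalancer.server.port=3000
```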
Why This Architecture
This architecture gives you the best of both worlds:
- Centralized management: One Traefik instance, one certificate store, one entry point
- Dynamic configuration: Services can be added/removed without touching the central config
- Familiar syntax: Standard Docker labels work exactly as expected
- Mixed environments: Static file-based routes work alongside dynamic Redis routes
The result is a scalable, maintainable setup that grows with your homelab while keeping the operational simplicity of single-host deployments.
2025-06-06