We're happy to announce that the K3s developer VPS is now live on LumaDock!
It's a single-node Kubernetes cluster running on a VPS, pre-configured with the stack most developers end up assembling by hand anyway: Helm 3, Traefik, persistent volumes, metrics-server, and a kubeconfig generated on first boot. You SSH in, pull the config, run kubectl get nodes, and the cluster is yours. The whole setup takes about as long as making a coffee.
You get full root on the host, a certified Kubernetes API that behaves the way your production cluster does, and versions pinned so your manifests work identically across nodes ordered months apart.
Why we built it
Running Kubernetes as a solo developer or small team has been an awkward middle ground for a while. Minikube and kind work until you want real ingress, persistent state across reboots, or a cluster your teammates can reach.
Managed providers like EKS, GKE, and AKS work beautifully once you're running serious production, but the flat control plane fee (around $70-75/month before you add any workers) makes them a tough sell for staging environments, side projects, or Kubernetes learners.
We kept hearing from developers stuck between those two options. The K3s developer VPS fills that gap with a real kubectl API, reachable from your laptop, that survives reboots and behaves like a grown-up cluster. One node, yours, configured and ready.
What ships on every node
Every plan includes the same pre-installed stack with versions pinned for consistency:
- K3s v1.34.6, the certified Kubernetes distribution maintained by SUSE
- Helm v3.20.1, ready to add any chart repository
- Traefik as the default ingress controller for HTTP and HTTPS routing
- CoreDNS for cluster DNS
- local-path-provisioner for automatic PVC binding against local NVMe
- metrics-server so kubectl top nodes and kubectl top pods return data from boot
- Debian 12 (bookworm) as the host OS
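As a sketch of what the local-path provisioner means in practice: a claim like the one below (names illustrative) binds automatically against the node's NVMe, with no StorageClass setup on your part. In K3s, local-path is the default class, so the storageClassName line could even be omitted.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path   # K3s default; provisioned on local disk
  resources:
    requests:
      storage: 1Gi
```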
Every tier runs on AMD EPYC compute with NVMe storage, unmetered bandwidth, a public IPv4, a 1 Gbps NIC, and a daily snapshot with 24h retention. You get full root access to the host, which means you can swap Traefik for ingress-nginx, modify K3s startup flags, install a different CNI, or add cert-manager on day one. The default install is a starting point, not a cage.
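As one example of that flexibility, swapping Traefik out is a small config change rather than a reinstall. A sketch, assuming the standard K3s install layout (K3s reads its CLI flags from /etc/rancher/k3s/config.yaml):

```yaml
# /etc/rancher/k3s/config.yaml
# Disable the bundled Traefik so you can install ingress-nginx via Helm instead.
disable:
  - traefik
```

Restart the k3s service afterwards and the bundled ingress is gone, leaving the port free for whichever controller you prefer.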
The control plane is included
Worth addressing up front because it's the first question that comes up.
On managed Kubernetes providers, the control plane fee runs independent of worker capacity. You pay around $70-75/month per cluster before your workloads have consumed any CPU. That pricing makes sense for production environments running dozens of services with serious availability requirements. It stops making sense for staging clusters, learning environments, and small projects.
In single-node K3s, the control plane runs on the same node as your workloads, which is how K3s was designed. Your plan price covers the full cluster. When your requirements grow past what one node can handle, a managed multi-master setup is the right next step and we'll happily point you there.
What developers are building on it
We ran a private beta ahead of launch. Seven use cases came up often enough to highlight.
Self-hosted n8n automation hub
Developers and small agencies running n8n on their own K3s node to automate client workflows: connecting CRMs, sending Slack alerts, syncing spreadsheets, triggering emails. Helm deploys n8n in a single command, persistent storage keeps workflows safe across restarts, and the data stays on infrastructure you control end-to-end. A popular first deployment for agencies handling automation work for their clients.
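A minimal hand-rolled sketch of that pattern, assuming the upstream n8nio/n8n image and its default data directory (resource names and sizes are illustrative; the Helm chart route accomplishes the same thing in one command):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 1
  selector:
    matchLabels: {app: n8n}
  template:
    metadata:
      labels: {app: n8n}
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n
          ports:
            - containerPort: 5678
          volumeMounts:
            - name: data
              mountPath: /home/node/.n8n   # workflow data survives pod restarts
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: n8n-data
```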
A staging environment that mirrors production
Solo developers and small teams running production on AWS or GCP, needing a staging environment that behaves identically. Same Kubernetes manifests, same ingress rules, same Helm charts. A K3s node gives you that real Kubernetes environment for testing at a fraction of what a managed staging cluster costs. Push to staging, push to production, identical kubectl on both ends.
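One common way to keep the two environments genuinely identical is a Kustomize layout: shared base manifests, plus a small overlay per environment. A sketch (directory names and the patch file are illustrative):

```yaml
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                    # the exact manifests production uses
patches:
  - path: replicas-patch.yaml     # staging-only tweaks, e.g. fewer replicas
```

Then kubectl apply -k overlays/staging against the K3s node and kubectl apply -k overlays/production against the managed cluster — the base never forks.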
SaaS MVP infrastructure
Founders and indie developers building a SaaS in its early stages, running the entire stack (API, database, background workers, Redis cache) on a single K3s node. Everything in containers, everything managed by Kubernetes, ready to migrate to a larger cluster when the traffic justifies it. No DevOps hire needed in the early stage, and no cloud provider lock-in when you do need to scale.
Portfolio and client project hosting for freelancers
Freelance developers hosting multiple client projects on one K3s node, using Traefik ingress to route different domains to different deployments. Each client gets their own namespace, their own persistent volume, their own subdomain. One node, clean separation, resource quotas keeping anyone from starving the others. Tidier than any shared hosting arrangement.
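The namespace-per-client pattern might look like this: a ResourceQuota so no client starves the others, and a standard Ingress host rule that Traefik picks up (names, domains, and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: client-a-quota
  namespace: client-a
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-a-site
  namespace: client-a
spec:
  rules:
    - host: clienta.example.com       # each client's domain routes here
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-a-web    # assumed Service in the namespace
                port:
                  number: 80
```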
Learning Kubernetes properly
Developers preparing for CKA or CKAD certifications who want a real cluster to practice on rather than a Minikube environment that behaves differently from production Kubernetes. A K3s node gives you a genuine kubectl environment at a small fraction of what a managed service would cost during study prep. Ingress handling real traffic, PVCs persisting across reboots, metrics reflecting real resource pressure, DNS behaving as it will in production.
Self-hosted CI/CD runners
Developers and small teams running their CI/CD pipeline (Gitea Actions, Drone, Woodpecker, Tekton) on a K3s node. Build jobs run in isolated Kubernetes pods, artifacts stored on persistent volumes, the whole pipeline self-contained and private. Predictable monthly cost instead of per-minute billing that balloons during busy sprints.
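Whichever runner you pick, the isolation model underneath is the same: each build is a pod. A bare Kubernetes Job shows the shape (image, command, and PVC name are placeholders; real runners generate these objects per build):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: build-1234
spec:
  backoffLimit: 0            # a failed build fails; it doesn't retry
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: golang:1.22                 # placeholder toolchain image
          command: ["sh", "-c", "go test ./... && go build ./..."]
          volumeMounts:
            - name: artifacts
              mountPath: /artifacts          # build outputs persist here
      volumes:
        - name: artifacts
          persistentVolumeClaim:
            claimName: ci-artifacts          # assumed pre-created PVC
```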
Self-hosted uptime monitoring
Uptime Kuma deployed on K3s to monitor client sites and APIs. Because it runs on Kubernetes, a crashed pod restarts automatically. Persistent volume keeps monitoring history intact across restarts. Traefik routes the dashboard to your subdomain. One deployment, minimal maintenance.
Getting connected
Provisioning sends an email with the node IP, root credentials, and the commands you need. Three steps from welcome email to working kubectl.
1. SSH into your node
ssh [email protected]
2. Pull the kubeconfig to your local machine
scp [email protected]:/root/kubeconfig.yaml ~/.kube/config
Make sure ~/.kube/ exists locally first, or scp will complain. And if you already have a ~/.kube/config for another cluster, copy this file somewhere else and point the KUBECONFIG environment variable at it rather than overwriting.
3. Verify the cluster
kubectl get nodes
The node should appear with status Ready. From that point, your local kubectl is pointed at a live Kubernetes API. Helm repos add without issue, charts install, ingress resources route.
Smoke test in 30 seconds
If you want to verify the full Helm path end-to-end:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-redis bitnami/redis --set auth.enabled=false
kubectl get pods
Pods running, Redis reachable, persistent volume claimed. If that works, everything works.
Recommended hardening on first login
We ship with password SSH enabled for the initial connection so getting in doesn't depend on key setup going right the first time. Before you do anything else, a few quick steps:
- Change the root password from the one in the welcome email.
- Add your SSH public key and disable password authentication.
- Restrict SSH via the firewall rules in the client area if your IP is static. Public IPv4s see constant bot traffic, and a few minutes of hardening upfront saves a lot of noise in auth.log later.
Scope and limitations
A single node is a single node. Multi-master HA, worker autoscaling, regional failover — those require a different architecture, and we'd point you toward a managed provider or a custom multi-node cluster on dedicated hardware for those requirements.
The K3s developer VPS is built for the scenarios where one node is genuinely enough, which covers more developer work than managed-cluster pricing tiers would suggest: learning, staging, side projects, MVPs, freelance hosting, CI/CD, internal tooling.

