As a solo dev who just started his second cluster a few days ago... I like it.
Upfront costs are a little higher than I'd like. I'm paying $24 for a droplet plus $12 for a load balancer, plus maybe $1 for a volume.
I could probably run my current workload on a $12 droplet, but apparently Cilium is a memory hog, which rules out the smaller droplet, and it doesn't seem practical to skip the load balancer.
But now I can run several distinct apps on different frameworks and versions of php, node, bun, nginx, whatever, spin them up and tear them down in minutes, and I kind of love that. And if I ever get a significant number of users I can press a button and scale up or out.
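Each "app" ends up as a couple of small manifests, roughly this shape (names and image here are placeholders, not my actual setup):

```yaml
# One self-contained app: a Deployment running the container, a Service in front of it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-php-app                # hypothetical name
spec:
  replicas: 1                     # bump this (or add an HPA) if users ever show up
  selector:
    matchLabels:
      app: my-php-app
  template:
    metadata:
      labels:
        app: my-php-app
    spec:
      containers:
        - name: web
          image: ghcr.io/example/my-php-app:8.3   # made-up image; any runtime works
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-php-app
spec:
  selector:
    app: my-php-app
  ports:
    - port: 80
      targetPort: 8080
```

`kubectl apply -f app.yaml` spins it up, `kubectl delete -f app.yaml` tears it down.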
I don't have to muck about with pm2 or supervisord or cronjobs, that's built in. I don't have to muck about with SSL certs/certbot, that's built in.
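The cron part, for instance, is just another manifest (schedule, image and script are made up here):

```yaml
# Replaces a crontab entry: the cluster schedules it and restarts it on failure,
# so there's no pm2/supervisord babysitting a long-lived process.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # hypothetical job
spec:
  schedule: "0 3 * * *"           # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: ghcr.io/example/report:latest   # made-up image
              args: ["node", "generate-report.js"]   # made-up script
```

(The cert side is usually cert-manager or the ingress controller doing the Let's Encrypt dance, so technically an add-on rather than core K8s, but it's install-once and forget.)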
I have SSO across all my subdomains. That was a little annoying to get running and took a day and a half to figure out, but it was a one-time thing, and the config is all committed in YAML, so if I ever forget how it works I have something to reference instead of trying to remember 100 shell commands I randomly ran on a naked VPS.
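Roughly, the YAML for the SSO piece can look like this if you go the ingress-nginx + oauth2-proxy route (just one common approach, not necessarily the exact stack here; hostnames are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one                   # hypothetical app
  annotations:
    # Every request is checked against the auth service before reaching the app.
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    # Unauthenticated users get bounced to the login flow, then sent back.
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  rules:
    - host: app-one.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-one
                port:
                  number: 80
```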
Upgrades are easy too. I can bump the distro or whatever package without much fuss.
One downside is that deploys take a minute or two instead of being sub-second.
It took weeks of tinkering to get a good DX going, but I've happily settled on DevSpace. Again, it takes a couple of minutes to start up and probably uses oodles of RAM where a bare process would need milliseconds, but I can maintain 10 different projects without trying to keep my dev machine in sync with everything.
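For anyone who hasn't seen DevSpace: a devspace.yaml is roughly this shape (project name, image and paths are placeholders, and the exact fields depend on the config version, so treat it as a sketch):

```yaml
version: v2beta1
name: my-project                  # hypothetical project

deployments:
  app:
    helm:
      chart:
        name: ./chart             # made-up local chart

dev:
  app:
    imageSelector: ghcr.io/example/my-app   # made-up image
    sync:
      - path: ./:/app             # live-sync local files into the running pod
    ports:
      - port: "3000"              # forward the app port back to localhost
```

`devspace dev` then deploys into the cluster and hot-syncs local changes into the pod, which is where the couple-of-minutes startup comes from.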
So some trade-offs but I've decided it's a net win after you're over the initial learning hump.
> I can run several distinct apps running different frameworks and versions

> don't have to muck about with pm2 or supervisord or cronjobs, that's built in. I don't have to muck about with SSL certs/certbot
But doesn't literally any PaaS or provider with a "run a container" feature (AWS Fargate/ECS, etc.) fit the bill without the complexity, moving parts, and failure modes of K8s?
K8s makes sense when you need a control plane to orchestrate workloads on physical machines - its complexity and moving parts are somewhat justified there because that task is actually complex.
But to orchestrate VMs from a cloud provider, where the hypervisor and control plane already offer all of the above? Why take on the extra overhead by stacking yet another orchestration layer on top?