They have built an orchestrator, not Kubernetes. There is one key difference: they know this thing end-to-end, down to every single bolt and piece of duct tape (with the possible exception of Docker internals).
And that's a very important distinction when it comes to maintaining complex systems. This could've changed with LLMs (I'm still adjusting to what the new capabilities mean for various decision-making logic), but before machine intelligence, debugging an issue with Kubernetes could be a whole world of pain.
IMO, Kubernetes isn't inevitable, and this seems to paint it as such.
K8s is well suited to dynamically scaling a SaaS product delivered over the web. When you get outside this scenario - for example, on-prem deployments or single-node "clusters" running K8s just for API compatibility - it seems like either overkill or a bad choice. Even when cloud-deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider's services and APIs.
There are also folks who understand the innards of K8s very well that have legitimate criticisms of it - for example, this one from the MetalLB developer: https://blog.dave.tf/post/new-kubernetes/
Before you deploy something, actually understand what the pros/cons are, and what problem it was made to solve, and if your problem isn't at least mostly a match, keep looking.
The saddest part about Kubernetes is… after you set it all up, you still need a hacky deploy.sh to sed in the image tag to deploy! And pretty soon you’re back to “my dear friend you have built a Helm”. And so the configuration clock continues ticking…
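For the uninitiated, that hacky deploy.sh usually looks something like this - a minimal sketch, with assumed file names and an `IMAGE_TAG` placeholder (none of these come from the thread):

```shell
#!/bin/sh
# Minimal sketch of the "sed in the image tag" deploy script the comment
# describes. File names and the IMAGE_TAG placeholder are assumptions.
set -eu

TAG="${1:-latest}"   # e.g. the git SHA handed over by CI

# Create an example template if one doesn't exist (only to keep this
# sketch self-contained; a real repo would check the template in).
[ -f deployment.tmpl.yaml ] || printf 'image: myapp:IMAGE_TAG\n' > deployment.tmpl.yaml

# Substitute the placeholder with the real tag.
sed "s/IMAGE_TAG/${TAG}/g" deployment.tmpl.yaml > deployment.yaml

# Ship it to the cluster (left commented out in this sketch).
# kubectl apply -f deployment.yaml
```

Which is exactly the templating-and-release logic Helm exists to absorb - hence "you have built a Helm".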
As someone rolling their self-hosted stuff via Compose and shell scripts instead of K8s specifically for the simplicity of the experience, this is 100% why you need to understand what Kubernetes solves before writing it off entirely.
I'm not doing overlay networks, I'm using a single bare-metal host, and I value the hands-on Linux administration experience versus the K8s cluster admin experience. All of these are reasons I specifically chose not to use Kubernetes.
The second I want HA, or want to shift from local VLANs to multi-cloud overlays, or I don't need the local Linux sysadmin experience anymore? Yeah, it's K8s at the top of the list. Until then, my solution works for exactly what I need.
All it would take to make this post actually good would be to replace "Kubernetes" with "orchestrator"; that would also keep the symmetry with the post it's riffing on, about building compilers (it's not "Dear friend you have built a GHC").
PREACH!
I run K8s at home. I used to do docker-compose - and I'd still recommend that to most people - but even for my 1 little NUC with 4vcpu / 16Gi Homelab, I still love deploying with K8s. It's genuinely simpler for me.
If anyone's looking for inspiration, my setup:
* ArgoCD pointed to my GitLab repos
* GitLab repos contain Helm charts
* Most of the Helm charts contain open-source charts as subcharts, with version constraints like (e.g.) `version: ~0` - meaning I automatically receive every update until major version `1`
* Updating my apps usually consists of logging into the UI, reviewing the infrastructure and image tag updates, and manually clicking sync. I do this once every few months
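For anyone unfamiliar, the subchart pinning mentioned above lives in the wrapper chart's Chart.yaml; a sketch with placeholder chart and repository names:

```yaml
# Chart.yaml of a hypothetical wrapper chart (names are made up).
apiVersion: v2
name: my-app
version: 0.1.0
dependencies:
  - name: some-upstream-chart
    version: "~0"    # any 0.x release: minor/patch updates, never 1.0
    repository: https://charts.example.com
```

Helm resolves the `~0` constraint against the repo on `helm dependency update`, so ArgoCD syncs pick up new upstream releases within that range.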
My next little side project: Autoscaling into the cloud (via a secure WireGuard tunnel) when I want to expand past my current hardware limitations
I can tell you how vendors deliver a software solution that runs on Kubernetes: very poorly.
The needed tweaks, the ability to customize things, basically goes to zero because the support staff is technical about the software, but NOT about Kubernetes.
I am not joking: a recent deployment required 3x VMs for Kubernetes, each VM having 256 gigabytes of RAM; then a separate 3x VMs for a different piece. 1.5TB of RAM to manage fewer than 1200 network devices (routers etc. that run BGP).
No one knew, for instance, how to lower MongoDB's resource usage (because of course you need it!), despite the fact that the clustered VMware install uses a very fast SSD storage solution, so the cache is unlikely to accelerate anything; over 128GB of RAM is being burned on caching results coming back from SSDs that already deliver many GB/s of throughput.
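For what it's worth, that cache is tunable: WiredTiger's in-memory cache defaults to roughly half of (RAM - 1GB), and can be capped in mongod.conf. A hedged sketch - the 8GB figure is purely illustrative, not a recommendation from the thread:

```yaml
# mongod.conf: cap the WiredTiger cache instead of letting it take
# ~50% of a 256GB VM. The size here is an illustrative guess.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
```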
Kubernetes is a powerful tool for complicated problems. If it seems complicated, you probably don't have a complicated deployment problem.
But really this applies to any powerful tool. If you just need to measure a voltage, a 4-channel oscilloscope probably seems too complicated, too.
Literally just finished building a personal orchestrator system I wanted, and had this very much in the back of my mind.
Ended up doing a mix: built on Compose for now, but in a manner that'll lift and shift to K8s easily enough. It's containers talking over a network either way.
Discussed at the time:
Dear friend, you have built a Kubernetes - https://news.ycombinator.com/item?id=42226005 - Nov 2024 (277 comments)
Found this the same day I published this: https://github.com/oddur/yoink
Kubernetes was overkill (I do that all day, 5 days a week); Kamal was too restrictive, so I found myself rolling out Yoink. Just what I need from k8s, but simple enough that I can point it at a bare-metal machine on Hetzner that can easily run all my workloads.
I somehow feel that it's actually the opposite. It should be "Dear kubernetes user, you have just built a shell".
I've experienced something like this at work, but with a data warehouse instead, and it happened multiple times (to be fair, data engineering is still fairly new where I'm from).
One example was an engineer wanted to build an API that accepts large CSV (GBs of credit reports) to extract some data and perform some aggregations. He was in the process of discussing with SREs on the best way to process the huge CSV file without using k8s stateful set, and the solution he was about to build was basically writing to S3 and having a worker asynchronously load and process the CSV in chunks, then finally writing the aggregation to db.
I stepped in and told him he was about to build a data warehouse. :P
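The core of that worker - streaming a large CSV and folding it into a running aggregate instead of loading gigabytes at once - can be sketched with just the standard library; field names here are invented for illustration:

```python
# Sketch of a streaming CSV aggregation worker (hypothetical fields).
# csv.DictReader yields one row at a time, so memory use stays flat
# no matter how large the file is.
import csv
from collections import defaultdict

def aggregate_csv(path, key_field, value_field):
    """Sum value_field grouped by key_field, streaming row by row."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row[key_field]] += float(row[value_field])
    return dict(totals)
```

In the real pipeline this would run asynchronously against the file pulled from S3 and write its result to the DB - which is the point where it starts looking like a data warehouse.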
Unless you’re in Erlang world (Elixir, Gleam..) and all that is already baked into OTP and the BEAM. You can go on holiday knowing it will be a while longer before you need to break out the pods (and at that scale, you will be able to afford a colleague or two to help you).
Criticisms of Kubernetes generally come from a few places:
- People who would prefer their own way of doing things, whether that's deployments on VMs or using some simpler cloud provider.
I had the same opinion a few years ago, but have kind of come to like it, because I can cleanly deploy multiple applications on a cluster in a declarative fashion. I still don't buy "everything on K8s"; my personal setup is a set of VMs bought from an infrastructure provider, a primary/replica database on two of them, and the rest used as Kubernetes nodes.
- People who run Kubernetes at larger scales and have had issues with it.
This usually needs some custom scaling work; the best way to work around it if you're managing your own infra[1] is to split the cluster into many small independent clusters, akin to "cellular deployments"[2]/the "bulkhead pattern"[3]. Alternatively, if you are at the point where you have a 500+ node cluster, it may not be a bad idea to start using a hyperscaler's service, as they have typically done some of the scaling work for you, usually by replacing etcd and the RPC layer with something more stable.
- People who need a deep level of orchestration
Examples of such use cases may be running a CI system or a container service like fly.io. For these, I agree that K8s is often overkill: you need to keep the two datastores in sync and generate huge loads on the kube-apiserver and the cluster datastore in the process, and it might often be better to just bring up Firecracker MicroVMs or similar yourself.
Although, I should say that teams writing their first orchestration process almost always reach for Kubernetes without realizing this pitfall. I've learned to keep my mouth shut, though, as I recently started a small religious war at my current workplace by raising this exact point.
[1] Notice how I don't say "on-prem", because the hyperscaler marketing teams would rather have you believe in two extremes of either using their service or running around in a datacenter with racks, whereas you can often get bog-standard VMs from Hetzner or Vultr or DigitalOcean and build around that.
[2] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...
[3] https://learn.microsoft.com/en-us/azure/architecture/pattern...
Why do both posts mention Docker Compose but not Docker Swarm? I've been using it for my projects for a long time, and it's so nice: similar syntax, easy networking, rollout strategies, and it's easy to add nodes to the cluster.
You can have one template docker-compose.yaml file and separate deployment files for different envs, like docker-compose.dev.yaml and docker-compose.prod.yaml.
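A sketch of that override pattern, with placeholder service and image names: the base file holds everything shared, and each env file only carries what differs.

```yaml
# docker-compose.prod.yaml (hypothetical): overrides merged on top of
# the base docker-compose.yaml; only the differing fields appear here.
services:
  web:
    image: registry.example.com/web:stable
    deploy:
      replicas: 3
```

Compose merges files in order, later ones winning: `docker compose -f docker-compose.yaml -f docker-compose.prod.yaml up -d` (or `docker stack deploy` with the merged config for Swarm).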
I think swarm is really underrated
I need clarifications.
I see Docker as a way to avoid having a standard dev platform for everyone in the company, so that the infra team doesn't have to worry about patch xyz for library abc; they only have to run Docker.
But with all the effort put into coordinating Docker, K8s, and the whole shebang, isn't it ultimately easier to force a standard platform and let it slowly evolve over time?
Is docker another technical tool that tries to solve a non-technical problem?
After reading this and remembering an old hobby project, I decided to switch the deploy from a systemd service to PM2, which apparently has rolling deployments without needing Docker engine (for those of us minmaxing instance RAM).
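For reference, the PM2 side of that looks roughly like the following. A minimal sketch of a hypothetical `ecosystem.config.js` (the app name and paths are made up); cluster mode is what makes `pm2 reload`'s zero-downtime rolling restart possible:

```javascript
// ecosystem.config.js (hypothetical names): PM2 process definition.
module.exports = {
  apps: [{
    name: "hobby-app",
    script: "./server.js",
    instances: 2,            // cluster mode with 2 workers
    exec_mode: "cluster",    // required for rolling reloads
    max_memory_restart: "200M"
  }]
};
```

Then `pm2 start ecosystem.config.js` brings it up, and `pm2 reload hobby-app` restarts the workers one at a time, keeping the app serving throughout.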
I'm just about to give OP's premise another go. Compose just feels so much better as an abstraction, especially for small and medium setups: close to the optimum of expressiveness, describing what's needed without boilerplate. The missing pieces also seem to be in the Compose-compatible `docker stack`, a.k.a. the new Docker Swarm, which I ignored for probably too long because I assumed it was the discontinued old Swarm. Even if the new swarm mode sucks, how hard can it be to make something Compose-shaped versus running K8s?
NOOO you have to use my shitpile of nested yaml with the same dependency sprawl cancer as modern javascript. You can't just upload a binary to your own servers and host it there you need to overthink everything and make an extremely simple process overcomplicated just install one more side car and fifty more dependencies on your helm chart bro and then we can move on to figuring out CSI it should only take like a month to get it working properly I promise!!!!!
I'll just say the quiet part out loud: A pile of shell scripts that no one else understands is job security.
If your work is easily googleable/parseable by an AI, why would anyone pay you?
Some days, it would be better to build a working not-Kubernetes than to debug the not-working Kubernetes.
> I know you wanted to "choose boring tech" to just run some containers.
The people advocating for boring tech generally aren't interested in containers.
You can just run programs.
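In that spirit, "just run programs" can still get restarts and logging for free from plain systemd. A minimal sketch of a unit file - path and names are hypothetical:

```ini
# /etc/systemd/system/myapp.service (hypothetical name and binary path)
[Unit]
Description=My app, run as a plain process
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

`systemctl enable --now myapp` and you have supervision, boot-time startup, and `journalctl` logs with no containers involved.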
See also Greenspun's tenth rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
> Ah, but wait! Inevitably, you find a reason to expand to a second server
>> The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
-- Donald Knuth, Computer Programming as an Art (1974)
EDIT:
> Except if you quit or go on vacation, who will maintain this custom pile of shell scripts?
Honestly? I don't care. There is a reason why I quit, and 99% of the time it's the pay. And if the company doesn't pay me enough to bother, then why should I? Why should I care about some company's future in the first place?
"Except if you quit or go on vacation, who will maintain this custom pile of shell scripts?" LLMs can reason about and fix them quite well.
This is obviously slightly exaggerated, but I do feel like this whenever people dismiss Kubernetes as either too complicated or not needed.
The response I always got when suggesting Kubernetes is "you can do all those things without Kubernetes"
Sure, of course. There are a million different ways to do everything Kubernetes does, and some of them might be simpler or fit your use case more perfectly. You can make different decisions for each choice Kubernetes makes, and maybe your decisions are more perfect for your workload.
However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.