Hacker News

supriyo-biswas (yesterday at 5:49 PM)

Criticisms of Kubernetes generally come from a few places:

- People who would prefer their own way of doing things, whether that's deployments on VMs or some sort of simpler cloud provider.

I had the same opinion a few years ago, but have kind of come to like it, because I can cleanly deploy multiple applications on a cluster in a declarative fashion. I still don't buy "everything on K8s", though; my personal setup is to buy a set of VMs from an infrastructure provider, set up a primary/replica database on two of them, and use the rest as Kubernetes nodes.
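The declarative part is the draw: each app's deployment can live in a manifest like the sketch below (the names, image, and secret are hypothetical, not from any real setup), and `kubectl apply -f` makes the cluster converge to it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL                  # points at the DB VMs outside the cluster
              valueFrom:
                secretKeyRef:
                  name: db-credentials            # assumed Secret
                  key: url
```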

- People who run Kubernetes at larger scales and have had issues with it.

This usually needs some custom scaling work; the best way to work around it if you're managing your own infra[1] is to split the cluster into many small independent clusters, akin to "cellular deployments"[2] or the "bulkhead pattern"[3]. Alternatively, if you're at the point where you have a 500+ node cluster, it may not be a bad idea to start using a hyperscaler's managed service, as they have typically done some of the scaling work for you, usually by replacing etcd and the RPC layer with something more stable.
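The routing layer in front of such cells can be trivial. A minimal sketch (cluster names and tenant IDs are made up) that pins each tenant to one small cluster with rendezvous hashing, so adding or removing a cell only moves the tenants that were on it:

```python
import hashlib

def cell_for(tenant_id: str, cells: list[str]) -> str:
    """Rendezvous (highest-random-weight) hashing: score every cell
    against the tenant and pick the highest. Removing a losing cell
    never moves a tenant, so there is no global reshuffle."""
    def score(cell: str) -> int:
        digest = hashlib.sha256(f"{tenant_id}:{cell}".encode()).hexdigest()
        return int(digest, 16)
    return max(cells, key=score)

cells = ["cluster-a", "cluster-b", "cluster-c"]  # hypothetical small clusters
# The assignment is deterministic across calls and processes.
assert cell_for("tenant-42", cells) == cell_for("tenant-42", cells)
```

The same function can run in a stateless router or an admission webhook; no shared assignment table is needed.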

- People who need a deep level of orchestration

Examples of such use cases include running a CI system or a container service like fly.io. For these, I agree that K8s is often overkill: you have to keep the two datastores in sync and generate huge load on the kube-apiserver and the cluster datastore in the process, and it's often better to just bring up Firecracker MicroVMs or similar yourself.
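"Bring up Firecracker yourself" is less work than it sounds: the VMM is driven by a handful of REST calls against a unix socket. A rough sketch of the boot sequence payloads, assuming default kernel args and placeholder file paths (check the Firecracker API docs before relying on field names):

```python
import json

def microvm_boot_calls(kernel: str, rootfs: str) -> list[tuple[str, str, dict]]:
    """Return the (method, path, body) API calls to boot one MicroVM.
    These would be issued over Firecracker's API unix socket."""
    return [
        ("PUT", "/boot-source", {
            "kernel_image_path": kernel,
            "boot_args": "console=ttyS0 reboot=k panic=1",  # commonly used defaults
        }),
        ("PUT", "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": rootfs,
            "is_root_device": True,
            "is_read_only": False,
        }),
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]

# /tmp paths are placeholders for a real kernel image and ext4 rootfs.
for method, path, body in microvm_boot_calls("/tmp/vmlinux", "/tmp/rootfs.ext4"):
    print(method, path, json.dumps(body))
```

Your orchestrator's state then lives in one datastore of your choosing, instead of being mirrored into etcd via the kube-apiserver.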

I should say that teams writing their first orchestration process almost always reach for Kubernetes without realizing this pitfall. I've learned to keep my mouth shut about it, though, as I recently started a small religious war at my current workplace by raising this exact point.

[1] Notice how I don't say "on-prem", because the hyperscaler marketing teams would rather have you believe in two extremes of either using their service or running around in a datacenter with racks, whereas you can often get bog-standard VMs from Hetzner or Vultr or DigitalOcean and build around that.

[2] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...

[3] https://learn.microsoft.com/en-us/azure/architecture/pattern...


Replies

rfoo (yesterday at 5:55 PM)

Another case: people who want to run workloads that are inherently incompatible with the Kubernetes networking model.

For example:

* For some cursed reason you want every single instance of a large batch job to see just one NIC in its container, all with the same IP, NAT'd to the outside world. Ingress? What ingress? This is a batch job!

* Like the previous point, except that your "batch job" now somehow has multiple containers per instance, and they should be able to reach each other by domain name.
