They've built an orchestrator, not Kubernetes. There's one key difference: they know this thing end-to-end, down to every single bolt and piece of duct tape (with the possible exception of Docker internals).
And that's a very important distinction when it comes to maintaining complex systems. This may have changed with LLMs (I'm still adjusting to what the new capabilities mean for this kind of decision-making), but before machine intelligence, debugging an issue deep inside Kubernetes could be a whole world of pain.
And chances are, only they know it. If my role has enough cluster access, I can muddle through pretty much any Helm chart (with lots of cursing, yes), but it might take me days to set up whatever elaborate bespoke environment and script invocations are needed to replicate their current production setup.