Hacker News

K3k: Kubernetes in Kubernetes

73 points | by jzebedee | today at 4:00 AM | 40 comments

Comments

enrichman | today at 8:21 AM

Hi everyone! I’m one of the maintainers of K3k at SUSE.

It’s really exciting to see this on the front page. The project actually started during a SUSE Hackweek by my colleague Hussein. It was initially envisioned as a "Kubernetes version of k3d," but it evolved into something more ambitious and eventually became a real product. We’ve always been big believers in the power of open source. For the current default "shared" mode, we even experimented with Virtual Kubelet, another CNCF project, during our development process.

I’ll be hanging around the thread today, so if you have any questions about the history, the tech stack, or where we're headed next, feel free to ask!

matt123456789 | today at 6:43 AM

This is, if I had to guess, a monument to a small team's stubborn insistence that such a thing could be done at all. If I can hope for a reward for them, may it be that they are allowed to hand off maintaining it to another team.

kitd | today at 6:08 AM

Missed the opportunity to call it Kink ...

redrove | today at 5:58 AM

So this is basically vCluster[0] but Rancher branded?

[0] https://github.com/loft-sh/vcluster

ohnei | today at 11:33 AM

It doesn't seem to operate at a deep enough layer that it could be used to test Kubernetes and CRD upgrades against a cluster that hasn't yet been upgraded?

randomtoast | today at 8:08 AM

This approach carries significantly higher operational risk than running multiple Kubernetes clusters on separate VMs or physical hardware. If you update the host cluster that manages the virtual clusters and something goes wrong, you could bring down your entire fleet of Kubernetes clusters at once.

nonameiguess | today at 8:33 AM

Hacker News sure does love posting links to random GitHub repos with no context for why they were posted, then a bunch of comments come along and basically ask why.

Since I do have context: the original Rancher Labs CTO created k3s, one of the earliest severely stripped-down versions of Kubernetes, which bundles all of the required executables into a single multi-call binary in order to be able to run Kubernetes on a Raspberry Pi. Along the lines of kind, k3d was released to run k3s in Docker containers instead of full Linux hosts. The main use case is testing. We used it extensively in the early days of Air Force and IC cloud migrations that insisted on rehosting all systems in Kubernetes, so developers could have local targets to work with. Rancher eventually rebuilt its Kubernetes engine when Docker fell out of favor and based rke2 on k3s, but with the Kubernetes components running as static pods instead of embedded multi-call binaries, and with kubelet and containerd extracted from an embedded virtual filesystem to the host when rke2 first runs.
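For a sense of the local-testing workflow described above, a minimal k3d session looks roughly like this (a sketch, not from the k3k docs; the cluster name and node counts are arbitrary, and it assumes k3d, Docker, and kubectl are already installed):

```shell
# Create a throwaway k3s cluster inside Docker: one server, two agents.
k3d cluster create demo --servers 1 --agents 2

# kubectl is pointed at the new cluster; confirm the nodes came up.
kubectl cluster-info
kubectl get nodes

# Deploy whatever you're testing, then tear the whole thing down.
k3d cluster delete demo
```

Because the "nodes" are just containers, the whole cycle takes seconds rather than the minutes a VM-based cluster needs, which is what made it attractive as a developer-local target.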

When KubeVirt came out, Rancher also released an HCI product that uses it, Harvester, running on top of rke2 and Rancher's storage project Longhorn. This runs a full virtual machine manager with virtualized networking and storage, a la something like ESXi, vSAN, and vSphere, with Multus and the bridge CNI plugin providing the networking (it now has KubeOVN as well).

Harvester relies on being imported to and managed by Rancher to have things like SSO and Rancher's multi-cluster RBAC and node provisioners for Harvester to run guest clusters. A whole lot of customers migrating off of VMWare since the Broadcom acquisition want all of that, but without necessarily having an external Rancher. Early on, Harvester offered an experimental vCluster addon that created a guest cluster with Rancher installed on it and that automatically managed Harvester.

This had a lot of problems. I'm not going to rehash them because I don't want to come across as bashing vCluster, but it was not a tenable long-term option and it crashed hard on most who tried to use it. Since Rancher already had k3d, it was a pretty natural step to create their own virtualized Kubernetes that runs in Kubernetes by adapting k3d into k3k, which runs k3s in Kubernetes rather than in Docker. Now you can get a guest cluster to install Rancher onto, and get the full suite of Rancher features and a much better experience than the bare Harvester UI, without needing to run full VMs.
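The guest-cluster workflow sketched above looks roughly like the following. This is paraphrased from memory, not the project docs: the chart URL and the `k3kcli` subcommand shape are assumptions, so check the rancher/k3k README for the current equivalents.

```shell
# Install the k3k controller into the host cluster via its Helm chart
# (chart location is an assumption -- verify against the k3k README).
helm repo add k3k https://rancher.github.io/k3k
helm install k3k k3k/k3k --namespace k3k-system --create-namespace

# Create a virtual k3s cluster running inside the host cluster, and
# grab its kubeconfig so Rancher (or anything else) can be installed
# onto it as if it were a standalone cluster.
k3kcli cluster create mycluster
```

The point of the layering is that the virtual cluster gets its own API server and etcd, so installing a second Rancher there doesn't collide with anything on the host.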

Why not just install Rancher directly onto the same rke2 cluster that is running Harvester itself? Because that cluster already has one: developers embedded a Rancher instance to bootstrap Harvester and avoid duplicating work that was already done, but it was considered an implementation detail, never meant to be exposed to users. If you try to install a second Rancher to actually use, it will conflict with a whole bunch of resources that already exist, and it won't work.

It's a tangled mess of confusing layers, but that's the world we live in. It's why we still have IPv4, VLAN, VXLAN, virtual terminals, discretionary access control for Linux. We build on top of what is already there instead of rebuilding from scratch in a saner way. This isn't just how software works. It's why city designs rarely make sense. It's why life itself has vestigial anti-features. Cruft rarely disappears. It just gets buried underneath whatever comes next.

rjzzleep | today at 6:21 AM

Do Rancher side products generally reach a stable enough state that you would want to run mission-critical systems on them?

weitzj | today at 7:07 AM

I don’t understand how they are separating security in the virtual mode as they only mention pods. It seems every workload still shares the underlying node, even when in virtual mode. Take for example the OCI cache on the nodes. What about cache poisoning?

bloppe | today at 6:31 AM

What does k3k stand for? Can we just put whatever number we want between 2 letters now?

freakynit | today at 9:20 AM

Can we go deeper than two levels? (inception vibes..)

madduci | today at 6:19 AM

Nice, now we need K3Kind

2ndorderthought | today at 10:20 AM

Can someone explain what this even means? Explain it like I am a software engineer with 20 years' experience who has not yet found a strong use case for running Kubernetes beyond hand-holding cloud provider options.
