I love the concept of gVisor; it's surprising to me that it seemingly hasn't gotten more real-world traction. Even GHA boots you a fresh machine for every build, when probably 80%+ of them could run just fine in a gVisor sandbox.
I'd be curious to hear from someone at Google whether gVisor gets a ton of internal use there, or whether it really was built mainly for GCP/GKE.
gVisor is difficult to deploy in practice. It's a syscall proxy rather than a virtualization mechanism (even though it does have a KVM mode).
This causes a few issues:
- the proxying can be slightly slower
- it's not a VM, so you can't use things such as confidential compute (memory encryption)
- you actually can't intercept all syscalls (most work, but there are a few edge cases where it won't, and a VM will work just fine)
On the flip side, some kernel vulnerabilities that gVisor blocks would still be exploitable in a VM (where it wouldn't be a hypervisor escape, but you'd still be able to run code as the guest kernel).
This is to say: there are some good use cases for gVisor, but fewer of them than for (micro) VMs in general.
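For context on what "deploying" gVisor usually means: it ships as an OCI runtime called `runsc` that you register alongside the default `runc`. A rough sketch of the standard Docker wiring (paths assume a typical install; check the gVisor docs for your setup):

```shell
# Register runsc as an additional Docker runtime by adding it to
# /etc/docker/daemon.json (merge with any existing config):
#
# {
#   "runtimes": {
#     "runsc": { "path": "/usr/local/bin/runsc" }
#   }
# }
#
# After restarting the Docker daemon, opt a container into the sandbox
# per-run; everything else keeps using runc:
docker run --rm --runtime=runsc alpine uname -a
# Inside the sandbox, syscalls are handled by gVisor's userspace kernel
# (the Sentry), not the host kernel.
```

The nice part of this design is that isolation is a per-container choice rather than a cluster-wide one, which is also how the Borg-side "pick your isolation mechanism" setup described below works conceptually.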
Google developed both gVisor and crosvm (which Firecracker and others are based on) and uses both in different products.
AFAIK, there isn't a ton of gVisor use internally beyond the products it already ships in, though some teams use it on Borg (there's a "sandbox multiplexer" called vanadium where you can pick and choose your isolation mechanism).
We used gVisor in Kythe (the semantic indexer for the monorepo). That is, for the guts of running it on Borg, not the open-source indexers part.
For indexing most languages we didn't need it, because they were pretty well supported on the Borg stack with all the Google internals. But Kythe indexes 45 different languages, so inevitably we ran into problems with some of them. I think it was the newer Python indexer?
> really was mainly for GCP/GKE
I mean... I don't know. That could also be true. There's a whole giant pile of internal software at Google that starts out as "built for <XYZ>" but then gets traction and starts being used in a ton of other unrelated places. It's part of the glory of the monorepo: visibility into tooling is good, and reusability is pretty easy (and performant) because everyone is on the same build system, etc.
Google Cloud Functions and Cloud Run both started as gVisor sandboxes and now have "gen2" runtimes that boot a full VM.
Poor I/O performance and a couple of missing syscalls made it hard to predict how your app was going to behave before you deployed it.
Another example of a switch like this is WSL 1 to WSL 2 on Windows.
It seems like unless you have a niche use case, it's hard to truly replicate a full Linux kernel in userspace.