Given how slow protobufs and grpc are, I wonder if the socket transport would ever be the bottleneck to throughput here.
Changing transports means that if you want to move your grpc server process to a different box, you now have new runtime configuration to implement and support, and new performance characteristics to test.
I can see some of the security benefits if you are running on one host, but I also don't buy the advantages highlighted at the end of the article about using many different OSes and language environments on a single host. It seems like enabling and micro-optimising chaos instead of trying to tame it.
Particularly in the ops demo: statically linking a C++ grpc binary and standardising on a host OS and gcc-toolset doesn't seem that hard. On the other hand, if you're using e.g. a python rpc server, would you even be able to feel the impact of switching to vsock?
> Given how slow protobufs and grpc are, I wonder if the socket transport would ever be the bottleneck to throughput here.
I think this is supposed to be an option for when you want to pass stuff to the host quickly without writing another device driver or using some other interface, rather than a replacement for any rpc between VMs. "Being fast" is just a bonus.
For example, at our job we use a serial port for communication with the VM agent (it just passes some host info about where the VM is running, so our automation system can pick it up); this would be an ideal replacement for that.
And since it is "just a socket", stuff like this is pretty easy to set up: https://libvirt.org/ssh-proxy.html
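To illustrate the "just a socket" point: on Linux, a guest can talk to a host agent over vsock with the ordinary socket API, only the address family changes. A minimal sketch (the port number and payload are assumptions, not from the article; requires Linux with the vsock driver and Python 3.9+):

```python
import socket

# Well-known CID addressing the host/hypervisor from inside a guest.
VMADDR_CID_HOST = 2


def ask_host(port: int, payload: bytes) -> bytes:
    """Guest-side client: send a request to a host-side vsock listener.

    The host agent is assumed to be listening on (VMADDR_CID_ANY, port);
    the port is whatever the two sides agree on.
    """
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        # vsock addresses are (CID, port) tuples instead of (ip, port).
        s.connect((VMADDR_CID_HOST, port))
        s.sendall(payload)
        return s.recv(4096)
```

Compared with a serial port, you get multiple concurrent connections and normal stream semantics for free, which is what makes it a drop-in spot for an RPC layer.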