> Using a full-featured RPC framework for IPC seems like overkill when the processes run on the same machine. However, if your project already exposes RPCs for public APIs, or would benefit from a schema-based serialisation layer, it makes sense to use a single tool that covers both, including IPC.
It might make sense. But usually, if you're using IPC, you need it to be as fast as possible, and there are several solutions that are much faster than a full RPC framework.
What are the other solutions that are much faster (besides rolling your own mini format)?
I'm not even sure I'd say "usually". Most of the time you're just saying "hey daemon, do this thing that you're already preconfigured for".
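To make that concrete, here's roughly what that degenerates to. This is only a sketch: the socket path and the "reload" command are made up, and the daemon side is left out entirely.

```go
// Hypothetical sketch of "hey daemon, do this thing": the whole protocol
// is one newline-terminated command written to a Unix socket. The path
// and command string are placeholders.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/run/mydaemon.sock") // placeholder path
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// The entire request: the daemon reads a line and does whatever it
	// was already configured to do.
	if _, err := fmt.Fprintln(conn, "reload"); err != nil {
		log.Fatalf("write: %v", err)
	}
}
```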
I tend to agree. Usually you want as fast as possible. Sometimes you don't though.
E.g. Kythe (kythe.io) was designed so that its individual language indexers run with a main driver binary written in Go, and then a subprocess binary written in... whatever. There's a requirement to talk between the two, but it's not really a lot of traffic (relative to e.g. the CPU cost of the subprocess doing compilation).
So what happened in practice is that we used Stubby (like gRPC, except not public), because it was low overhead* to write the handler code for it on both ends, and it got us some other bits for free as well.
* Except when it wasn't, lol. It worked great for the first N indexers, which were written in languages with good Stubby support. But then weird shit (for Google) crawled out of the weeds: languages without Stubby support, so there's some handwaving going on for the long tail.
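For what it's worth, since Stubby itself isn't public: the public-gRPC approximation of "low overhead handler code on both ends" is mostly just pointing gRPC at a Unix domain socket. This is a sketch, not Kythe's actual wiring; the socket path and the commented-out service registration are placeholders.

```go
// Rough sketch with public gRPC (not Stubby): the driver serves on a Unix
// domain socket and the subprocess dials it. Service registration and
// client stubs would come from whatever .proto the two binaries share.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	const sock = "/tmp/driver.sock" // placeholder; pass to the subprocess via argv or env

	// Driver side: the same grpc.NewServer()/handler code you'd write for
	// TCP, just listening on a Unix socket instead.
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	// pb.RegisterIndexerDriverServer(srv, &driverImpl{}) // hypothetical generated registration
	go func() {
		if err := srv.Serve(lis); err != nil {
			log.Fatalf("serve: %v", err)
		}
	}()

	// Subprocess side: dial the same socket via gRPC's built-in "unix" scheme.
	conn, err := grpc.Dial("unix://"+sock, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	_ = conn // hand this to the generated client stub
}
```

The appeal is that the handler and stub code look the same whether the peer is a local subprocess or a remote service, which is basically the "one tool" argument from the quote up top.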