Hacker News

Using gRPC for (local) inter-process communication (2021)

77 points | by zardinality yesterday at 1:42 PM | 69 comments

Comments

pjmlp yesterday at 5:04 PM

> Using a full-featured RPC framework for IPC seems like overkill when the processes run on the same machine.

That is exactly what COM/WinRT, XPC, Android Binder, and D-Bus are.

Naturally they have several optimisations for local execution.

DashAnimal yesterday at 4:48 PM

What I loved about Fuchsia was its IPC interface, using FIDL, which is like a more optimized version of protobufs.

https://fuchsia.dev/fuchsia-src/get-started/learn/fidl/fidl

palata yesterday at 9:34 PM

I have been in a similar situation, and gRPC feels heavy. It comes with quite a few dependencies (nothing compared to npm or cargo ecosystems routinely bringing in hundreds, of course, but enough to be annoying when you have to cross-compile them). Also, at first it sounds like you will benefit from all the languages that protobuf supports, but in practice it's not that perfect: some Python package may rely on the C++ implementation, and therefore you need to compile it for your specific platform. Some language implementations are just maintained by one person in their free time (a great person, but still), etc.

On the other hand, I really like the design of Cap'n Proto, and the library is more lightweight and hence easier to compile. But there, it is not clear which language implementations you can rely on other than C++. Also, gRPC seems to have maintainers paid by Google, while for Cap'n Proto it's not so clear: it feels like it's essentially Cloudflare employees improving Cap'n Proto for Cloudflare. So if it works perfectly for your use case, that's great, but I wouldn't expect much support.

All that to say: my preferred choice here would technically be Cap'n Proto, but I wouldn't dare make my company depend on it. Whereas nobody can fire me for depending on Google, I suppose.

zackangelo yesterday at 5:10 PM

Had to reach for a new IPC mechanism recently to implement a multi-GPU LLM inference server.

My original implementation pinned each GPU to its own thread and used message passing between them in the same process, but Nvidia's NCCL library hates this for reasons I haven't fully figured out yet.

I considered gRPC for IPC since I was already using it for the server's API, but dismissed it because it was an order of magnitude slower and I didn't want to drag async into the child processes.

Serializing the tensors between processes and using the Servo team's ipc-channel crate[0] has worked surprisingly well. If you're using Rust and need a drop-in (ish) replacement for the standard library's channels, give it a shot.

[0] https://github.com/servo/ipc-channel
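
ipc-channel is Rust-specific, but the underlying pattern — serialize a message and push it over an OS-level channel to another process — can be sketched in Python with the standard library's multiprocessing.Pipe. This is an analogy only, not the ipc-channel API, and the shape-plus-data "tensor" below is a made-up stand-in:

```python
import multiprocessing as mp

def worker(conn):
    # Pipe pickles/unpickles the payload automatically, much like
    # ipc-channel serializes with serde on the Rust side.
    shape, data = conn.recv()
    conn.send(sum(data))          # send a reduced result back
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = mp.Pipe()
    p = mp.Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(([2, 2], [1.0, 2.0, 3.0, 4.0]))  # a toy "tensor"
    print(parent_end.recv())      # -> 10.0
    p.join()
```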

HumblyTossed yesterday at 4:06 PM

> Using a full-featured RPC framework for IPC seems like overkill when the processes run on the same machine. However, if your project anyway exposes RPCs for public APIs or would benefit from a schema-based serialisation layer it makes sense to use only one tool that combines these—also for IPC.

It might make sense. Usually, though, if you're reaching for IPC you need it to be as fast as possible, and there are several options (shared memory, plain pipes) that are much faster; see the sketch below.
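
As one concrete example of a faster path, OS shared memory avoids serialization and socket round-trips entirely. A minimal sketch with Python's multiprocessing.shared_memory (the block name "demo_ipc" is arbitrary; a second process would attach by that name):

```python
from multiprocessing import shared_memory

# Producer: create a named 1 MiB shared block and write into it.
shm = shared_memory.SharedMemory(create=True, size=1024 * 1024, name="demo_ipc")
shm.buf[:5] = b"hello"

# Consumer (normally another process): attach by name and read zero-copy.
view = shared_memory.SharedMemory(name="demo_ipc")
print(bytes(view.buf[:5]))        # b'hello'

view.close()
shm.close()
shm.unlink()                      # free the block once everyone is done
```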

anilakar yesterday at 5:31 PM

In addition to cloud connectivity, we've been using MQTT for IPC in our Linux IIoT gateways and touchscreen terminals, and honestly it's been one of the better architectural decisions we've made. Implementing new components for specific customer use cases could not be easier, and a component can be placed on the hardware or on cloud servers, wherever it fits best.

I don't see how gRPC could be any worse than that.

(The previous iteration, before MQTT, used HTTP polling, with callbacks working on top of an SSH reverse-tunnel abomination. Using MQTT for IPC was kind of an afterthought. The SSH Cthulhu is still in use for everyday remote management because you cannot do Ansible over MQTT, but we're slowly replacing it with WireGuard. I gotta admit that out of all the VPN technologies we've experimented with, SSH transport has been the most reliable in various hostile firewalled environments.)
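
For a flavor of what MQTT-based IPC looks like, here is a minimal sketch with Python's paho-mqtt 2.x against a local broker such as mosquitto — the topic layout is invented for illustration:

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload!r}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)            # broker on the same box
client.subscribe("gateway/+/telemetry")      # hypothetical topic scheme
client.publish("gateway/display/telemetry", b'{"temp": 21.5}')
client.loop_forever()
```

Components never address each other directly, only topics, which is why moving one between the device and the cloud is cheap.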

lyu07282 yesterday at 9:36 PM

Cool that they mention buf; it's such a massive improvement over Google's own half-abandoned, crappy protobuf implementation.

https://github.com/bufbuild/buf

jauntywundrkind yesterday at 5:04 PM

I super dug the talk "Building SpiceDB: A gRPC-First Database" by Jimmy Zelinskie of authzed, which is about a high-performance auth system and speaks directly to this. https://youtu.be/1PiknT36218

It's a 4-tier architecture (clients, front-end service, query service, database) auth system, and all communication is over gRPC (except to the database). Jimmy talks about the advantages of having a very clear contract between systems.

There's a ton of really great nitty-gritty detail about being super fast with gRPC: https://github.com/planetscale/vtprotobuf for statically sized protobuf allocation rather than slow reflection-based dynamic sizing, upcoming memory-pooling work to avoid allocations entirely, and tons of observability advantages right out of the box. It's subtle, but I also get the impression most gRPC stubs are miserably bad, and that Authzed had to go to great lengths to get away from a lot of gRPC tarpits.

This is one of my favorite talks from 2024, and it strongly sold me on how viable gRPC is for internal services. Even if I were doing local multi-process stuff, I would definitely consider gRPC after this talk. The structure & clarity & observability are huge wins, and the performance can be really good if you need it.

The internal cluster details start at the 12-minute mark: https://youtu.be/1PiknT36218#t=12m

justinsaccount yesterday at 8:21 PM

> In our scenario of local IPC, some obvious tuning options exist: data is exchanged via a Unix domain socket (unix:// address) instead of a TCP socket

AFAIK, at least on Linux, there is no difference between using a UDS and a TCP socket connected to localhost.
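
For context, the tuning the article quotes is just an address change. A minimal sketch with the Python grpc package (socket path arbitrary, servicer registration elided):

```python
import grpc
from concurrent import futures

ADDR = "unix:///tmp/demo_grpc.sock"   # arbitrary path

# Server: bind to the UDS address instead of a TCP host:port.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
# ...register generated servicers here as usual...
server.add_insecure_port(ADDR)
server.start()

# Client: same address scheme; generated stubs work unchanged.
channel = grpc.insecure_channel(ADDR)
```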

pengaru yesterday at 4:34 PM

There's a mountain of gRPC-centric Python code at $dayjob, and it's been miserable to live with. Maybe it's less awful in C/C++, or at least confers some decent performance there. In Python it's hot garbage.

jeffbee yesterday at 3:59 PM

Interesting that it is taken on faith that Unix sockets are faster than inet sockets.
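
One way to test it rather than take it on faith: a rough round-trip latency sketch with fork and raw sockets (Unix-like systems only; it assumes each recv returns a whole 64-byte message, which holds in practice for small local messages but is not guaranteed for stream sockets):

```python
import os, socket, time

N = 20_000
MSG = b"x" * 64

def bench(family, addr):
    if family == socket.AF_UNIX and os.path.exists(addr):
        os.unlink(addr)                     # stale socket file from a prior run
    srv = socket.socket(family, socket.SOCK_STREAM)
    srv.bind(addr)
    srv.listen(1)
    if os.fork() == 0:                      # child: trivial echo server
        conn, _ = srv.accept()
        while (data := conn.recv(len(MSG))):
            conn.sendall(data)
        os._exit(0)
    cli = socket.socket(family, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())          # real port/path after bind
    t0 = time.perf_counter()
    for _ in range(N):                      # 64-byte ping-pong round trips
        cli.sendall(MSG)
        cli.recv(len(MSG))
    dt = time.perf_counter() - t0
    cli.close(); srv.close(); os.wait()
    return dt

print("TCP localhost:", bench(socket.AF_INET, ("127.0.0.1", 0)))
print("Unix socket:  ", bench(socket.AF_UNIX, "/tmp/bench.sock"))
```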
