There's a mountain of gRPC-centric Python code at $dayjob and it's been miserable to live with. Maybe it's less awful in C/C++, or at least confers some decent performance there. In Python it's hot garbage.
I'm using it for a small-to-medium-sized project, and the generated files aren't too bad to work with at that scale. The generation step itself is awful for Python specifically, though, and I've had to write a script to band-aid the files after they're generated. An issue has been open about this for years on the protobuf compiler repo, and it's effectively a "wontfix" since Google doesn't need it fixed for their internal use. Which is... fine, I guess.
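For the curious, the band-aid is roughly the sketch below. Hedge: the comment doesn't name the issue, so this assumes it's the long-standing one where generated `*_pb2.py` files import each other with absolute imports (`import foo_pb2 as foo__pb2`), which breaks as soon as the generated files live inside a package. The directory name is made up.

```python
import re
from pathlib import Path

# Hypothetical directory where protoc wrote the generated files.
GENERATED = Path("myproject/generated")

# protoc emits absolute imports like:  import foo_pb2 as foo__pb2
# These fail when the generated modules sit inside a package, so we
# rewrite them to relative imports.
IMPORT_RE = re.compile(r"^import (\w+_pb2(?:_grpc)?) as (\w+)$", re.MULTILINE)

for path in GENERATED.glob("*_pb2*.py"):
    src = path.read_text()
    fixed = IMPORT_RE.sub(r"from . import \1 as \2", src)
    if fixed != src:
        path.write_text(fixed)
```

Running something like this as a post-generation step in the build keeps the checked-in code importable without hand-editing every regenerated file.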
The Go part I'm building has been much more solid in contrast.
It's equally painful in C; you have to wrap the C++ library :(
Can you say more about what the pain points are?
C++ generated code from protobuf/gRPC is pretty awful in my experience.
Strongly agree. It has loads of problems, my least favourite being that the schema isn't checked in the way you might think: there isn't even a checksum to confirm that a message matches the version of the schema you're decoding it with. So when there are old services/clients around and people haven't versioned their schemas safely (there was no mechanism for this apart from manual checking in PRs), you can get gibberish back in fields that should contain data. The payload is basically just a binary blob with whatever schema the client has overlaid onto it, so debugging is an absolute pain. Unless you're at Google scale, use a text-based format like JSON and save yourself a lot of hassle.
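You can demonstrate the "binary blob" problem with nothing beyond the wrapper types that ship with the Python protobuf package, all of which happen to define a single field 1 named `value`. The same bytes parse cleanly under several different schemas, with silently different meanings and no error; a minimal sketch:

```python
from google.protobuf import wrappers_pb2

# Serialize an int64 of -1. On the wire this is just a field tag plus a varint.
payload = wrappers_pb2.Int64Value(value=-1).SerializeToString()

# Parse the identical bytes with a different schema: no error, just gibberish.
as_uint64 = wrappers_pb2.UInt64Value.FromString(payload)
print(as_uint64.value)  # 18446744073709551615

# Or yet another schema: still "valid".
as_bool = wrappers_pb2.BoolValue.FromString(payload)
print(as_bool.value)  # True
```

The wire format carries only field numbers and wire types, never a schema name or version, which is why a schema mismatch fails silently in the data rather than loudly at parse time.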