Just for fun, how often do regular-sized companies that deal in regular-sized traffic need Protobuf to accomplish their goals in the first place, compared to JSON or even XML with basic string marshalling?
Well, protobuf lets you generate easy-to-use parsing code and service stubs for many languages, and it's one of the faster and less bandwidth-hungry options.
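To make that concrete, here's a minimal sketch of what the generated code buys you. The schema and module names here are invented for illustration; it assumes you've compiled a user.proto with protoc --python_out:

```python
# user.proto (illustrative schema):
#
#   syntax = "proto3";
#   message User {
#     string name = 1;
#     int64  id   = 2;
#     repeated string emails = 3;
#   }

import user_pb2  # assumed output of: protoc --python_out=. user.proto

# The generated class gives you typed fields and serialization for free.
user = user_pb2.User(name="Ada", id=42)
user.emails.append("ada@example.com")

# Compact binary wire format, no hand-written marshalling code.
payload = user.SerializeToString()

# Any language with the same .proto can parse these bytes back.
same_user = user_pb2.User()
same_user.ParseFromString(payload)
assert same_user.name == "Ada"
```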
Besides the points other comments have already made about code gen & contracts, a bigger reason for me to step away from json/xml is binary serialization.
It sounds weird, and it's totally dependent on your use case, but binary serialization can make a giant difference.
For me, I work with 3D data which is primarily (but not only) tightly packed arrays of floats & ints. I have a bunch of options available:
1. JSON/XML: readable, easy to work with, relatively bulky (though not as bad as people think if you compress), no random access, slow floating-point parsing, great extensibility.
2. JSON/XML + base64: OK to work with, quite bulky, no random access, faster parsing, but no structure inside the blobs, extensible.
3. Manual binary serialization: hard to work with, OK size (esp compressed), random access if you put in the effort, optimal parsing, not extensible unless you put in a lot of effort.
4. Flatbuffers/protobuf/capn-proto/etc: easy to work with, great size (esp compressed), random access, close-to-optimal parsing, extensible.
Basically, if you care about performance you'd really like to control the binary layout of your data yourself, but you generally don't want to design extensibility and random access on your own, so you end up sacrificing explicit layout (and with it some performance) by choosing a convenient lib.
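To make option 4 concrete, here's roughly what that looks like for packed arrays (schema and module names invented; and to be precise, of the three it's flatbuffers and Cap'n Proto that give true zero-copy random access, while protobuf needs a parse first). In proto3, repeated scalar fields are packed by default, so the floats land in one contiguous length-delimited blob on the wire:

```python
# mesh.proto (illustrative):
#
#   syntax = "proto3";
#   message Mesh {
#     repeated float  positions = 1;  // x,y,z triples, tightly packed
#     repeated uint32 indices   = 2;
#   }

import mesh_pb2  # assumed output of: protoc --python_out=. mesh.proto

mesh = mesh_pb2.Mesh()
mesh.positions.extend([0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  0.0, 1.0, 0.0])
mesh.indices.extend([0, 1, 2])

blob = mesh.SerializeToString()  # one contiguous run of binary floats,
                                 # no per-float text formatting or parsing
```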
We are a very regularly sized company, but our 3D data spans hundreds of terabytes.
(also, no, there is no general-purpose 3D format available to do this work; glTF and friends are great but cover a small range of use cases)
Type safety. The contract is the law, instead of a suggestion as it is with JSON.
Having a way to describe your whole API and generate bindings is a godsend. Yes, it can be done with JSON and OpenAPI, but there the contract is optional rather than mandatory.
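For illustration, the client side of a generated binding looks something like this (the service, schema, and module names are made up; assumes the .proto was compiled with the gRPC plugin):

```python
# search.proto (illustrative API description):
#
#   syntax = "proto3";
#   service SearchService {
#     rpc Search (SearchRequest) returns (SearchReply);
#   }
#   message SearchRequest { string query = 1; }
#   message SearchReply   { repeated string results = 1; }

import grpc
import search_pb2
import search_pb2_grpc  # assumed output of the protoc gRPC plugin

# The generated stub IS the binding; no hand-rolled HTTP/JSON glue.
channel = grpc.insecure_channel("localhost:50051")
stub = search_pb2_grpc.SearchServiceStub(channel)
reply = stub.Search(search_pb2.SearchRequest(query="protobuf"))
print(reply.results)
```

The same .proto also generates the server base class, so both ends share one mandatory contract.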
In most languages protobuf is easier because it generates the boilerplate. And protobuf is cross-language, so even if you are working in JavaScript, where JSON is native, protobuf is still faster, because the other side can be anything and you are not spending its time parsing.
It's not just about traffic. IoT devices (or any other low-powered devices for that matter) also like protobuf because of its comparatively high efficiency.
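A rough sketch of where that efficiency comes from (schema and module names invented): field names never go on the wire, only field numbers, and integers are varints, so a typical sensor reading is a fraction of its JSON equivalent:

```python
# reading.proto (illustrative):
#
#   syntax = "proto3";
#   message Reading {
#     uint32 sensor_id   = 1;
#     sint32 temperature = 2;  // zigzag varint: small negatives stay small
#     uint64 timestamp   = 3;
#   }

import json
import reading_pb2  # assumed output of: protoc --python_out=. reading.proto

msg = reading_pb2.Reading(sensor_id=7, temperature=-3, timestamp=1700000000)
proto_bytes = msg.SerializeToString()

json_bytes = json.dumps(
    {"sensor_id": 7, "temperature": -3, "timestamp": 1700000000}
).encode()

print(len(proto_bytes), "vs", len(json_bytes))  # roughly 10 vs 60 bytes
```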
I've never used it, and I've been coding since 1986.
Protobuf is fantastic because it separates the definition from the language. When you make changes, you recompile your definitions to native code and can be sure they stay compatible with other languages and implementations.
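Here's the compatibility mechanism in miniature (both schemas and the module names are invented): fields are identified by number, not by name, so an old reader simply skips numbers it doesn't know about:

```python
# Schema evolution by field number (illustrative):
#
#   // v1                        // v2 adds a field
#   message Job {                message Job {
#     string name = 1;             string name     = 1;
#   }                              uint32 priority = 2;
#                                }

import job_v1_pb2  # assumed: generated from the v1 schema
import job_v2_pb2  # assumed: generated from the v2 schema

wire = job_v2_pb2.Job(name="reindex", priority=5).SerializeToString()

# A reader built against v1 parses bytes from a v2 writer without breaking:
old_view = job_v1_pb2.Job()
old_view.ParseFromString(wire)
assert old_view.name == "reindex"  # field 2 is unknown to v1 and is skipped
```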
I dunno, are you sure you can manually write correct de/serialization for JSON and XML so that strings, floats, and integers are parsed consistently across JavaScript, Java, Python, Go, Rust, C++, and whatever other languages you use?
Do you want to maintain and debug that? Do you want to do all of that without the help of a compiler that enforces the schema and fails compiles/CI when someone accidentally changes it?
Because you get all of that with protobuf if you use it appropriately.
You can of course build all of this yourself... and maybe it'll even be as efficient, performant and supported. Maybe.
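As a tiny illustration of the schema being enforced (schema and module names invented): even the dynamically typed bindings reject wrong types at assignment time, and in statically typed targets like Go, Java, or C++ the same mistake wouldn't compile at all:

```python
# event.proto (illustrative):
#
#   syntax = "proto3";
#   message Event { int64 id = 1; }

import event_pb2  # assumed output of: protoc --python_out=. event.proto

event = event_pb2.Event()
event.id = 123        # fine: the field is declared int64

try:
    event.id = "123"  # the generated binding enforces the declared type
except TypeError as err:
    print("rejected:", err)
```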