I don't know about you, but in my experience, unless you have Google-scale microservices and infrastructure, gRPC (with Protocol Buffers) is just tedious:
- you need an extra protoc compilation step for your models
- you cannot easily inspect and debug messages across your infrastructure without a proper protobuf encoder/decoder
If you only have Go microservices talking via RPC, there is gob encoding, a slimmed-down alternative to Protocol Buffers: it's self-describing, CPU-efficient, and natively supported by the Go standard library, so it's probably a better option, although not as space-efficient. If you talk to non-Go services, a JSON or XML transport encoding will do the job too (e.g. JSON-RPC).
GraphQL is great as what is commonly known as a "backend for frontend", but inside the backend. It makes designing an easy-to-use (and supposedly more efficient) API easier for the frontend, but much less so for the backend, where it brings increased implementation complexity and maintenance.
Good old REST is admittedly not as flexible as RPC or GraphQL, but it does the job for simpler and smaller APIs, although anecdotally I see it being used less and less.
Having used gRPC in very small teams (<5 engineers touching backend code), I had a very different experience from yours.
> need an extra step when doing protoc compilation of your models
For us this was hidden by our build system. At one company we used Gradle, and later Bazel. In both you can set things up so you drop a .proto file into a folder and everything "just works", autocompletion included.
> cannot easily inspect and debug your messages across your infrastructure without a proper protobuf decoder/encoder
A lot of tooling has been developed recently that makes all of this much easier:
- https://github.com/fullstorydev/grpcurl
- https://github.com/uw-labs/bloomrpc
You can also put grpc-web behind a reverse proxy to expose normal REST-like endpoints for debugging.
> If you talk with other non-Go services then a JSON or XML transport encoding will do the job too (JSON rpc).
The benefit of protos is that they're a single source of truth across multiple languages and projects, with well-known ways to maintain backwards compatibility.
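A sketch of what that backwards compatibility looks like in a schema (the message and field names here are made up for illustration): field numbers are the wire contract, old readers skip unknown fields, and `reserved` stops removed fields from being reused incompatibly.

```protobuf
syntax = "proto3";

message User {
  // v1 fields: the numbers are the wire contract; names can change freely.
  uint64 id    = 1;
  string email = 2;

  // Field 3 used to be `string nickname`; reserving the number and name
  // prevents anyone from reusing them with an incompatible type later.
  reserved 3;
  reserved "nickname";

  // v2 addition: old binaries simply ignore unknown field 4.
  string display_name = 4;
}
```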
You can even build tooling to automate very complex things:
- Breaking Change Detector: https://docs.buf.build/breaking-usage/
- Linting (Style Checking): https://docs.buf.build/lint-usage/
There are many more things you can do, but you get the idea.
On top of this you get something else that's a big win: a reasonably fast server that is configured and interfaced with the same way in every programming language. In the past this has been a massive time sink: you have to investigate nginx/*cgi, sonic/flask/waitress/wsgi, rails, and hundreds of other options for every single language stack, each with its own gotchas. gRPC's ecosystem doesn't really have that pain point.