> The main problem GraphQL tries to solve is overfetching.
My issue with this article is that, as a GraphQL fan, I see overfetching as far from its primary benefit, so the rest of the article reads like a strawman to me.
TBH I see the biggest benefits of GraphQL as being that it (a) forces a much tighter contract around endpoint and object definitions through its type system, and (b) makes schema evolution much easier than other API tech does.
For the first point, the entire ecosystem guarantees that when a server receives an input object, that object conforms to its declared type, and similarly, a client receiving a return object is guaranteed that it conforms to the endpoint's response type. Coupled with custom scalar types (e.g. a "phone number" type or an "email address" type), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier for errors to slip through. For example, GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
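Rough sketch of what I mean with graphql-js (the `User` type, fields, and data are made up for illustration):

```typescript
// Minimal sketch using graphql-js; the schema and resolver data are invented.
// The executor validates arguments against the schema, and the response only
// ever contains the fields the query asked for.
import { graphql, buildSchema } from "graphql";

const schema = buildSchema(`
  type User {
    id: ID!
    name: String!
    email: String!
  }
  type Query {
    user(id: ID!): User
  }
`);

const rootValue = {
  // The resolver can return a richer record; anything not in the schema or not
  // requested never reaches the client.
  user: ({ id }: { id: string }) => ({
    id,
    name: "Ada",
    email: "ada@example.com",
    passwordHash: "never-sent", // not selectable: it's not in the schema
  }),
};

graphql({
  schema,
  rootValue,
  source: '{ user(id: "1") { id name } }', // only id and name requested
}).then((result) => {
  console.log(result.data); // { user: { id: "1", name: "Ada" } }
});
```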
When it comes to schema evolution, I've found that adding new fields and deprecating old ones is straightforward, and that new clients only ever have to be concerned with the new fields, which is a huge benefit. Again, other API tech lets you do something like this, but it's much less standardized and puts a lot more work and cognitive load on both server and client devs.
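Evolution is mostly just the built-in `@deprecated` directive. Continuing the made-up `User` type from the sketch above:

```typescript
// Hypothetical next version of the User type: fullName is added, name is
// deprecated. Old queries keep working; new clients only deal with fullName,
// and tooling flags any remaining use of name.
import { buildSchema } from "graphql";

const schemaV2 = buildSchema(`
  type User {
    id: ID!
    name: String! @deprecated(reason: "Use fullName instead")
    fullName: String!
    email: String!
  }
  type Query {
    user(id: ID!): User
  }
`);
```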
If you generate TypeScript types from OpenAPI specs then you get contracts for both directions. There is no problem here for GraphQL to solve.
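For example, with types generated by openapi-typescript (the `/users/{id}` path and `User` schema name are invented, not from the article):

```typescript
// Sketch assuming a schema.d.ts generated with the openapi-typescript CLI,
// e.g. `npx openapi-typescript api.yaml -o schema.d.ts`.
import type { components, paths } from "./schema";

// The same spec drives both sides, so the compiler checks both directions.
type User = components["schemas"]["User"];
type GetUserResponse =
  paths["/users/{id}"]["get"]["responses"]["200"]["content"]["application/json"];

async function getUser(id: string): Promise<GetUserResponse> {
  const res = await fetch(`/users/${id}`);
  return (await res.json()) as GetUserResponse;
}
```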
Agree whole-heartedly. The strong contracts are the #1 reason to use GraphQL.
The other one I would mention is the ability to very easily reuse resolvers in composition, and even federate them, something that can be very clunky to get right in REST APIs.
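Rough sketch of the reuse part (service names made up): the same resolver backs a top-level field and a nested field, so composing new object graphs is mostly wiring.

```typescript
// Hypothetical domain services standing in for real data sources.
declare const usersService: { fetchUser(id: string): Promise<unknown> };
declare const postsService: { fetchPost(id: string): Promise<{ authorId: string }> };

// One resolver function, reused wherever a user appears in the graph.
const userById = (id: string) => usersService.fetchUser(id);

const resolvers = {
  Query: {
    user: (_parent: unknown, args: { id: string }) => userById(args.id),
    post: (_parent: unknown, args: { id: string }) => postsService.fetchPost(args.id),
  },
  Post: {
    // Same resolver composed under Post.author instead of a new endpoint.
    author: (post: { authorId: string }) => userById(post.authorId),
  },
};
```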
Pruning the request and even the response is pretty trivial with zod. I wouldn't onboard GQL for that alone.
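What I mean, roughly (schema shape made up): zod's `.parse()` drops undeclared keys by default, which covers pruning for both requests and responses.

```typescript
import { z } from "zod";

// By default, .parse() strips keys that aren't declared in the schema.
const PublicUser = z.object({
  id: z.string(),
  name: z.string(),
});

const row = { id: "1", name: "Ada", passwordHash: "never-sent" };
const pruned = PublicUser.parse(row);
console.log(pruned); // { id: "1", name: "Ada" } — unknown keys stripped
```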
Not sure about the schema evolution part. Protobufs seem to work great for that.
But if you just want a nicely typed interface for your APIs, in my experience gRPC is much more useful, given all of the other downsides of GraphQL the blog author mentioned.
Facebook had started bifurcating API endpoints to support iOS vs Android vs Web, and over time a large number of OS-specific endpoints evolved. A big part of their initial GraphQL marketing was about solving this problem specifically.
> when a server receives an input object, that object will conform to the type
Anything that comes from the front end can be tampered with. The server is guaranteed nothing.
> GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
The request can be tampered with, so there's no additional security from the GraphQL protocol itself. Security must be implemented by narrowing down to only the allowed data on the server side. How much of it is requested doesn't matter for security.
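In other words (names made up), the boundary that matters is what the server is willing to return, not what the client asked for:

```typescript
// Whatever fields the request claims to want, only this projection ever
// leaves the server; the selection in the query can't widen it.
type UserRow = { id: string; name: string; email: string; passwordHash: string };

function toPublicUser(row: UserRow): { id: string; name: string; email: string } {
  return { id: row.id, name: row.name, email: row.email };
}
```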
Sorry, but I'm not convinced. How is this different from two endpoints communicating through, let's say, protobuf? Both input and output will only (de)serialize if they conform to the definition.
I 100% agree that overfetching isn't the main problem GraphQL solves for me.
I'm actually spending a lot of time in the REST-ish world, and contracts aren't the problem I'd solve with GraphQL either. For that I'd go with OpenAPI and its enforcement and validation. That is very viable these days, it just isn't a "default" in the ecosystem.
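e.g. something like express-openapi-validator wired into the app (the spec path and setup are my own assumptions, not from the article):

```typescript
// Sketch using the express-openapi-validator package to enforce an OpenAPI
// spec at runtime.
import express from "express";
import * as OpenApiValidator from "express-openapi-validator";

const app = express();
app.use(express.json());
app.use(
  OpenApiValidator.middleware({
    apiSpec: "./openapi.yaml",
    validateRequests: true, // reject params/bodies that don't match the spec
    validateResponses: true, // reject handler output that doesn't match the spec
  })
);
```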
For me, the main problem GraphQL solves, and one I haven't found a good alternative for, is API composition and evolution, especially in M:N client-to-service scenarios in large systems. Having the mindset of "client describes what they need" -> "GraphQL server figures out how to get it" -> "domain services resolve their part" makes long-term management of a network of APIs much easier. And when it's combined with good observability, it can become one of the biggest enablers for data access.
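A sketch of what that flow looks like (all names invented): the client states what it needs once, and each resolver delegates to the domain service that owns that slice of the graph.

```typescript
// Hypothetical domain services; in a real system these might be separate
// teams' APIs or federated subgraphs.
declare const ordersService: {
  getOrder(id: string): Promise<{ status: string; customerId: string; shipmentId: string }>;
};
declare const customersService: { getCustomer(id: string): Promise<{ name: string }> };
declare const logisticsService: { getShipment(id: string): Promise<{ eta: string }> };

// One client query spanning three domains...
const orderPageQuery = /* GraphQL */ `
  query OrderPage($id: ID!) {
    order(id: $id) {
      status
      customer { name }
      shipment { eta }
    }
  }
`;

// ...and the GraphQL layer fans it out to whoever owns each part.
const resolvers = {
  Query: {
    order: (_parent: unknown, args: { id: string }) => ordersService.getOrder(args.id),
  },
  Order: {
    customer: (order: { customerId: string }) => customersService.getCustomer(order.customerId),
    shipment: (order: { shipmentId: string }) => logisticsService.getShipment(order.shipmentId),
  },
};
```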