I have come across systems that use GET but with a payload like POST.
This lets the GET bypass the ~4k URL length limit that many servers impose.
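On the wire this pattern is just a GET request that carries a Content-Length header and a body. A minimal sketch of what that looks like (the host, path, and JSON payload are made up for illustration):

```python
import json

# Hypothetical search payload too large to fit comfortably in a URL.
body = json.dumps({"filter": {"status": "active"}, "limit": 100})

# Raw HTTP/1.1 request text: a GET, but framed like a POST with a body.
request = (
    "GET /search HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    f"{body}"
)
print(request)
```

Nothing in HTTP message framing prevents this; whether anything between the client and the origin server passes the body through intact is another matter.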
It's not a common pattern, and QUERY is a nice way to differentiate it (and, I suspect, will be more compatible with middleware).
I have a suspicion that quite a few servers support this pattern (as does my own) but not many programmers are aware of it, so it's very infrequently used.
Sending a GET request with a body is just asking for all sorts of weird caching and processing issues.
Elasticsearch comes to mind.[0]
The docs state that if the query is in the URL parameters, that will be used. I remember that a few years back it wasn't as easy - you HAD to send the query in the GET request's body. (Or it could have been that I had monster queries that didn't fit within the URL character limits.)
0: https://www.elastic.co/docs/api/doc/elasticsearch/operation/...
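A sketch of building such a request with Python's stdlib urllib (the index name, query, and local cluster URL are assumptions; note urllib silently switches to POST when `data` is set unless the method is forced):

```python
import json
import urllib.request

# Hypothetical Elasticsearch-style search body.
query = json.dumps({"query": {"match": {"title": "http"}}}).encode()

req = urllib.request.Request(
    "http://localhost:9200/articles/_search",
    data=query,
    headers={"Content-Type": "application/json"},
    method="GET",  # force GET; urllib would otherwise use POST with a body
)

# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method(), req.full_url)
```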
I think GraphQL ended up here as a byproduct of some serious shenanigans.
"Your GraphQL HTTP server must handle the HTTP POST method for query and mutation operations, and may also accept the GET method for query operations."
Supporting a body in the GET request was an odd requirement for something I had to code up with another engineer.
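For what it's worth, the GET variant in the quoted spec language puts the query in URL parameters, not in a body. A sketch of building such a request (the endpoint and query are made up for illustration):

```python
import urllib.parse

# Hypothetical GraphQL query operation sent via GET, per the
# GraphQL-over-HTTP convention of URL-encoding it as a parameter.
query = "{ user(id: 1) { name } }"
url = "https://api.example.com/graphql?" + urllib.parse.urlencode(
    {"query": query}
)
print(url)
```

This is why GET is only allowed for query operations: mutations would make a cacheable, URL-addressable request side-effecting.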
> I have come across systems that use GET but with a payload like POST.
I think that violates the HTTP spec. RFC 9110 is very clear that content received in a GET request has no generally defined semantics and cannot alter the meaning or target of the request.
Even if both clients and servers are implemented to ignore the spec and exchange content in GET requests, the RFC is equally clear that other participants in the connection, such as proxies, are not aware of this abuse and can, and often do, strip request bodies. These are not hypotheticals.