98% of the literature I have seen about GraphQL has been positive, and it seems to have become the standard recommendation for everything from starting a small web app as a side project to running a Fortune 500 company. I'm fairly sure this is because a lot of larger tech companies use GraphQL in some way, so there is this misconception that using it must be the correct choice for all use cases, but who knows.
I have worked with GraphQL at a few different companies now. In all except the largest, where a dedicated team of engineers maintained the company's GraphQL implementation, I have felt strongly that we would have been better off with a more boring approach like REST.
I am curious to hear others' perspectives on this. Do you like using GraphQL? Do you disagree with me and think that it is actually a good solution for the "middle ground" of use cases?
Also, I do think that GraphQL has some cool features, and I'm not trying to write it off as a useless tool with no benefits. Like all tools, it has its place, but I think that where it's really helpful is not where people end up using it in the vast majority of cases.
The only benefit I have personally experienced is that, since more often than not we have mobile apps built with Apollo, being able to share the GraphQL layer between mobile _and_ web React is nice.
But with Phoenix LiveView that benefit quickly erodes for me. I say good riddance.
Compared to REST, I would argue auto-generated GraphQL clients are superior to auto-generated REST clients based on something like OpenAPI.
Compared to gRPC, I think it has the advantage that it's much easier to use in the browser, and many people seem to prefer text-based protocols for debuggability.
What are you comparing it to?
Why do you prefer auto-generated GraphQL clients?
If you are working on a small project with just a few engineers, the additional lift for GraphQL might not be worth it, especially if the team is not already well versed in it.
It's only for side projects, but I personally prefer this type of approach over both REST and GraphQL.
You just have to think of it as another function.
In addition, I define the contract using TypeScript code, something like this:
defineOperation('getPosts', {
  input: { filters: dataTypes.array(...) },
  output: dataTypes.array(dataTypes.object({
    id: dataTypes.uuid(),
    title: dataTypes.string(),
  })),
})
And this generates typed backend function handler and a frontend client function. Backend is TypeScript also, but it could generate for any language in theory.
I also generate database entities like that, and auto create migrations, etc. You could in theory add documentation into that defineOperation as well.
Very simple, and very smooth in my view. Also debuggable, since it's text based (using the POST method) with JSON like { "operationName": "getPosts", "input": { "filters": ... } }
On the frontend I have a generated, typed function I can call: client.getPosts.
And on the backend I just have to define export const getPosts: GetPosts = (requestManager) => ..., where the GetPosts type is generated.
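For illustration, a minimal sketch of how such a contract can drive both sides — all names here are invented and not the commenter's actual framework:

```typescript
// Hypothetical sketch of a defineOperation-style setup: one contract,
// a "generated" handler type for the backend and a typed client for
// the frontend. All names here are invented for illustration.
interface GetPostsInput { filters: string[] }
interface Post { id: string; title: string }

// What the backend must implement ("generated" from the contract).
type GetPosts = (input: GetPostsInput) => Promise<Post[]>;

// Backend side: just a function satisfying the generated type.
const getPosts: GetPosts = async () => [{ id: 'u1', title: 'Hello' }];

// Frontend side: a typed wrapper over the wire format
// { "operationName": ..., "input": ... }, with the transport injected.
function makeClient(post: (body: unknown) => Promise<unknown>) {
  return {
    getPosts: (input: GetPostsInput) =>
      post({ operationName: 'getPosts', input }) as Promise<Post[]>,
  };
}
```

The point is that both sides compile against the same shapes, so drift between frontend and backend becomes a type error rather than a runtime surprise.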
It's because I think REST methods don't make much sense and the endpoints are arbitrary; in the end it's just easier to reason about everything as a function.
I also don't particularly like how GraphQL forces you into this certain mindset, that feels like in many cases it holds you back rather than makes you productive.
The main benefit to us is that it's a huge time saver. There is almost no similar/duplicated code for similar operations (for example search, list and get operations). It's also very easy to write a generic API once, without thinking too much about how the clients will use it, and have it used in ways that weren't anticipated at the time it was written.
It's also pretty easy to have the API made of entities that are always the same, rather than having routes that return slightly different data.
Another benefit is that the default / basic tooling works well and with no setup. The playground works, it's easy to generate idiomatic clients in many languages, which is not really the case with openapi or grpc.
However, although I have no data to back this up, I feel that the adoption is not that high, and more advanced or unusual tooling doesn't exist or isn't very good or progressing.
Another problem is that developers don't seem to grasp the best practices intuitively, and the docs don't make it very clear what they are, but it's necessary to use them to have a useful API rather than a slow, more complicated version of REST.
For me, I saw the most benefit when I used the schema to define what to display in the frontend: all the logic of what to display is done on the server, and my frontend just becomes simple components that render the pulled schema.
"""Let's return to the user profile example. With GraphQL, instead of the frontend deciding what user data to display and where, this decision-making process can be moved to the server. When the GraphQL server receives a request for user data, the request includes specific fields that the frontend wants. In response, the server sends back exactly what was requested - no more, no less."""
First, it says that the server, instead of the FE decides. Immediately after, it says that "the request includes specific fields that the frontend wants".
If the request specifies the particular fields it wants, then it is the FE deciding what data to display...
So I think ChatGPT has this totally wrong. I'd love to be corrected here, but my understanding is that a (perceived) problem with REST, or at least the way it's used, is that the backend often requires specialized endpoints that anticipate the needs of a particular frontend context. In that sense, they are tightly coupled. This happens because, in order to avoid having to execute dozens of requests to fetch the data needed on a particular page or screen, you write some endpoint that is intended, from the outset, for a particular UI in the client app. You also have the opposite problem, where an endpoint returns far more data than the client cares about in certain spots. Enter GraphQL: since the client queries for what it needs, it can specify precisely what it wants, ranging from a single nested field to a much more complicated object.
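As a concrete illustration of that pitch, a client rendering a small profile card can ask for exactly the two fields it needs (the type and field names here are made up):

```graphql
query ProfileCard {
  user(id: "42") {
    name
    avatarUrl
  }
}
```

A settings page could hit the same endpoint and select a dozen other fields, without a bespoke backend endpoint for either view.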
Note: I'm not judging the merits of this GQL pitch. I'm just saying that's the argument, as I've understood it. That being the case, not only is ChatGPT contradictory in the above cited section, but it also has the whole argument backwards.
As always, very glad to be corrected on any of this.
We have an inbox list that combines three different entities (direct message, job request, support message) into a unified inbox. Rather than pulling all the data with REST and sorting through which fields to use as a title and which status maps to which styling/label, etc., I can just use my InboxPageQuery and render exactly what it tells me to. Being able to build or change entire pages just by pushing new server code really helps speed up our dev time.
{
  "data": {
    "inbox": {
      ...
      "inboxItems": [
        {
          ...
          "subtitle1": {
            "__typename": "InboxStandardText",
            "accessibilityText": "Hi Jordan.\n\nSarah has posted a handyman job near you",
            "components": [
              {
                "__typename": "InboxStandardTextComponent",
                "text": "Hi Jordan.\n\nSarah has posted a handyman job near you...",
                "type": null
              }
            ]
          },
          "subtitle2": {
            "__typename": "InboxStandardText",
            "accessibilityText": "You declined this job today",
            "components": [
              {
                "__typename": "InboxStandardTextComponent",
                "text": "Declined",
                "type": "errored_text"
              },
              {
                "__typename": "InboxStandardTextComponent",
                "text": " · ",
                "type": "errored_text"
              },
              {
                "__typename": "InboxStandardTextComponent",
                "text": "07/06/23",
                "type": "errored_text"
              }
            ]
          },
          ...
        }
      ],
      ...
    }
  }
}

This limitation cropped up in a project and was a frustrating (re)discovery. You have to pick a maximum depth and explicitly define that depth.
It abuses POST. There's no way to tell the purpose of a request with a quick glance at the network tab.
Whichever team controls the implementation of the resolvers decides the return structure, which may be less than ideal for the frontend team consuming the GQL response, so a good bit of additional mapping over the response is often necessary.
GQL types are yet another type system I have to learn. I'm happy to, but compared to TypeScript, the GQL type system feels very lacking. To be fair, most type systems feel lacking compared to TypeScript's (it's so insanely flexible and expressive).
It's very easy to get to the point where a GQL query makes so many sub-queries that a single request takes a long time.
Overall, lots of foot guns. I much prefer a REST API, or better yet, an SDK.
Others have also pointed out GQL can't take advantage of browser caches which is a big UX loss.
I never got the hype around it but others like it so I'm glad it exists for them.
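As a sketch of one guard against the deeply nested queries mentioned above: estimate the nesting depth of an incoming query and reject it past a limit. This brace-counting version is deliberately crude — a real server should validate against the parsed AST and ignore braces inside string arguments — but it conveys the idea:

```typescript
// Crude depth estimate: track brace nesting in the raw query text.
// Assumes no braces appear inside string arguments; an AST-based
// validation rule is the robust way to do this on a real server.
function queryDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') max = Math.max(max, ++depth);
    else if (ch === '}') depth--;
  }
  return max;
}

// Reject a query before executing any resolvers if it nests too deep.
function rejectDeepQueries(query: string, limit = 8): void {
  const depth = queryDepth(query);
  if (depth > limit) {
    throw new Error(`query depth ${depth} exceeds limit ${limit}`);
  }
}
```

Production GraphQL servers typically express this as a validation rule run alongside the standard spec rules, which also lets you count fields or assign per-field cost weights instead of raw depth.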
It does, this schema works:
type Item {
  id: ID!
  children: [Item!]!
}

type Query {
  root: Item!
}
GraphQL queries describe the shape of the response, so with this schema it's not possible to ask recursively for "the full tree" to an arbitrary depth. One way to solve this would be to add a "descendants" field that returns a flat list of all the children, grand-children, and so on. Another is to unroll the recursion to a fixed depth with fragments, i.e.:
fragment CategoriesRecursive on Category {
  subcategories {
    ...SubcategoryFields
    subcategories {
      ...SubcategoryFields
      subcategories {
        ...SubcategoryFields
      }
    }
  }
}
So you have to build your schema with a maximum supported depth. This is not infinitely recursive, which is a limitation of the GQL type system.

Implemented a GQL API a few times, and it felt more efficient than a typical REST API, with all the caching and flexibility, as well as feeling more functional because of the handlers. That said, it's not that much better and many apps simply don't need it. Often it's okay to just have a "gimme everything whenever I want it" endpoint that sends 100KB of data.
I'd say graphql is useful when you're worried about requests killing your server.
No apollo, no prisma, just plain strings in graphql format sent with fetch to a Hasura endpoint hosted by nhost. Automatically built resolvers, great low-code admin UI, easy query building, open source, and builds off of Postgres. It's actually been so effective it's hard to go back to anything else, including Supabase.
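The "plain strings with fetch" approach really is small; a sketch under the assumption of the standard GraphQL response shape (a JSON body with data and errors fields — the endpoint URL below is a placeholder, and the transport is injected so it can be mocked):

```typescript
// Minimal GraphQL-over-fetch helper: a query is just a string POSTed
// as JSON. No client library, no codegen, just the wire protocol.
type Fetcher = (url: string, init: {
  method: string;
  headers: Record<string, string>;
  body: string;
}) => Promise<{ json(): Promise<any> }>;

function makeGql(endpoint: string, fetchImpl: Fetcher) {
  return async <T>(query: string, variables?: Record<string, unknown>): Promise<T> => {
    const res = await fetchImpl(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query, variables }),
    });
    // Standard GraphQL response shape: { data, errors }.
    const { data, errors } = await res.json();
    if (errors?.length) throw new Error(errors[0].message);
    return data as T;
  };
}
```

Usage would look like const gql = makeGql('https://<your-project>.nhost.run/v1/graphql', fetch) followed by await gql('query { todos { id } }').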
Having the ability to define small schemas for each module gives us a great way to communicate the contract provided by a backend module and the parts of the API required by its associated frontend module.
It can be extended very easily and it gives each monolith the ability to ask for anything they want. On top of that we are using a ton of subscriptions so having the ability to use a common language even for the websocket parts is great.
So in the end, the ability to request anything from the API, the simplicity to extend and compose APIs and the support for subscriptions are features that we would have needed in REST which are "included" in GraphQL.
Out of curiosity, is the reason the contract between these modules isn't fully known / can change often down to how your company wants to use them, or is there another reason? I'm just wondering whether it's about being able to reuse modules across different projects for multiple clients, writing integrations for other tools, or something totally different.
To give you some examples: in one use case our modules are used within desktop applications running locally; in a couple of others they are used in regular web applications; in another they are embedded in a Java server running inside a VS Code extension (no network connection required; everything runs locally). Sometimes our regular frontend modules perform queries against our backend modules, but in the VS Code extension, for example, the VS Code integration performs other kinds of queries against our backend modules.
Each module brings some capabilities, and our projects can take those capabilities and reuse some or all of them.
But that's a misconception. If your contracts between frontend and backend are not known, you cannot just "open" your backend resources to the frontend so they are queried at will. That creates a huge entanglement that is only discovered after years of maintenance (at the beginning it may seem like a huge boost in productivity, but it's just pure tech debt: the debt you pay for not designing contracts between frontend and backend).
With GraphQL, there is a minimum of communication that is done by the specs and the choices of the frameworks. There are some costs and downsides, but for me they are definitely worth it.
Backend Scala / PHP devs looked at it with distaste in their eyes, coming up with various reasons why it's not a good idea.
And to be fair with tools like openapi / grpc a lot of the benefits of graphql can be replicated.
Even if they are clumsy and fragmented compared to it in my eyes, they still work well enough and keep devs in all camps happy enough, and compromise as they say is “when everyone is equally unhappy”.
Now, placed in situations like this, I am usually forced to reimplement GraphQL tooling with the chosen API tech (for example https://github.com/ovotech/laminar), and if I as a single dev could do it, then I'd wager that with stronger tech leadership, all the other tech tribes could just smooth the edges they didn't like about using GraphQL in their respective languages... but sadly that has not been my experience.
In practice, not so much, but mostly due to teams not using it properly or leveraging it in a way that actually makes sense. I've seen teams implement GQL for something that was used by one consumer (another service) that asked for every field in the payload anyway... at least someone got promoted (I would not have promoted them for using the wrong tool for the job, when the right tool would have allowed them to ship faster).
I use Go with ent and the GQL extension. The way it works, having hooks and the privacy extension, is just so lovely. But querying is rather painful for me; it requires an additional tool, because I'm not used to the syntax. It feels very non-intuitive, but I'm rather at the beginning with it, so it's a me thing.
I was thinking why not just pure SQL queries as an alternative? All it would require is a proper filtering/security middleware.
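To gesture at what that middleware would have to do, here is a deliberately conservative sketch: read-only, single statement, allowlisted tables only. The table names are invented, and substring/regex checks like these are easy to bypass, so a production version would need a real SQL parser plus per-column and per-row authorization — which is roughly the work GraphQL resolvers do for you.

```typescript
// Hypothetical allowlist filter for client-supplied SQL.
// NOT production-safe: regex matching over SQL is easily evaded;
// this only illustrates the kind of policy such a middleware needs.
const ALLOWED_TABLES = new Set(['posts', 'comments']);

function isQueryAllowed(sql: string): boolean {
  const s = sql.trim().toLowerCase();
  if (!s.startsWith('select ')) return false; // read-only
  if (s.includes(';')) return false;          // single statement only
  // Every table referenced in FROM/JOIN must be on the allowlist.
  const tables = Array.from(s.matchAll(/\bfrom\s+(\w+)|\bjoin\s+(\w+)/g))
    .map((m) => m[1] ?? m[2] ?? '');
  return tables.length > 0 && tables.every((t) => ALLOWED_TABLES.has(t));
}
```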
I'd say it's optimized for a medium complexity API, in terms of types at least. If you don't know the types in advance, because the users can customize fields somehow, then you can't make a particularly useful GraphQL schema.
There is an underlying assumption that you will have runtime stable objects being queried/mutated.
Maybe I was bad at defining REST interfaces. But the interface-definition-language aspect of GQL has been invaluable. It's type safety across the network boundary.
Our server side library auto generates DTO structs to satisfy the schema resolvers. It just removes a lot of boilerplate work when adding routes or new return types.
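To make that concrete with a hypothetical example (the schema and names are invented, and this is hand-written to show the kind of thing a generator emits):

```typescript
// Given a schema type like:
//   type User { id: ID! name: String! email: String }
// a codegen step can emit a matching DTO, so a resolver returning the
// wrong shape is a compile error instead of a runtime surprise.
interface UserDTO {
  id: string;
  name: string;
  email: string | null; // nullable in the schema, nullable in the type
}

// The resolver must satisfy the generated type across the network boundary.
const userResolver = (id: string): UserDTO => ({
  id,
  name: 'Ada',
  email: null,
});
```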
Being able to co-locate data queries with components in React has been a huge game changer for me when writing reasonably complex frontend apps. However, GraphQL on its own never struck me as being any more beneficial than Rest.
Pretty much like Redux, or Facebook's ridiculous architecture before that (Flux? Something like that?)
I hear more and more React and Next.js hate every day as well.
Hopefully this is a sign we're ready for the next great thing in frontend.
Not sure what it will be; personally I moved all the projects I can to SolidStart.
What led you to this feeling? I know you want to hear from people on why GraphQL is good, but why would REST have been a better choice in that situation? I'm interested as someone who works with both day to day.
Here were some of the things I noticed when comparing it to REST:

- Caching can become a nightmare and requires a lot of effort to get working correctly for non-trivial use cases; you cannot really use the browser's built-in cache-control headers with GraphQL
- Another caching one: you are almost required to have some kind of server-side cache in addition to the client-side cache, and it can quickly become disorienting trying to figure out exactly where something is cached and why, or why you are getting stale data, etc.
- Some abstractions in GraphQL can make the code hard to follow/read in my opinion (data loaders, for example) and also make it hard to tell where the data is actually coming from, especially in federated subgraphs
- Error handling in GraphQL can be really unintuitive, and more work is needed to not have the error response come back as a 200 status code (or to handle it correctly if it is an error inside of a 200)

I would use GraphQL if it were natively supported (just like GET/POST/PATCH is, so no libraries on top are needed).
But then you got the cargo cult thing, and now we get what we get.
On a positive side - it reduces unemployment :)