It is far better for the Backend to provide the Frontend a contract (you can do it with OpenAPI/Swagger): here are the endpoints, here are the allowed parameters, here is the response you will get, and we will make sure this narrowly defined scope works 100% of the time!
It sure is better for the backend team, but the client teams will need to have countless meetings begging to establish/change a contract and always being told it will come in the next sprint (or the one after, or in Q3).
> This leads to far more finger-pointing/inefficiency in the long run, despite whatever illusion of short-term expediency it creates.
It is true it can cause these kinds of problems, but they take far, far, far less time than mundane contract-agreement conversations. Catastrophic failures are usually pretty dire when they do happen, but there are a lot of ways of mitigating them as well, like good monitoring and staggered deployments.
It is a tradeoff to be sure; there is no silver bullet.
If your front-end engineers end up twiddling their thumbs (no bugs/hotfixes), perhaps there is time (and manpower) to try to design and build a "new" system that can cater to the new(er) needs.
With my single WordPress project I found that WP GraphQL ran circles around the built-in WP REST API because it didn't try to pull in the dozens of extra custom fields that I didn't need. Not like it's hard to outdo anything built-in to WP tho...
It's really not, it's not exposing your whole DB or allowing random SQL queries.
> It is far better for the Backend to provide the Frontend a contract
GraphQL does this - it's just called the GraphQL "schema". It's not your entire database schema.
And the REST API can still get hammered by the client - they could do an N + 1 query on their side. With GraphQL at least you can optimize this without adding a new endpoint.
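To make the "optimize without adding a new endpoint" point concrete, here is a minimal sketch of the batching idea behind DataLoader-style resolvers. All names (`fetch_users_batched`, `resolve_authors`, the fake data) are hypothetical; a real server would use a GraphQL library and a real database:

```python
# Sketch: batching collapses N separate lookups into one fetch,
# which is how a GraphQL server can fix an N+1 pattern server-side.

def fetch_users_batched(user_ids):
    """Pretend this is one SQL query: SELECT ... WHERE id IN (...)."""
    fake_db = {1: "alice", 2: "bob", 3: "carol"}
    return {uid: fake_db[uid] for uid in user_ids}

def resolve_authors(post_author_ids):
    # Naive resolution would hit the DB once per post (N+1).
    # Batching collects the ids first and fetches them in one call.
    unique_ids = sorted(set(post_author_ids))
    users = fetch_users_batched(unique_ids)   # single round trip
    return [users[uid] for uid in post_author_ids]

print(resolve_authors([1, 2, 1, 3]))  # ['alice', 'bob', 'alice', 'carol']
```

With a plain REST API, the equivalent fix usually means shipping a new bulk endpoint and migrating clients to it.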
- Each individual thing available in the request should be no less timely to handle than it would be via any other API
- Combining too many things together in a single call isn't a failing of the GraphQL endpoint, it's a failing of the caller; the same way it would be if they made multiple REST calls
Do you have an example of a call to a GraphQL API that would be a problem, but that wouldn't be a problem using some other approach?
> Then we just come back full round trip to REST
Except that GraphQL allows the back end to define the full set of fields that are available, and the front end can ask for some subset of that. This allows for less load, both on the network and on the data the back end needs to fetch.
From a technical perspective, GraphQL is (effectively) just a REST API that allows the front end to specify which data it wants back.
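A toy illustration of that "REST that lets the client pick fields" framing, in plain Python (the record, the allowed-field set, and `select_fields` are all hypothetical; real GraphQL parses a query document rather than a field list):

```python
# The back end defines the full set of exposed fields (the "schema");
# the client asks for a subset, and nothing outside it is reachable.
FULL_RECORD = {"id": 7, "name": "Widget", "price": 9.99,
               "description": "A fine widget", "internal_cost": 4.20}

ALLOWED_FIELDS = {"id", "name", "price", "description"}  # not internal_cost

def select_fields(record, requested):
    bad = set(requested) - ALLOWED_FIELDS
    if bad:
        raise ValueError(f"unknown fields: {sorted(bad)}")
    return {f: record[f] for f in requested}

print(select_fields(FULL_RECORD, ["id", "price"]))  # {'id': 7, 'price': 9.99}
```

The point is that field selection narrows the response within a server-defined boundary; it is not "the client can query anything".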
When you use the "REST" / JSON-over-HTTP pattern which was more common in 2010, changes in query patterns necessarily involve the backend team, which means they are aware of the change & have an opportunity to get ahead of any performance impact.
I've never gotten a good answer to that question, so I've never even considered GraphQL in such systems where it may have made sense.
I can see it in something big like Jira or GitHub to talk to itself, so the backend & frontend teams can use it to decouple a bit, and then if something goes wrong with the performance they can pick up the pieces together as still effectively one team. But if that crosses a team boundary the communication costs go much higher and I'd rather just go through the usual "let's add this to the API" discussions with a discrete ask rather than "the query we decided to run today is slow, but we may run anything else any time we feel like it and that has to be fast too".
How does this follow? A client team can decide to e.g. put up a cross-sell shelf on a low-traffic page by calling a REST endpoint with tons of details, and you have the same problem. I don't see the difference in any of these discussions; the only thing different is the schema syntax (GraphQL vs. OpenAPI).
That being said, it's a lot easier to set up caching for REST calls.
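One reason for that: a REST GET is cacheable by any intermediary on the URL alone, while a GraphQL call is usually a POST whose cache key has to incorporate the query body and variables. A rough sketch of the difference (hypothetical key formats, Python stdlib only):

```python
import hashlib
import json

def rest_cache_key(method, url):
    # Plain GETs can be cached by browsers/CDNs/proxies on URL alone.
    return f"{method} {url}"

def graphql_cache_key(query, variables):
    # A GraphQL POST needs the query text + variables hashed into the key,
    # which is why generic HTTP caches can't help out of the box.
    payload = json.dumps({"q": query, "v": variables}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

print(rest_cache_key("GET", "/api/products/7?fields=id,price"))
print(graphql_cache_key("{ product(id: 7) { id price } }", {}))
```

(Persisted queries and GET-based GraphQL exist as workarounds, but they are extra machinery you get for free with REST.)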