Notably: "Fetching complicated object graphs require multiple round trips between the client and server to render single views. For mobile applications operating in variable network conditions, these multiple roundtrips are highly undesirable."
Now, I'm wondering how they manage to keep the computation of responses on the server side from getting too expensive. It seems clear that in such a system there is a risk of defining queries that pull far too much data at once. Also, the question of pagination comes to mind. How can you handle that efficiently?
In terms of keeping things inexpensive, an important attribute of the system is that the server publishes capabilities that clients selectively use. For example, we explicitly do not allow clients to send up arbitrary strings for filtering, query predicates, and whatnot; servers have to explicitly expose those via arguments to fields, e.g. eventMembers(isViewerFriend: true) { ... } or similar formulations that are encoded in the type system. This prevents people from querying inefficiently (e.g. large data sets without indexes).
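A minimal sketch of that idea, in plain JavaScript (the data, field name, and rejection behavior are all illustrative assumptions, not the actual implementation): the server declares exactly which filter arguments exist, and anything else is rejected rather than interpreted.

```javascript
// Hypothetical in-memory data; names and shape are made up for illustration.
const eventMembers = [
  { id: 1, name: "Ada", isViewerFriend: true },
  { id: 2, name: "Grace", isViewerFriend: false },
  { id: 3, name: "Alan", isViewerFriend: true },
];

// The server publishes exactly one filter capability: a typed boolean argument.
// Clients cannot send arbitrary predicate strings; anything outside the
// declared argument set is rejected up front.
function resolveEventMembers(args) {
  const allowed = new Set(["isViewerFriend"]);
  for (const key of Object.keys(args)) {
    if (!allowed.has(key)) throw new Error(`Unknown argument: ${key}`);
  }
  let result = eventMembers;
  if ("isViewerFriend" in args) {
    result = result.filter(m => m.isViewerFriend === args.isViewerFriend);
  }
  return result;
}

const friends = resolveEventMembers({ isViewerFriend: true });

let rejected = false;
try {
  resolveEventMembers({ where: "1=1" }); // arbitrary predicate string: refused
} catch (e) {
  rejected = true;
}
```

Because only declared arguments are honored, the server can back each one with an indexed lookup instead of evaluating arbitrary client-supplied predicates.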
Re: pagination. This is absolutely a core pattern that we are excited to talk about as we explain the system more. Broadly we handle pagination through call arguments, e.g. friends(after: $someCursor, first: 10) { ... } There's a lot of subtlety there which I won't go into until we dive into that topic deeply.
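One way the friends(after: $someCursor, first: 10) { ... } pattern could work is cursor-based pagination. The sketch below is an assumption about the mechanics (in particular, encoding an index as an opaque base64 cursor is just one possible scheme), not the actual server implementation:

```javascript
// Illustrative data set.
const allFriends = ["alice", "bob", "carol", "dave", "erin"];

// Opaque cursors: here simply a base64-encoded index (an assumption).
const toCursor = i => Buffer.from(String(i)).toString("base64");
const fromCursor = c => Number(Buffer.from(c, "base64").toString());

// friends(after: ..., first: ...): return the page after the given cursor.
function friends({ after, first }) {
  const start = after != null ? fromCursor(after) + 1 : 0;
  const slice = allFriends.slice(start, start + first);
  return {
    edges: slice.map((name, i) => ({ node: name, cursor: toCursor(start + i) })),
    pageInfo: { hasNextPage: start + first < allFriends.length },
  };
}

const page1 = friends({ first: 2 });
const page2 = friends({ after: page1.edges[1].cursor, first: 2 });
```

The client never inspects the cursor; it just feeds the last cursor it saw back into the next call, which lets the server change its paging strategy without breaking clients.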
What's really exciting is an expressive model that allows code to state what dependencies it has. The transport and fulfillment of the needed data (what intermediary stores the data passes through, and how pending views are notified when it becomes available) is a more mechanistic, rote, already-solved task, albeit one that each company seems to tackle on its own. The step further, declaring and modeling dependencies, is what makes GraphQL an interesting capability.
SoundCloud's three-year-old blog post is a good reference showing that "instance stores" (these client-side object-database services) have been around for a good while, and can be done just as easily with REST as without.
In practice, I think that a GraphQL API is still a form of REST.
People hate on REST when they should be hating on the bad instances of it that others built and labeled "But it's REST!", regardless of whether it actually was, or whether that person had RTFM: https://www.ics.uci.edu/~fielding/pubs/dissertation/fielding...
> We are interested in the typical attributes of systems that self-identify as REST, rather than systems which are formally REST.
Edit: I see that they partially address this: "Many of these attributes are linked to the fact that “REST is intended for long-lived network-based applications that span multiple organizations” according to its inventor. This is not a requirement for APIs that serve a client app built within the same organization."
* Clients communicate their desired projection and page size (via $select and $top query string parameters), which can then easily be mapped by the service into efficient calls to the underlying data store.
* OData client page sizes are polite requests, not demands. The server is free to apply its own paging limits, which are then communicated back to the client along with the total results count and a URL that can be followed to get the next page of results. Clients are required to accept and process the page of entities they are given, even if that number differs from the count which was requested due to server limits.
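That "polite request, not demand" behavior can be sketched as follows (the cap value and URL shape are invented for illustration; the $top/$skip parameter names are borrowed from OData):

```javascript
// The server enforces its own max page size regardless of what the client asks for.
const SERVER_MAX_PAGE_SIZE = 3; // illustrative limit

function getPage(items, { top, skip = 0 }) {
  const effectiveTop = Math.min(top, SERVER_MAX_PAGE_SIZE);
  const page = items.slice(skip, skip + effectiveTop);
  // Alongside the results, return the total count and a link to the next page.
  const nextLink = skip + effectiveTop < items.length
    ? `/items?$top=${effectiveTop}&$skip=${skip + effectiveTop}`
    : null;
  return { value: page, count: items.length, nextLink };
}

const items = [1, 2, 3, 4, 5];
// The client asks for 10; the server caps the page at 3 and says how to continue.
const res = getPage(items, { top: 10 });
```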
I'd assume GraphQL will adopt similar functionality, if it hasn't already.
In some ways one of the biggest advances I see from GraphQL/Relay is that it should avoid most of the versioning hell for mobile: there's effectively an agreed interop language for communicating data needs, so backwards compatibility during API evolution should be far less complicated.
For example, if you request your top three friends and their most recent posts using REST, you'll probably need to make four requests. And you can't parallelize them, because you need to know your friends' IDs before you can construct the post requests.
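The dependency can be simulated to count the round trips (the endpoints and data here are invented stand-ins for a real REST API):

```javascript
// Fake REST endpoints; each call counts as one network round trip.
let roundTrips = 0;
const postsByFriend = { 1: "post-a", 2: "post-b", 3: "post-c" };

function getTopFriends() { roundTrips++; return [1, 2, 3]; }
function getLatestPost(friendId) { roundTrips++; return postsByFriend[friendId]; }

// The post requests cannot be issued until the friend IDs arrive,
// so the client pays 1 request, then 3 more: 4 round trips total.
const ids = getTopFriends();
const latest = ids.map(getLatestPost);
```

A single GraphQL query expressing the same shape would let the server do the fan-out internally, collapsing this to one round trip.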
The client application should drive the decisions about what data to fetch. After all it's the client that knows what it is actually going to be doing with the data, not the server. Current approaches like having a "fields" option on a REST endpoint are at best a hacky approximation to this.
https://www.youtube.com/watch?v=hOE6nVVr14c
It's a different implementation of the same concept.
On the server you end up caching at lower levels in the stack. For example a query for user(id: 123456) {id, name} is going to need data from a key-value store containing user info. That access can easily be cached with something like memcache, saving load on the database. Cache-invalidation problems are also much easier to solve at these layers.
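A read-through cache in front of that key-value lookup might look like this (the Maps stand in for a real database and memcache; the shape of the user record is made up):

```javascript
// "db" plays the database; "cache" plays memcache.
const db = new Map([[123456, { id: 123456, name: "Jon" }]]);
const cache = new Map();
let dbReads = 0;

function getUser(id) {
  if (cache.has(id)) return cache.get(id); // cache hit: no database load
  dbReads++;                               // cache miss: go to the database once
  const user = db.get(id);
  cache.set(id, user);
  return user;
}

const first = getUser(123456);  // misses the cache, reads the database
const second = getUser(123456); // served entirely from cache
```

Invalidation at this layer is simpler because it keys on a single record (delete cache entry 123456 when that user changes) rather than on every query whose result might include that user.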
And it isn't obvious what they're using for transport, but it seems like they aren't attempting to model programmatic resources as web resources the way that OData does. This is an okay decision if they're trying to make it transport-neutral (i.e. you can issue the same GraphQL request via Thrift or by HTTP POST), but in that direction lie the sins of SOAP.
In the past I've written a client-side caching layer for OData which was capable of doing the same automatic batching and partial cache fulfillment for hierarchical queries that they describe in the article. It is a good tool for writing complex client applications against generalized data services without giving up performance, and I'm not surprised that companies in our post-browser world are starting to move in that direction.
I'm a little bummed that Facebook is throwing its considerable weight behind yet another piece of NIH-ware, though. Beating up the REST strawman was a poor use of half of this article; I'd be much more interested to hear why we need GraphQL when there exists a standard like OData.
It would be nice to have custom adapter endpoints for your clients and devices that in turn fan out to multiple calls at the backend border (if you are using a service-based backend architecture), while still having the option of going directly to the service endpoints for third-party integrations and whatnot. Having this adapter layer based on GraphQL would be neat, assuming one could break GraphQL queries down into individual REST-based endpoint calls.
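A toy version of such an adapter (everything here is hypothetical: the field-to-endpoint mapping, the URLs, and the canned responses) could map the top-level fields of an already-parsed query onto individual REST calls and merge the results:

```javascript
// Hypothetical mapping from top-level query fields to REST endpoints.
const endpointForField = {
  user: id => `/users/${id}`,
  friends: id => `/users/${id}/friends`,
};

// Fake transport, keyed by URL, standing in for real HTTP GETs.
const restResponses = {
  "/users/42": { id: 42, name: "Ada" },
  "/users/42/friends": [{ id: 7 }, { id: 9 }],
};
const fetchedUrls = [];
function restGet(url) {
  fetchedUrls.push(url);
  return restResponses[url];
}

// The adapter fans one (pre-parsed) query out into per-field REST calls.
function resolveQuery(fields, id) {
  const result = {};
  for (const field of fields) {
    result[field] = restGet(endpointForField[field](id));
  }
  return result;
}

const data = resolveQuery(["user", "friends"], 42);
```

Third parties could still hit /users/42 directly, while first-party clients go through the adapter and get everything in one response.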
[0] https://github.com/RickWong/react-transmit
[1] http://www.getbreezenow.com/
[2] https://www.youtube.com/watch?v=WiO1f6h15c8
P.S. Also, GraphNoQL [3] came out quickly after the announcement, but there's been no progress ever since.
https://github.com/uber/jetstream
https://github.com/uber/jetstream-ios
It's a considerably different model, however, focused more on realtime updates.
I see this as the perfect companion for REST and I hope it will be standardized. Kudos
user {
id: 3500401,
name,
isViewerFriend,
profilePicture {
size= 50,
uri,
width,
height
}
}
As shown, you could use a different indicator for filter properties that should not be included in the serialized object graph. Maybe the idea is to do all of that on the server? Or in the Store? Very curious to see the implementation.
2. I think the biggest issue with HTTP verbs is that there is no flexible way for the client to control the data that comes back from the server. GETs don't have a body, and the other verbs are for adding/modifying data. I'm assuming that they are using POST for everything.
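If that assumption holds, a query-over-POST request might look roughly like this (the /graphql path and JSON envelope are guesses, not anything the announcement confirms):

```javascript
// Hypothetical shape of a GraphQL query sent as an HTTP POST body.
const query = `{ user(id: 3500401) { name, isViewerFriend } }`;

const request = {
  method: "POST",          // queries ride in the body, so GET is out
  url: "/graphql",         // assumed single endpoint
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
};

const parsedBody = JSON.parse(request.body);
```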
The question I have is rather whether Relay is an alternative to Flux, or whether the two are meant to be used in conjunction. Also, which use cases would make Flux the better choice, and which Relay (if they are not to be used together)?