When you're using their internal language it's often great. Any other and it's often a nightmare. For example, it's very common to see people write bloody awful .Net wrappers that are completely non-idiomatic and a complete PITA to use.
Often they're written in a way that makes it clear the author either doesn't understand OO or hasn't kept up with C# and still thinks it's just like Java, so writes extremely old-fashioned code. And namespaces. They want you to use a million namespaces. It's a minor thing, but a completely unnecessary complication. They could stick everything in one namespace and get no conflicts.
And then, because they've put out client libraries, they don't document their API properly.
Google, as usual, are the worst offender, their .Net library is really bad and incredibly overcomplicated. It does make you wonder about all the hype of 'best' engineers.
The other problem is that they assume you'll be using their API one way, when it needs to be another, and their code just gets in the way. Because they don't provide a snippet that works without their library, you end up having to ILSpy the library, only to be greeted with shockingly bad code full of pointless interfaces that only get used once, because, again, they don't understand .Net.
Maybe their "'best' engineers" are working on anything but .Net? My impression of the Google culture is that they're more focused on platforms for which .Net is not a factor.
How about this message signature:
Twilio::PhoneNumbers.buy_a_number!(options) # returns the new number or raises an error
I have this implemented in the bowels of Appointment Reminder, because back in the day Twilio did not ship that functionality with the API. The "snippet" to do it requires about 30 lines. They're also probably the wrong 30 lines, because e.g. if Internet gremlins bushwhack one of the HTTP requests, it dies uncleanly. I'd have been mighty obliged if it was one line which worked atomically. (It is, to my understanding, now that there exists a decent first-party Twilio library. I recall showing that method on a slide at Twilio HQ one day while begging for that first party library to get created.)
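For context, the multi-step dance being complained about is roughly: search for an available number, then purchase the first hit. The sketch below is illustrative only; the paths, field names, and the `session` object (assumed to be any requests-style HTTP client) are not the actual Twilio API:

```python
class NumberPurchaseError(Exception):
    """Raised when any step of the search-then-buy flow fails."""

def buy_a_number(session, base_url, area_code):
    """Search for an available number, then buy the first candidate.
    Note this is NOT atomic: if Internet gremlins bushwhack the second
    request, you are left mid-flow, which is exactly the complaint above."""
    # Step 1: find candidate numbers (illustrative path and params)
    resp = session.get(f"{base_url}/AvailablePhoneNumbers/US/Local.json",
                       params={"AreaCode": area_code})
    if resp.status_code != 200:
        raise NumberPurchaseError(f"search failed: {resp.status_code}")
    candidates = resp.json().get("available_phone_numbers", [])
    if not candidates:
        raise NumberPurchaseError(f"no numbers in area code {area_code}")

    # Step 2: purchase the first candidate
    resp = session.post(f"{base_url}/IncomingPhoneNumbers.json",
                        data={"PhoneNumber": candidates[0]["phone_number"]})
    if resp.status_code not in (200, 201):
        raise NumberPurchaseError(f"purchase failed: {resp.status_code}")
    return resp.json()
```

Collapsing this into one first-party `buy_a_number!` call is precisely what makes the error handling someone else's (well-tested) problem.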
And the reasons in the post are not particularly compelling:
1) Batch requests are usually unnecessary, or benefit from a call optimized for batching.
2) Caching is rarely needed, potentially dangerous, and can be done elsewhere.
3) Throttling can and should be performed elsewhere, and there's no way to prevent a DoS anyway.
4) Timeouts are usually easy.
5) GZIP is rarely necessary.
6) Dangerous to let someone else's code do it.
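Point 4 in particular really is trivial in most stacks; with nothing but the standard library, a hard timeout is one keyword argument (the URL below is illustrative):

```python
import urllib.request

def fetch(url, timeout=5):
    """Fetch a URL with a hard timeout, no client library required.
    Raises URLError/TimeoutError if the server is slow or unreachable."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()
```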
This way I can bring into my application the following:
- Parameterised methods with code doc (so when I reference them I can see what's what in my IDE).
- Exception handling.
- My own batch methods in the absence of them in the API. E.g. a "book delivery date" API = get delivery slots for the address, select the appropriate slot matching the date, book it. All of this can be one client method which raises an exception when things go wrong.
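That composite "book delivery date" method might be sketched like this; `DeliveryClient`, the endpoint method names, and the slot shape are all hypothetical:

```python
class DeliveryApiError(Exception):
    """Single exception type for when any step of the composite call fails."""

class DeliveryClient:
    """Hypothetical wrapper composing three raw API calls into one method."""

    def __init__(self, api):
        self.api = api  # any object exposing the raw endpoints

    def book_delivery(self, address, date):
        """Get slots -> pick the one matching the date -> book it."""
        slots = self.api.get_delivery_slots(address)
        matching = [s for s in slots if s["date"] == date]
        if not matching:
            raise DeliveryApiError(f"No delivery slot available on {date}")
        return self.api.book_slot(matching[0]["id"])
```

The caller sees one method and one exception, instead of three requests and three failure modes.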
If the server already has the bytes gzipped, it is often a pure win to ship the gzipped bytes: the client may finish uncompressing the gzipped response before it could even have received the last byte of the uncompressed response.
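A quick illustration of why: a typical repetitive JSON payload compresses so well that far fewer bytes ever cross the wire (the payload here is synthetic):

```python
import gzip
import json

# A typical repetitive JSON API response body...
payload = json.dumps([{"id": i, "name": "widget"} for i in range(1000)]).encode()
compressed = gzip.compress(payload)

# ...shrinks dramatically under gzip, so the last compressed byte arrives
# long before the last uncompressed byte would have, and decompression
# on the client is cheap by comparison.
print(f"{len(payload)} bytes raw, {len(compressed)} bytes gzipped")
```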
It's a load of effort to correctly consume from a RESTful webservice currently.
I should be able to get going in my REPL with something like:
>>> from rest import client
>>> proxy = client(url)
>>> print proxy.resources
['foo', 'bar']
>>> help(proxy.foo)
Help text from the rest service...
>>> proxy.foo(123)
321
# repeat calls transparently handle caching

>>> from hammock import Hammock as Github
>>> # Let's create the first chain of hammock using base api url
>>> github = Github('https://api.github.com')
>>> # Ok, let the magic happens, ask github for hammock watchers
>>> resp = github.repos('kadirpekel', 'hammock').watchers.GET()
>>> # now you're ready to take a rest for the rest of the code :)
>>> for watcher in resp.json: print watcher.get('login')
kadirpekel
...
Your "resources" example only works if there's a /resources endpoint, but client.foo(id).GET() works just fine.

For instance, on iOS we have the wonderful RestKit client[0]. If you create your own client it means I would have to write specific cases for your API and miss all of RestKit's features. Don't get me wrong, I could still use RestKit with your API, but when I see an API client available, I always think "this API may be badly designed, it needs specific code".
- Referring to other data paths in the API in a standard format
- Authentication. Shy of OAuth, it is totally different on each API. Most APIs avoid OAuth because it's a PITA
- Partial PUTs vs PATCH vs sub-resources
All sorts of minor things are different across rest APIs, and that results in client libraries needing to be widely different.
We are in need of a standardized API format which we can build compatible server & client libraries against. Something like SOAP for the JSON era. There are a few out there, but none have really gone anywhere. Are any of these extended-REST wrappers in production by any big companies? I'd love to be corrected.
If not, maybe a high-profile company with a really nice API design could publish a standard on their API structure and refactor out their transport code to provide us with these libraries. If successful, they could be known for introducing a widely-used transport layer for the web industry.
I don't get what you mean about data paths, though. Even if you had a hypothetical good REST client you'd still want a single point to do the URL string assembly to save typing and do some validation but this doesn't really require a whole different client.
Needless to say, OData is a standard pushed by Microsoft, so the ecosystem for libraries is mostly .NET, although they provide a JavaScript library as well (datajs).
https://en.wikipedia.org/wiki/Web_Application_Description_La...
I guess I really don't get REST, shipping a REST client for a specific API is a thing. If you're gonna provide a client, who cares if it's really REST after all?
If the client lib has an API-specific way of handling "caching", say, then the number of ways of handling "caching" is O(n) in the number of APIs you consume. If you use the REST APIs directly, then HTTP's standard, debugged, documented caching mechanism is the only one you ever need to know.
I think GET would be a reasonable default here, perhaps 123 is a query string parameter.
For update/POST, create/PUT, etc., it's less clear cut in my example language of Python, since all we have is a callable; we don't have constructs such as "new" to map behaviours to (repurposing "del" isn't possible).
Perhaps there's an extra parameter:
client.account(id=123, name="something", _method="PUT")
Maybe there's a postfix operation: client.account(id=123, name="something").PUT()
Maybe the verbs are separate: from rest import client, PUT
PUT(client.account(id=123, name="something"))
I think the postfix system probably makes the most sense. I'm sure there are other ways I could come up with. Does this suck?
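A minimal sketch of the postfix idea: attribute access builds the path, keyword arguments carry the payload, and .PUT()/.GET() pick the verb. To keep the shape visible without any HTTP, the verb methods here just return the request they would make rather than sending anything (all names are hypothetical):

```python
class Resource:
    """Dynamic proxy: client.account(id=123).PUT() style access."""

    _VERBS = ("GET", "PUT", "POST", "DELETE")

    def __init__(self, base, path=""):
        self._base = base
        self._path = path
        self._params = {}

    def __getattr__(self, name):
        # Uppercase verb: return a callable that "performs" the request.
        if name in Resource._VERBS:
            return lambda: (name, self._base + self._path, self._params)
        # Anything else extends the URL path.
        return Resource(self._base, f"{self._path}/{name}")

    def __call__(self, **params):
        self._params = params
        return self
```

A real implementation would hand the (verb, url, params) triple to an HTTP library; the postfix verb keeps the call site readable while staying within plain Python syntax.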
However, I've also seen the other side of this working at IFTTT, where (at the time) we had a Gemfile a mile long. That got really hairy. I now try to avoid using clients. I go into it in great detail here: https://www.youtube.com/watch?v=dBO62A3XaSs and we've talked about this many times on trafficandweather.io if you want to learn more.
I love Twilio's API, I just don't understand the REST part. I fail to see how "PATCH /accounts/123/ <postbody>" is better than "POST /accounts/updateaccount <postbody accountid=123 ...>", especially when hidden behind a lovely client.
What I really don't like is that they don't accept my Accept headers, and I have to change the extension on the end. Going to /.json feels like ick to me.
Is this: https://github.com/hannestyden/hyperspec the HyperSpec you refer to?
[0]: https://speakerdeck.com/johnsheehan/building-api-integration...
Do you have any more information about automatic service/endpoint discovery and also about the smart HTTP client you use?
I would have thought that failing over to another endpoint address would have been done at a load balancer level.
We definitely need to publish more about it.
https://code.google.com/p/google-apis-client-generator/
The service is defined in a platform-neutral discovery document, which can be used by any provider:
https://developers.google.com/discovery/v1/reference/apis
There are generators for Python, Java, .NET, Objective-C, PHP, Go, GWT, Node.js, Ruby, and others.
So we've gone full circle and arrived back at SOAP. Why didn't we just keep using SOAP in the first place?
Caching, throttling, timeouts, gzip, error handling.
This all seems like something any serious user of a REST API should know how to handle very well. If not otherwise, then by use of a standard library.

Why does every API owner have to write basically identical clients in every language out there?
For example, I have extensive use of Twilio and Pin Payments (among other APIs) in Appointment Reminder. My use of the Twilio API can potentially spike into the hundreds of requests a second range, but if it gets into thousands of requests a second, that needs to throw a PresidentOfMadagascarException ("Shut. Down. Everything.") Code which does not interact with the Twilio API but instead implements meta-features on top of the Twilio API comprises 80%+ of lines of code implicating the Twilio API in my application.
By comparison, querying the Pin Payments API doesn't need a rate limit at all, but does need a sane caching strategy, because it requires thousands of API calls to answer a simple, common question like "How much did we sell in 2013?" Again, meta-features for the API comprise over 80% of lines of code implicating that API, but they're totally different meta-features.
AR is doing the 90% case with both of these APIs -- sane defaults in first-party clients would have greatly eased my implementation of them, allowing me to focus on features which actually sell AR, to the benefit of both my business and those of the APIs at issue, since their monthly revenue from me scales linearly with my success.
I fully understand the benefits of REST, but on a lot of projects, a single endpoint and a simple RPC protocol would be easier to integrate without need for a separate client library.
Also, client libraries don't always do the best job of making it clear when you are making api calls over the wire and that can be quite problematic.
For example, I've worked with code that made an HTTP request to get a price, which was then used in a model calculation. This code looked completely harmless at the highest level, but each page request was hitting the server 100+ times to do all the calculations needed.
After finding the problem, adding some caching was easy and now things run faster. However, that level of abstraction and indirection makes it far less obvious if/when/where HTTP requests happen, and that isn't always a good thing.
I'm not sure I understand this. Surely the point of if-modified-since and etag headers is that you can send them in the GET request and get back a 304, there is no need to do a HEAD-then-GET?
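The round trip being described fits in a few lines. In this sketch, `session` is assumed to be any requests-style object with a `.get(url, headers=...)` method, and the cache is a plain dict; no HEAD request is ever needed:

```python
def conditional_get(session, url, cache):
    """GET with validators; a 304 means 'reuse your cached copy'."""
    headers = {}
    if url in cache:
        etag, last_modified, body = cache[url]
        if etag:
            headers["If-None-Match"] = etag
        if last_modified:
            headers["If-Modified-Since"] = last_modified

    resp = session.get(url, headers=headers)
    if resp.status_code == 304:
        # Server confirms our copy is current; no body was sent.
        return cache[url][2]

    # Fresh body: remember its validators for next time.
    cache[url] = (resp.headers.get("ETag"),
                  resp.headers.get("Last-Modified"),
                  resp.content)
    return resp.content
```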
> For POSTs we do NOT encode our data, as CompanyX's REST API expects square brackets which are normally encoded according to RFC 1738. urllib.urlencode encodes square brackets which the API doesn't like.
Retrospectively, I should have asked the company to fix their bug, rather than work around it myself, but at the time I was a less confident programmer.
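The quirk described is easy to reproduce with the modern urllib.parse (the quoted comment was Python 2's urllib); the parameter name below is made up:

```python
from urllib.parse import urlencode, quote

params = {"FilterBy[status]": "active"}

# urlencode percent-encodes the square brackets, per the URL spec...
assert urlencode(params) == "FilterBy%5Bstatus%5D=active"

# ...so an API that insists on literal brackets needs hand assembly,
# e.g. by whitelisting [] as "safe" characters:
def encode_with_brackets(params):
    return "&".join(
        f"{quote(k, safe='[]')}={quote(str(v))}" for k, v in params.items()
    )

assert encode_with_brackets(params) == "FilterBy[status]=active"
```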
I know of some large companies which do this already, one of them being braintree, which offers an amazing API in many different languages, but they are already profitable and were bought by paypal.
Two notes:
2) No need to mask requests with a HEAD; a GET can also return a 304 directly.
6) De-duplication of calls: Any method except POST should be idempotent already, hence retry-on-error is also trivial in those cases.
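That retry point can be made concrete: since idempotent verbs can be repeated without double-applying an effect, a blind retry wrapper is a few lines. This is a generic sketch, not any particular library's API; do not wrap a non-idempotent POST in it:

```python
import time

def retry_idempotent(tries=3, delay=0.0, exceptions=(OSError,)):
    """Retry decorator that is safe ONLY for idempotent calls
    (GET/PUT/DELETE): re-sending them cannot double-apply an effect."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == tries - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(delay)
        return wrapper
    return decorate
```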
http://platform.qbix.com/guide/patterns
The Q platform was supposed to take care of the things that you have to do anyway when writing social apps.
Maybe that just means writing a thin wrapper around one of these libraries specific for your API. But don't reinvent the wheel every time.
Your consumers should know how to handle HTTP requests from within their language. If they don't, then no Client for your API will save them.