If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation. This would suggest a restful api is not made for system-to-system communication, but requires human mediation at every step of the way, which is precisely not how api’s are used. In short, this definition of restful api’s can’t be right, because it suggests restful interfaces aren’t api’s at all. Once a programmer adds code to parse the responses and extract meaningful data, treating the restful interface as an api, the tight coupling between client and server reappears.
Which is exactly what REST was originally designed to do: provide an architecture for the Internet (not your app or service) that allows humans using software clients to interact with services developed by programmers other than those who developed the clients. It was about an interoperable, diverse Internet.
If the distributed application is not developed across distributed organizations, particularly independent unassociated organizations, then the architectural style of REST is overkill for what you intend and you could have just kept using RPC the whole time.
The point of the later RESTful API movement was to create distributed applications that leveraged the underlying architecture principles of the internet within their smaller distributed application. The theory being that this made the application more friendly and native to the broader internet, which I do agree is true, but was never the original point of REST.
That said, xcamvber [1] is right: this is me being an old person fighting an old person battle.
The whole idea of embedding links into the data that describe available operations was not seen as useful, because most web pages already do that. That was not a problem that needed to be solved.
But the concept of resource-oriented architectures which leveraged HTTP verbs to act on data with descriptive URIs was extremely useful in an era when interactions with web servers would look something like POST /common/backend/doActMgt.pl
Books like RESTful Web Services came out in 2007 and focused almost entirely on resource-oriented architecture. There's not really much mention of hypermedia in it. It's mostly about building API endpoints.
It also referenced AWS S3 (wow, S3 is old) a lot and treated it as a reference implementation / proof of concept that the idea works and is scalable.
It may not matter for a ton of "APIs", but there are a number of places within applications that would benefit from this form of decoupling vs. a static client knowing what to do with endpoints. Conflating the two makes actual REST hard for engineers to understand and utilize.
I think this is most clearly described by two things Fielding wrote (and the original article links to):
https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...
https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
the trouble is that HTML is anything but generic. If you've ever tried to write a web scraper that works on _any_ webpage, you quickly discover that it's near impossible. I used to believe that there should be one way to use HTML to describe the page content, with everything else left to CSS, but I gave up: that approach is completely inflexible and has been abandoned in favor of simply describing presentation, which is actually practical. You'd need an HTML structural standard, but that would get mostly ignored (as have most of the W3C's recommendations on that subject).
As you said, "REST" and "RESTful API" are different beasts; I guess the "-ful" should be more of an "-ish".
This is the best counterpoint in this discussion and it deserves a lot of reflection.
But that reflection should include the realization that this is what the browser does all the time. Browsers don't have any particular semantic model in their head of what any particular form action or hyperlink "means" in terms of domain logic... but they do have some semantics for broad classes of verbs and other interactions associated with markup-represented potential actions, and so they serve as an agent presenting these to the user who does whatever wetware does in processing domain semantics and logic and makes a choice.
This has worked so well over the last 30 years it's been the biggest platform ever.
We're now in territory where we're creating software that's almost alarmingly good at doing some semantic processing out of no particular structure. The idea that we're facing a dead end in terms of potential for clients to automatically get anything out of a hypertext API seems pretty tenuous to me.
It would be very interesting, if HTML hadn't evolved with the goal of containing all kinds of data with a human representation too. But now it's mostly redundant and people just prefer encoding things in HTML.
My understanding of the vision is that when all your responses are described using (Fielding's original) REST APIs via RDF, using URI identifiers everywhere -- then a client that has never seen a particular server can still automatically figure out useful things to do with it (per the end-user's commands, expressed to the software in configuration or execution by some UI), solely by understanding enough of the identifiers in use.
You wouldn't need to write new software for each new API or server, even novel servers doing novel things would re-use a lot of the same identifiers, just mixing and matching them in different ways, and the client could still "automatically" make use of them.
I... don't think it's worked out very well. As far as actually getting to that envisioned scenario. I don't think "semantic web" technology is likely to. I am not a fan.
But I think "semantic web" and RDF are where you end up when you take HATEOAS/REST and deal with what you're describing: what do we need to standardize/formalize so the client can know something about the meaning of the response and be able to do something with it other than show it to a human, even for a novel interface? RDF, ostensibly.
The Fielding/HATEOAS REST and the original vision of RDF/the semantic web are deeply aligned technologies, part of the same ideology or vision.
REST, in its most strict form, feels like it was designed for humans to directly interact with. But this is exceptionally rare. Access will nearly always be done programmatically, at which point a lot of the cruft of REST is unnecessary.
It was literally extracted from the browser’s interaction model so… kinda?
> A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ...
> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction.
> Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.
That is, htmx.org or intercoolerjs.org might argue that "HATEOAS is [exclusively] for humans", but Roy Fielding doesn't agree, or didn't in 02008.
— ⁂ —
While I'm arguing, I'd also like to take exception to the claim that this discussion is irrelevant to anything people are doing today. It's an "old person battle," as some say, in the sense that old people are the people who have enough perspective to know what matters and what doesn't. REST matters, because it is an architectural style that enables the construction of applications that endure for decades: applications that evolve without a central administrator, remain backwards-compatible with decades-old software, and can include billions of simultaneous users.
This is an important problem to solve, the WWW isn't the last such application anyone ever needs to build, and JSON RPC interfaces can't build one.
The trouble with redefining "REST" to mean "not REST" is that the first step in learning known techniques to solve a problem is learning the terminology that people use to explain the techniques. If you think you know the terminology, but you have the wrong definition in your mind, you will not be able to understand the explanations, and you will not be able to figure out why you can't understand them, until you finally figure out that the definition you learned was wrong.
That's a conclusion I came to after watching it never catch on in the JSON API space and then trying to come up with an explanation as to why. I'd love to hear what he thinks of the idea.
Thank you for the thoughtful comment!
I read quite a lot about REST and HATEOAS, and it didn't make any sense to me.
Somehow the "magic sauce" was missing. How should a client that doesn't know anything about an API interpret its meaning?
I felt like an idiot. Like there was some high end algorithm or architecture that completely eluded me.
But in the end, it probably just meant: HATEOAS is for humans.
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
I disagree that it isn't an API, but that's a definition quibble. It is probably more profitable to talk about RESTful systems rather than RESTful APIs, since people think API == machines talking.
I don't understand: An API is an application programming interface, i.e. it is meant to be consumed by other programs. How does that go together with
> REST and HATEOAS are for humans
?
And how does that go together with the requirements of "no interpretation needed" and therefore "little coupling" between client and server that were mentioned in the article? Any API must be interpreted by the calling application – i.e. the caller must know what they are calling and what responses they might get. Otherwise they cannot do anything useful with it – at least not in an automatic (programmatic) fashion.
I really don't understand how something can be a REST API on the one hand (clear, well-documented interface; used for programming), and on the other hand is supposed to be "for humans" and devoid of "interpretation" on the client's part. (Leaving aside that, even if this were possible, the interpretation would simply be done by the very final client of the API: The human.)
All in all, I simply fail to see how ideas like "REST is for humans", HATEOAS etc. are supposed to be actionable in the real world.
What you and the parent see REST as, should be called an HPAI: “human-poking-around interface”.
I don’t think hypermedia is only for humans.
You can totally do REST for computers. You’re just supposed to divide knowledge along Content-Type boundaries.
It’s true people mostly don’t do this, but it works great when people bother to describe rich Content-Types.
If it's your stance that an interface designed to be interpreted by a program cannot be RESTful, then you could just shorten your rant to 'REST APIs cannot exist by definition'. It would save time. It's fair enough to be annoyed by words changing meaning I suppose.
It also seems like your RESTful system definition would include any server serving up straight html without client side scripting.
The client "knows nothing about the meaning of the responses" only inasmuch as it intentionally abstracts away from that meaning to the extent practicable for the intended application. Of course, the requirements of a human looking at some data are not the same as those of a machine using that same data in some automated workflow.
Linked Data standards (usable even within JSON via JSON-LD) have been developed to enable a "hyperlink"-focused approach to these concerns, so there's plenty of overlap with REST principles.
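For instance, a minimal JSON-LD sketch of that idea. The vocabulary terms come from schema.org; the item data and vendor URL are made up. The point is that the `@context` tells a generic client which plain JSON strings are actually hyperlinks:

```python
import json

# A plain JSON response plus a JSON-LD @context mapping property names
# to well-known vocabulary URIs, so a generic client can interpret them.
doc = {
    "@context": {
        "name": "https://schema.org/name",
        "price": "https://schema.org/price",
        # Declaring "seller" as @type: @id marks its value as an IRI,
        # i.e. a followable link rather than opaque string data.
        "seller": {"@id": "https://schema.org/seller", "@type": "@id"},
    },
    "name": "Widget",
    "price": 9.99,
    "seller": "https://api.example/vendors/42",
}

print(json.dumps(doc, indent=2))
```

A JSON-LD-unaware client still sees ordinary JSON; a JSON-LD-aware one can discover the link without any out-of-band documentation.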
This means that if "/api/item/search?filter=xxxxx" returns an array of ids, then you don't have to guess that item prices can be fetched via "/api/item/price?id=nnn"; instead, this URL (maybe in template form) is provided by either the "/api/item/search?filter=xxxxx" response or another call you have previously executed.
So, very similarly to how you click links on a website: you often have a priori knowledge of the semantics of the website, but you visit the settings page by clicking on the settings link, not by manually going to "website.example/settings".
PS: these links could be provided by a separate endpoint, but this structure is often useful for things like pagination: instead of manually incrementing offsets, each paginated reply can include links for next/previous pages and other relevant links. These need not be full URLs: relative URLs, just URL query fragments, or a JSON description of the query would also work (together with a template URL from somewhere else).
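As a sketch (the endpoint shapes are hypothetical, following the filter example above): each page carries the links the client needs, so pagination becomes "follow the link" rather than "compute the next offset":

```python
# Hypothetical paginated response in the style described above.
page = {
    "items": [{"id": 101, "href": "/api/item/101"},
              {"id": 102, "href": "/api/item/102"}],
    "links": {
        "self": "/api/item/search?filter=xxxxx&page=2",
        "next": "/api/item/search?filter=xxxxx&page=3",
        "prev": "/api/item/search?filter=xxxxx&page=1",
    },
}

def next_page_url(page):
    """Follow the server-provided link instead of incrementing an offset.

    Returns None on the last page, which is also how the client learns
    to stop: no link, no next page."""
    return page["links"].get("next")

print(next_page_url(page))  # -> /api/item/search?filter=xxxxx&page=3
```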
That said, the situation isn't entirely dire. With some standard linking conventions (e.g. RFC 8288 [1]), you can largely make an API that is pleasant to interact with in code as well. That the links/actions are enumerated is good for humans learning how to manipulate the resource. That they have persistent rels is good for telling a computer how to do the same.
Think <link rel="stylesheet" href="foo"> as an example. A human reading the HTML source will see that there's a related stylesheet available at "foo". But a program wanting to render HTML will check for the existence of links with rel="stylesheet".
1: https://datatracker.ietf.org/doc/html/rfc8288#section-2.1.2
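For illustration, a toy parser for the RFC 8288 `Link` header format. This is a sketch: it ignores quoted commas and any parameter other than `rel`, which a production parser must handle:

```python
def parse_link_header(value):
    """Map rel -> URL from a Link header like:
    </items?page=3>; rel="next", </items?page=1>; rel="prev"
    """
    links = {}
    for part in value.split(","):          # naive: breaks on quoted commas
        target, _, params = part.partition(";")
        url = target.strip().strip("<>")
        for param in params.split(";"):
            name, _, val = param.strip().partition("=")
            if name == "rel":
                links[val.strip('"')] = url
    return links

header = '</items?page=3>; rel="next", </items?page=1>; rel="prev"'
print(parse_link_header(header))
# -> {'next': '/items?page=3', 'prev': '/items?page=1'}
```

The client then navigates by rel name ("next", "prev"), never by hard-coded URL structure.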
You’d still need some generalized format for the client to get some form of input schema, and if you send the input schema for every action every time you retrieve a resource, things quickly become very data intensive.
No, it doesn't. It's a list of URLs, which doesn't even indicate what operations they accept. The only thing REST supports, according to this philosophy, is a user manually traipsing through the "API" by clicking on stuff.
Thanks for summing it up so succinctly; I thought maybe I was missing something.
Old web 1.0 applications, however, let you do a lot more than traipse through an API by clicking on stuff: you can send emails, update documents and so on, all through links and forms. The HTML, in this system, does "carry along" all the API information necessary for the client (the browser) to interact with the resources it is representing. It relies on the human user to select exactly which actions to take, which is why I say "HATEOAS is for Humans", and wrote an article on exactly that.
HTTP does not require a particular media type's representation to indicate the allowed methods. What made you think this should be the case? All popular media types I know either contain a partial indication (e.g. forms in HTML) or imply/specify a fallback to GET/HEAD as the default.
If a client is unsure, there's always the OPTIONS method and the Allow header.
They only read state but never modify it. So it misses the whole point of interaction with a web resource.
The HATEOAS model is designed for one thing: clients and servers that are developed independently of each other. This matches how the Web is designed (browsers and servers are not developed together), but does not match how most Web Apps are developed (there is almost universally a single entity controlling both the server and the client(s) for that app).
The point of HATEOAS-style REST APIs is that the client should decide what it can do based entirely on the responses it receives from the server and its understanding of the data model - not in any way on its own knowledge of what the server may be doing. This allows both to evolve separately, while still being able to communicate.
To contrast the two approaches, let's say we are building a clone of HN. In the more common web app approach, our client may work like this:
1. Send a POST to https://auth.hnclone.com/login with {"username": "user", "password": "pass"}; wait for a 200 OK and a cookie
2. Send a GET request to https://hnclone.com/threads?id=user (including the cookie) and display the results to the user
In the REST approach, our client and server could work like this:
1. Send a GET to https://hnclone.com; expect a body that contains {"links": [{"rel": "threads", "href": "https://hnclone.com/threads", "query_params": [{"name":"id", "role": "username"}]}]}
2. Send a GET request to the URL for rel="threads", populating the query param with role="username" with the stored username -> get a 401 UNAUTHORIZED response with a body like {"links": [{"rel": "auth_page", "href": "https://auth.hnclone.com"}]}
3. Send a GET request to the URL for auth_page and expect a response that contains {"links": [{"rel": "login", "href": "https://auth.hnclone.com/login"}]}
4. Send a POST request to the link with rel == "login" with a body of {"username": "user", "password": "pass"}, expecting a 200 OK response and a cookie
5. Re-send the request to the URL for rel="threads" with the extra cookies, and now get the threads you want to show
More complicated, but the client and server can now evolve more independently - they only have to agree on the meanings of the documents they exchange. The server could move its authentication to https://hnclone.com/api/auth and the client from 2 years ago would still work. The server could add (optional) support for OAuth without breaking old clients. You could even go further and define custom media formats and implement media format negotiation between client and server - the JSON format I described could be an explicit media format, and your API could evolve by adding new versions, relying on client Accept headers to decide which version to send.
Now, is this extra complexity (and slowness!) worthwhile for your use case? That depends greatly. For a great many apps, it's probably not. For some, it probably is. It has definitely proven extremely useful for building web browsers that can automatically talk to any HTTP server out there without needing custom Google/Facebook/Reddit/etc. plugins to work properly.
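A sketch of the rel-following client from the steps above, covering just the happy path (step 1 plus the final threads fetch). `fetch()` stands in for real HTTP, and all URLs, rels, and document shapes are the hypothetical ones from the example:

```python
# Canned responses mirroring the documents in the example above.
FAKE_SERVER = {
    "https://hnclone.com": {"links": [
        {"rel": "threads",
         "href": "https://hnclone.com/threads",
         "query_params": [{"name": "id", "role": "username"}]}]},
    "https://hnclone.com/threads?id=user": {"threads": ["t1", "t2"]},
}

def fetch(url):
    """Stand-in for an HTTP GET."""
    return FAKE_SERVER[url]

def find_link(doc, rel):
    """The client hard-codes rels and document structure - never URLs."""
    return next(l for l in doc["links"] if l["rel"] == rel)

root = fetch("https://hnclone.com")   # the only URL the client knows
link = find_link(root, "threads")
param = next(p["name"] for p in link["query_params"]
             if p["role"] == "username")
threads = fetch(f"{link['href']}?{param}=user")
print(threads)  # -> {'threads': ['t1', 't2']}
```

If the server renames /threads to /api/v2/threads, only FAKE_SERVER's contents change; the client code is untouched because it discovered the URL at runtime.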
The kinds of changes people are interested in are usually related to changing URLs, and those don't seem very valuable to me for the amount of complexity this flow adds. And ironically enough, the "URL changing" part is already covered fine in the old system, with HTTP redirect messages.
There's nothing wrong in this article, in the sense that everything's correct and right. But it is an old person's battle (figuratively, no offense to the author intended, I'm that old person sometimes).
It would be like your grandparents correcting today's authors on their grammar. You may be right historically and normatively, but everyone does and says differently and, in real life, usage prevails over norms.
Same goes for REST.
That goes for technology, words, political concepts, music...
We are instructed to think that progress is a straight line. Without any surprise, it is not. But I like to picture it with a direction nonetheless... upwards. So as it's not a straight line, I see it as a pulse, with its ups and downs, the ups usually being mild and the lows potentially dramatically sharp.
But still somehow going up.
---
On the current and future challenges: I am mortified about what we did and are doing to the planet and horrified to witness the political tensions in Europe (not only the war, because Europe has seen Crimea, the Balkans, and war never stopped after 1945, but now, there are severe risks of expansion). Also, I do not believe in tech as a means to solve our problems, never did, never will.
So maybe my tiny optimistic pulse trending upwards is too candid and naive but at the moment, maybe blind with despair, I hold on to this idea.
Not only that: people tend to compare, in word and concept, from the newer song to the older song! "Hey wow, this song from 19xx sounds just like this new song I love"
No you fool, your new song sounds like the previous one :P. Causality matters!
Now the new generation of devs is re-learning the same lesson, all over again. It's all fun when you are in your 20s and time and energy seem unlimited. Then you get a life, and you really start valuing a simple, working design, where everything seems under control and your codebase is not on fire all the time.
This is why we got into distributed systems as a last resort. It was a difficult solution for a difficult problem. For those who don't know, "distributed systems" is what you call "microservices" now, and 97% of companies don't need to go anywhere near them.
At least to me it feels like it.
And yet... right or wrong, something substantial is lost when "literally" fades into a figurative intensifier.
Same goes for REST.
There's little immediate problem in the misappropriation of the term. The problem is how few engineers spend anything more than literally zero time actually thinking about the potential merits of the original idea (and quite possibly less). Not even talking about the people who spend time thinking about it and reject it (whatever points of contention I might raise on that front), just the sheer volume of devs who sleepwalk through API engagement or design unaware of the considerations. That's the casualty to be concerned about.
But personally, I think I have never seen any actual REST that wasn't just browser-oriented HTML. So using the word for the API pattern is quite ok.
Yes, but wisdom is to avoid conflating "lost" with "degraded".
Us older guys have to do the opposite. As we see these things come and go, we get jaded and start to dismiss new techniques as fads. I shudder to think of how much wasted effort I put into "Enterprise JavaBeans" and "XSL Transforms". Years later, I took a look at React when it first launched, dismissed it as crap because of the mess it made in the DOM back then, and then ignored it. It took me a few years until I realized I was wrong and it was going to stick around.
Trends and fads can look pretty similar in the early days, and trends often look bad early on too as they often take longer to mature than a fad. The trick is in spotting things that appear to be a bad fad, but will eventually be a good trend.
Like you, I saw the wranglings over this meaning. And today I look at the documentation, see HTTP+JSON RPC, and still FEEL "that's not REST" - but whatever.
The issue is getting everyone to agree with your new word or to even recognize the problems of semantics.
Many people also deliberately misuse the existing terms to get advantages. For example, DevOps in your title gives you a higher pay despite often being materially the same as a pure operations role or sys admin.
Does it delight you?
A day may come when the courage of old web developers fails,
when we forsake our old, RESTful network architecture
and break all bonds of HATEOAS
but it is not this day
javascript fatigue:
longing for a hypertext
already in hand
// haiku FTA

Just call it ad-hoc RPC with JSON over HTTP.
The REST interface should be self-describing, but that can be done in JSON.
If you go to Roy Fielding's post... there is a comment where someone asks for clarification, and he responds:
> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction.
> Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.
So, to me, a proper format is something like...
id: 1234
url: http://.../1234
name: foo
department: http://.../department/5555
projects: http://.../projects/?user_id=1234
This is hypertext in the sense that I can jump around (even in a browser that renders the urls clickable) to other resources related to this resource.
No one seems to listen to the JSON inventor, who said he regrets the misnomer and that no successor should reuse the same naming parts, since JSON is neither dependent on nor compatible with JavaScript, nor is it only useful for storing and/or describing objects. (I am paraphrasing his reasoning on both points from memory.)
OpenAPI 3 solved that problem for me, transforming JSON-RPC into a documented API.
I'm using HTML as an example to demonstrate the uniform interface constraint of REST, and to show how this particular JSON (and, I claim with confidence, most extant JSON APIs) does not have a uniform interface. Which is fine: the uniform interface hasn't turned out to be as useful when it is consumed by code rather than humans.
There are good examples of successful non-HTML hypermedias. One of my favorites is hyperview, a mobile hypermedia:
Hyperview isn't that interesting because it's a non-standard proprietary technology that isn't really "on the web". So you either have something like that, an actual full-featured HTML browser, or have something consuming a fully defined JSON API. It doesn't feel like there's anything interesting about non-HTML user agents on the web. HTML automatically makes them all irrelevant.
a) just denormalizing the field contents if you know what the client needs them for
b) supporting GraphQL if you want to support a general-purpose query endpoint
It mandates discoverability of resources, but no sane client will go around and request random server-provided urls to discover what is available.
On the other hand, it does not provide a means to describe the semantics of the resource's properties, nor their data types or structure. So the client must have knowledge of the resource's structure beforehand.
Under HATEOAS the client would need to associate the knowledge of resource structure with a particular resource received. A promising identifier for this association would be the resource collection path, i.e. the URL.
If the client needs to know the URLs, why have them in the response?
Other problems include creating a new resource: how is the client supposed to know the structure of the to-be-created resource if none exists yet? The client has nothing to request in order to discover the resource's structure and associations.
Also, hypertext does not map well to JSON. In JSON you cannot differentiate between data and metadata (i.e. links to other resources). To accommodate both, you need to wrap or nest the real data to make side-room for the metadata. Then you get ugly, hard-to-work-with JSON responses. It maps pretty well to XML (i.e. metadata attributes or a metadata namespace), but understandably nobody wants to work with XML.
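For what it's worth, the wrapping described here is exactly what conventions like HAL standardize: a reserved `_links` member carves out the side-room for metadata. The order payload below is made up:

```python
# HAL-style resource: "_links" is the reserved metadata key, everything
# else is data. The client must know this convention to tell them apart.
order = {
    "_links": {
        "self":  {"href": "/orders/523"},
        "items": {"href": "/orders/523/items"},
    },
    "total": 30.00,
    "currency": "USD",
}

# Splitting data from links requires knowing the wrapping convention:
links = order.pop("_links")
print(sorted(order))           # -> ['currency', 'total']
print(links["items"]["href"])  # -> /orders/523/items
```

Note that nothing in JSON itself marks `_links` as special; the convention lives entirely in out-of-band agreement between client and server, which is the commenter's point.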
And the list goes on and on.
- the API is then tied to a single "transport" protocol (HTTP is an application-layer protocol in ISO/OSI terms, but if you are not building a web browser, your application should reside one layer upwards)
- it crosses ISO/OSI layer boundaries (exposes URLs in data, uses status codes for application error reporting, uses HTTP headers for pagination, etc.)

I think the second issue is vastly underrated. Protocols that cross layer boundaries are a source of trouble and require tons of workarounds. Do you remember how FTP does not work well with NATs? It's because it exposes IPs and ports - transport-layer concepts - at the application layer. SIP? The same thing.
With true REST you can build only HTTP APIs, no websockets, no CLIs, no native libraries.
That's, uh, the point. Without that, it's not "the web." (And yes, properly-structured APIs are part of "the web" — e.g. the first page of your paginated resource can be on one server, while successive pages are on some other archival server.) This is the whole reason for doing HATEOAS: there's no longer a static assumption that you're working against some specific server with a specific schema structure in place. Rather, your API client is surfing the server, just like you're surfing HN right now.
> no websockets
Correct in a nominal sense, but not in any sense 99% of developers would care about. Instead of RPC over websockets, do REST. Instead of subscriptions over websockets, do REST against a subscription resource for the control-plane, and then GET a Server-Sent Events stream for the data-plane. Perfectly HATEOAS compliant — nobody ever said resource-representations have to be finite.
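As a sketch of the data-plane half: Server-Sent Events is just a line-oriented text format, so an unbounded stream still works as one resource representation that a client consumes incrementally. This toy parser handles only the `data:` field; real SSE also has `event:`, `id:`, `retry:` and comment lines:

```python
def parse_sse(stream_text):
    """Yield the data payload of each event in an SSE stream.

    Events are separated by blank lines; each "data:" line contributes
    one line to the event's payload."""
    data_lines = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            yield "\n".join(data_lines)
            data_lines = []

# Two hypothetical events from a subscription's data-plane stream:
stream = 'data: {"price": 10}\n\ndata: {"price": 11}\n\n'
print(list(parse_sse(stream)))
# -> ['{"price": 10}', '{"price": 11}']
```

In a real client the text would arrive incrementally over a long-lived GET with `Accept: text/event-stream`, but the framing logic is the same.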
And realistically, how often are you going to change your "transport"? And if you added an abstraction layer, would that actually make it any easier? Stuff like SOAP ends up as the inner-platform effect, where you reimplement all of HTTP on top of HTTP, and actually implementing a new SOAP transport is just as hard as porting your protocol to use a second "transport" if you actually needed to (which you probably won't).
Yup, wouldn't it be nice to have some sort of standardized framework to describe those resources? You could perhaps call it a Resource Description Framework, or RDF if you like acronyms.
"Random" isn't what's supposed to happen. You hit a top level endpoint, and then at that point other endpoints are made manifest, and then the UA/client and the user decide together what the next relevant endpoint is.
And this is what happens all the time with the most common client (the browser). Seems to have worked more or less for 30 years.
As for what semantics the UA/client is capable of exploring and providing assistance with: who knows what's possible with additional heuristics + machine learning techniques?
> Also hypertext does not map well to JSON... It maps pretty good to XML... understandably nobody wants to work with XML.
I don't understand that, actually. Markup is underrated for data exchange potential these days. JSON is frequently (though not always) somewhat lighter on bandwidth and often briefer to type for the same underlying reasons, but beyond that there's no inherent advantage. It just became the easiest serialization/deserialization story, half from riding the way that JS won, and half from what it didn't bother to try doing (a lot of the meta/semantics), which gave devs permission to stop thinking about those things.
That's not how APIs are used. APIs consume and provide data. Raw data is unsuitable to be presented to the user. That's why HTML has so many formatting options. Formatting information is completely missing from APIs.
> Seems to have worked more or less for 30 years.
Yes, worked for good old web. In this sense, true REST is nothing new and even seems backwards. If we try to do REST while keeping data and presentation separate, we will come to something very similar to XML for data + XSLT for formatting. Or XForms. Old ideas all over again.
> I don't understand that, actually. Markup is underrated for data exchange potential these days.
XML/markup does not map well to basic data types in current programming languages. These work with strings, ints, floats and arrays/dictionaries thereof. Not Nodes, Elements and attributes of unknown/variant data types.
Right. Any technology that works this way is basically a "browser". You could create a new markup language or data format and a new user agent to consume it. But you'd be re-inventing the wheel.
There may be some use case for that, as opposed to software clients consuming a well-defined API, but I haven't seen it yet. The HTML web browser basically deprecated all other browser-like Internet technologies when it came out (remember Gopher?) and is even replacing actual desktop software clients. There's no market for alternative hypermedia clients, so why are we giving this so much thought?
Compare and contrast: what SQL admin GUI clients do to discover the DBMS schema. They essentially spider it.
> Under HATEOAS the client would need to associate the knowledge of resource structure with a particular resource received. A promising identifier for this association would be the resource collection path, i.e. the URL. If the client needs to know the URLs, why have them in the response?
The client does not need to know the URL; the client needs to know how to get to the URL.
Have you ever written a web scraper, for a site that doesn't really enjoy being scraped, and so uses stuff like CSRF protection "synchronizer tokens"? To do/get anything on such a site, you can't just directly go to the thing; you have to start where a real browser would start, and click links / submit forms like a browser would do, to get there.
HATEOAS is the idea that APIs should also work that way: "clicking links" and "submitting forms" from some well-known root resource to get to the API you know about.
As with a scraper, it's the "link text" or the "form field names" — the delivered API — that you're assuming to be stable; not the URL structure.
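That link-following style can be sketched in a few lines. This is a toy, not a real HTTP client: the in-memory `FAKE_SERVER` dict and all its resources are invented for illustration. The point is that the client hardcodes only the root URL and the relation names; every other URL is discovered from responses, so the server is free to restructure its URL space.

```python
# Hypothetical resources standing in for HTTP GETs against a real service.
FAKE_SERVER = {
    "/": {"links": {"accounts": "/a/12345"}},
    "/a/12345": {"balance": 100.0,
                 "links": {"deposits": "/a/12345/deposits"}},
    "/a/12345/deposits": {"items": [25.0, 75.0], "links": {}},
}

def get(url):
    """Pretend HTTP GET: look the document up in the fake server."""
    return FAKE_SERVER[url]

def follow(doc, rel):
    """Move to whatever resource the given link relation points at."""
    return get(doc["links"][rel])

root = get("/")                      # the only URL the client knows a priori
account = follow(root, "accounts")   # this URL came from the response
deposits = follow(account, "deposits")
print(account["balance"])   # 100.0
print(deposits["items"])    # [25.0, 75.0]
```

If the server later moves accounts to `/accounts/v2/12345`, only the link values in its responses change; this client keeps working untouched.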
> Other problems include creating new resource - how the client is supposed to know the structure of to-be created resource, if there is none yet? The client has nothing to request to discover the resource structure and associations.
What do you think HTML forms are, if not descriptions of resource templates? When you GET /foos, why do you think the browser result is a form? It's a description of how to make a foo (and, through the action attribute, a place to send it to get it made.)
Alternately, compare/contrast what Kubernetes does — exposes its resource schemas as documents, living in the same resource-hierarchy as the documents themselves live.
> Also hypertext does not map well to JSON. In JSON you cannot differentiate between data and metadata (i.e. links to other resources)
It's right in the name: in HATEOAS, hypertext is the engine of application state. Hypertext as in, say, HTML. JSON is not hypertext, because it's not text — it's not a markup language like SGML/XML/HTML are.
You realize that HTML is machine-readable, right? That you can respond to an XHR with HTML, and then parse it using the browser (or API client's) HTML parsing capabilities, just like you can receive and parse JSON? That there's nothing stopping you from using a <table> to send tabular data, etc.? And that if you do so, then debugging your API becomes as simple as just visiting the same link the API client is visiting, and seeing the data rendered as HTML in your browser?
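To make that concrete, here is a minimal sketch (the markup is invented for illustration) showing that an HTML `<table>` response is machine-readable with nothing but the standard-library parser — the same document a developer can open in a browser to debug:

```python
from html.parser import HTMLParser

# Hypothetical API response: tabular data sent as an HTML table.
HTML = """<table>
<tr><td>deposit</td><td>25.00</td></tr>
<tr><td>deposit</td><td>75.00</td></tr>
</table>"""

class TableParser(HTMLParser):
    """Collect each <tr> as a list of its <td> cell texts."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], None, False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True
    def handle_endtag(self, tag):
        if tag == "tr":
            self.rows.append(self._row)
        elif tag == "td":
            self._in_td = False
    def handle_data(self, data):
        if self._in_td:          # ignore whitespace between rows
            self._row.append(data)

p = TableParser()
p.feed(HTML)
print(p.rows)  # [['deposit', '25.00'], ['deposit', '75.00']]
```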
>Compare and contrast: what SQL admin GUI clients do to discover the DBMS schema. They essentially spider it.
Not really — there is information_schema, where they get everything they need to know about structure separately from the data.
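A sketch of that point, using SQLite's catalog (its `sqlite_master` table and `table_info` pragma play the role information_schema plays in other DBMSs; the table here is invented for illustration): the client reads structure from dedicated metadata, separately from any row data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")

# Discover tables from the catalog, not by spidering the data.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
# Discover columns the same way (row format: cid, name, type, ...).
columns = [r[1] for r in conn.execute("PRAGMA table_info(accounts)")]

print(tables)   # ['accounts']
print(columns)  # ['id', 'balance']
```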
> Have you ever written a web scraper, for a site that doesn't really enjoy being scraped, and so uses stuff like CSRF protection "synchronizer tokens"?
Yes. Awful. Do we want all APIs to be like that? Why?
> It's right in the name: in HATEOAS, hypertext is the engine of application state. Hypertext as in, say, HTML.
Fully agree on this one. Just that HTML is unsuitable for machine-to-machine communication, so it is not used for APIs.
I've been in this industry two decades, but it's only in the past 5 years that I've noticed entire teams of absolute morons entering the field and being given six figure jobs without understanding what their job even is, much less how to do it properly. (And by "properly" I mean "know what a REST API is")
The industry is awash in quants who took a four week course in Python and are now called "data scientists"; backend software engineers who don't know what the fuck a 500 error is; senior developers who graduated a year ago; engineering managers who "hire DevOps"; product managers and DMs who don't know how to use Jira or run a stand-up. It's like all that's left is people who think they have "impostor syndrome" when they actually are impostors of professionals.
I tried to find a new job recently, and I couldn't find a single org with at least 50% of the staff properly understanding how to do their jobs. Of course half of them were bullshit VC-funded money-bleeding terrible businesses, and the other half were fat cash cows that through their industry dominance became lazy and stupid. Maybe we just hit peak tech, and all the good teams were formed by boring companies long ago and don't have new positions open. Or maybe all the good people cashed out and retired.
Unfortunately the name REST was too awesome sounding and short - so we've never had a fork with a different name that has proclaimed itself as something more accurate.
I don't think this is awful, FYI - it's sort of the evolution of tech... the OG REST wouldn't have ever gotten popular due to how stringent it was, and I can use "that it's RESTful enough" to reject bad code practices without anyone being able to call me out on it, because nobody actually remembers what REST was supposed to be.
I'd also add - what precisely is self-descriptive HTML? All descriptions need reference points and nothing will be universally comprehensible - let alone to a machine... expecting a machine to understand what "Insert tendies for stonks." on an endpoint is unreasonable.
I’ve been writing web services for over a decade and this just seems like a cute idea that is actually almost never at all useful in the real world.
Turns out, a lot of places don't even want self-discovery. We shut off our openapi.json in prod because a) security, b) we don't want bots messing around, c) Hyrum's law: as soon as you expose an API endpoint, folks try to build against it and grumble if it changes/goes away.
Wikipedia (along with any true wiki) is, and likely forever will be, the one true REST/HATEOAS application.
And that even has omitted some fields you’d get back querying manually.
I had a look at the example response to see whether that is true. I give you that there are hyperlinks existing in resources of media type `application/vnd.github+json`, but there is no uniform interface to discover a link. Fielding would disapprove. It's an indication of bad design that a client must hardcode e.g. `labels_url` instead of having a generic reusable way to access a link that works across a multitude of JSON derived media types.
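The complaint above can be illustrated with a sketch. The payload below is a hypothetical, abbreviated stand-in loosely modeled on GitHub's shape (real GitHub responses do use keys like `labels_url`, but the URLs here are invented). Without a uniform interface, the best a generic client can do is guess by naming convention:

```python
# Hypothetical, abbreviated resource in the style of
# application/vnd.github+json; URLs are made up for illustration.
issue = {
    "title": "Example issue",
    "labels_url": "https://api.example.test/issues/1/labels{/name}",
    "comments_url": "https://api.example.test/issues/1/comments",
}

def guess_links(resource):
    """Heuristic link discovery: keys ending in '_url' are probably links.
    A media-type-level rule (or hypertext anchors) would make this exact
    and reusable across APIs; this convention is merely a guess."""
    return {k[:-4]: v for k, v in resource.items() if k.endswith("_url")}

print(sorted(guess_links(issue)))  # ['comments', 'labels']
```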
I'm super cognizant this entire discussion hinges upon semantics, nuance, etc but the Okta API isn't terrible about that
https://developer.okta.com/docs/reference/core-okta-api/#hyp...
I haven't personally tried `curl https://${yourOktaDomain}/api/v1` to see what the bootstrapping process looks like, but I can attest they do return _links in a bunch of their resource responses, and I believe their own SDK uses them, so they're not just lip service
Not even remotely. All portions of REST were jettisoned, and the nice branding got slapped on familiar RPC.
Fielding's REST was never about HTTP-specific concepts: not the verbs, not the status codes, and not cute-looking URLs.
- stateless
- cacheable
- follow client-server architecture
- support resource identification in requests
- loosely support resource manipulation through representation (with some caveats)
I don't see how it's RPC by any but the most broad interpretation (a function executes at another address space).
I read this like you're convincing yourself vs the reader.
The history did its job: it preserved the most useful features of the original idea (expressing RPCs as URLs in GET and POST requests) and has dropped the unnecessarily complicated bits.
What this article is about is a pedantic terminology battle of whether to call the current practice REST or not.
you're describing what a browser does.
> has dropped the unnecessarily complicated bits.
you're viewing this content from a browser.
In other words, HTML is just the standard format that the browser uses. But if you use frameworks or define your own application specific formats, you can send less information over the wire and have it properly decoded and displayed by your client-side javascript.
After doing this for 15+ years, I tell my junior developers to take it easy on the "proper way" of doing things. It will change, people will argue, and money talks.
Writing software is such a _human_ thing. It has so much more in common with writing than it does with other kinds of engineering.
Most of what we're doing has to do with how to lay things out so that it's clear and easy for other humans (including the humans that write code) to understand, interact with, and modify. Any time you're dealing with humans brains, there's going to be a lot of complex subtlety in terms of what the "best" approach is.
But because it's software "engineering", people think we need to have fairly hard-and-fast rules about the right way to do everything.
<html>
<body>
<div>Account number: 12345</div>
<div>Balance: $100.00 USD</div>
<div>Links:
<a href="/accounts/12345/deposits">deposits</a>
<a href="/accounts/12345/withdrawals">withdrawals</a>
<a href="/accounts/12345/transfers">transfers</a>
<a href="/accounts/12345/close-requests">close-requests</a>
</div>
</body>
</html>
I can navigate to that page and, because I know English, can follow links to my withdrawals and deposits. A computer can't. The client program needs to have an understanding of withdrawal and deposit in order to function. The only way to do that involves coupling the client to the server. REST never denied that coupling; it defined that coupling at the content-type level.
* Localization
* Accessibility
* Different clients
* Different presentations for different contexts.
The whole premise of RESTful hyper-media driven APIs described in this article is predicated on "The One" client talking to the server. Our modern world is not this.
A hobbyist / small company doesn't need to have RESTful APIs. The whole point is to design them so that they play well with others, and when you get to that point, you (or more likely the people who depend on you) will wish you had.
As soon as you have a third party using your API, things get another layer of complexity: do you charge them? Do you rate limit them? If you have several partners, how do you authenticate them? Etc.
API gateways solve some of that, and sometimes you don't care, but generally it's not as simple as giving your internal API to people and telling them to go wild.
It's a measure of decoupling I think. If your client started out with no knowledge of the server and still managed to work, then it will still work even when the server is upgraded, restructured, etc etc.
Of course, having every client just start at the root URL and then navigate its way to what it needs by effectively spidering the server just ain't practical in any meaningful sense. But in small ways and critical places it is still possible to follow this pattern, and to the extent you do, in return you get a level of decoupling that is a useful property to have.
The P in API stands for programming. Specifically, it is a programmatic call.
REST says you get hyperlinks, which are effectively documentation in the response.
Which is nice.
But a program isn’t a person; it doesn’t need docs in the response.
And URL links are not sufficient documentation to use the interface.
So I don’t get the REST use case outside of some university AI project where your program might try to “make sense” of the API.
And therefore I have never tried to use REST and I have never seen anyone else either at anywhere I have worked.
It is a nonsense concept to me.
REST API is a contradiction in terms.
REST is a post-hoc description of how the web worked (at the time it was made).
You had web pages with hypertext content, and that included forms. The forms had fields with names and an "action" destination.
The client (the browser) knows nothing about the server or even its APIs. It just knows how to send an HTTP request with parameters. In the case of forms, those parameters were encoded in the body of a POST request. That's it.
There was no "client side code" that talked to the server.
The "client side" is literally just the browser. Talking to the server is done by the user clicking links and filling forms.
I don't think the article is particularly encouraging you to program this way in 2022. It's just telling you that if you are not programming in this way, do not call what you are doing "REST", because it is not.
Aka somebody pulled some ancient obscure definition out of nowhere that just means _everybody_ is wrong.
whether it is a local function or a remote function, both caller and callee need to agree on the parameters (input), and returns (output).
I send you X. You send me back Y. That's it - this is the contract we both agree to.
OP is saying - the caller should NEVER do anything with Y other than display it on screen, for it to be called REST. Well - why even display it, why not just discard it ? Calling print(Y) is as good as calling businesslogic(Y). Whatever further logic a human plans to do after print(Y), a machine can do the same.
In other words, REST is just step 1 of returning data from a remote function. The moment you code any additional logic on the returned data (which is 99% of use cases), it's not REST anymore ? Sounds like an extremely limited definition /use case of REST.
A standard set of methods—with agreed upon semantics—is a huge architectural advantage over arbitrary “kingdom of nouns” RPC.
I’d argue that by the time your API is consistently using URLs for all resources and HTTP verbs correctly for all methods to manipulate those resources, you’ve achieved tremendous gains over an RPC model even without HATEOAS.
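The contrast the parent draws can be sketched abstractly. The endpoint names and paths below are hypothetical, and neither function performs real HTTP — they just build request tuples to show what a client has to know in each style:

```python
def rpc_call(endpoint, payload):
    """RPC style: every operation is a distinct name the client must learn."""
    return ("POST", f"/api/{endpoint}", payload)

def rest_call(method, resource, payload=None):
    """Uniform interface: a fixed verb set with agreed semantics;
    only the resource identifier varies."""
    return (method, resource, payload)

# Kingdom-of-nouns RPC: getAccount, closeAccount, listDeposits, ...
print(rpc_call("getAccount", {"id": 12345}))
print(rpc_call("closeAccount", {"id": 12345}))

# Resource-oriented: the same four-ish verbs work on every resource.
print(rest_call("GET", "/accounts/12345"))
print(rest_call("DELETE", "/accounts/12345"))
```

Generic middleware (caching GETs, retrying idempotent methods, logging) can be written once against the uniform interface; with per-operation RPC names, each endpoint's semantics must be special-cased.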
- The Kingdom of Nouns comparison is forced. Yegge's complaint is that nouns own the verbs, meaning Java doesn't have first-class functions. The closest remote analog might be promise-pipelining which doesn't have much headway other than a single implementation for Cap-n-proto.
- RPC APIs are more consistent than an HTTP API. With HTTP, a unique method requires both a path and method, and if you're really unlucky, the method is polymorphic based on the contents of the request body.
- HTTP API requests can transport in different ways: request body, query parameters, HTTP path, and if you're really unlucky, headers.
Tremendous gains doesn't match my experience. The first step in using a HTTP API is to wrap it with an OpenAPI generator to build a consistent way to invoke the API, reinventing RPC client stubs in the process.
Having worked in an organization where people were very familiar with the academic definition of REST, the biggest benefit of being a backend developer was that when client-side folks depended on nonRESTful behavior, we had some authority to back that claim. It gave us leeway in making some optimizations we couldn't have made otherwise, and we got to stick to RFCs to resolve many disputes rather than use organizational power to force someone to break compliance to standards. I suppose it meant that we were often free to bikeshed other aspects of design instead.
Edit: See how this post has zoomed to hundreds of comments in just minutes by people arguing the 'one true REST'. The situation is insufferable.
As long as there’s an OpenAPI spec, sane API routes, and it uses a format that’s easily consumable with a given ecosystem (so pretty much always JSON anyway), and it doesn’t do anything dumb like return 200OK { error: true } then I’m happy with it. Too much bikeshedding.
Bonus points if the API has a client library also.
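A sketch of why `200 OK { error: true }` hurts (the response shapes here are invented for illustration): generic tooling and client code branch on the status code, so an in-band error flag slips past every layer that doesn't know this particular API.

```python
def handle(status, body):
    """Generic, API-agnostic handling: trust the status code."""
    if status >= 400:
        raise RuntimeError(f"request failed: {status}")
    return body

# A well-behaved API: the transport layer already says it failed.
try:
    handle(404, {"detail": "no such account"})
except RuntimeError as e:
    print(e)  # request failed: 404

# The anti-pattern: 200 + {"error": true} sails straight through, and
# every caller needs a bespoke, easily-forgotten check like this one.
body = handle(200, {"error": True, "detail": "no such account"})
assert body.get("error") is True  # invisible to status-based monitoring
```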
Is there a reason that I'm just not aware of? a throwback to SOAP?
Especially if you have metrics/alerts that are tracking status codes.
I mentioned that it was frustrating to work with APIs like that since a bunch of tooling relies on status codes, including the browser network tab but they just told me they didn't care. They also made a bunch of other questionable design decisions before leaving, so now I just take it for was it is, a red flag.
I fully understand the movement of complexity, historically belonging on the server, shifted just as much on to the client, but that's not a discussion of being restful - it's not restful to have the client determine application state so to speak - the server does that for you.
Catering to multiple clients via an API is of tremendous value, but you still moved the complexity onto the client - you cannot argue against that.
I find it fascinating to have at least blobs of state/representation being served without having to fiddle with inner workings of an API and simply rely on the API to give me what I must show the user.
I am in the HATEOAS camp; it sits well with me. But that's just me.
Related, it seems like "API" is quickly becoming synonymous with "web service API". In my experience, the thing that goes between two classes is almost always referred to as an "interface" only.
In interviews, I would ask, "What makes an API RESTful?" and wait gleefully for them to stutter and stumble towards an answer.
I would accept any kind of answer, and it was only really a mark against you if you couldn't dredge up something about "resources" or "HTTP verbs," or even just express some kind of awareness that there were other kinds of API.
It wasn't unusual for someone to just have no clue.
Maybe that makes me a grammar nazi or a*, and maybe adding that was just a way for kids with one internship under their belt to pad their experience, but I always felt like you should know the words on your resume.
I guess now that I know about this "hypermedia" requirement, I should be a little more forgiving?
The best I can come up with (and this is me trying to like it) is that I guess the API is somewhat self documenting?
I see benefits to resource orientation and statelessness, but why do people get so upset about these APIs not following HATEOAS? Is it just a form of pedantry, that it's not really a REST API, it's a sparkling JSON-RPC?
Everyone wants to be RESTful. RESTful is chill. It's resting - good programmers are lazy! But RESTful is resting while using an acronym, which is technical and sophisticated. To be RESTful is to be one of the smart lazy ones.
Now if you're one of the few who cares what your acronyms mean, you look it up and ... "representational state transfer". How do you transfer state without representing it? I guess everything that transfers state is RESTful. And everything is state, so everything that transfers is RESTful. So every API is RESTful! Great, I guess if we make an API we're one of those cool smart and lazy people. And let's make sure to call it RESTful so that everyone knows how cool, smart, and lazy-yet-technical we are.
Roy Fielding made a meaningless but cool-sounding acronym popular and has reaped the predictable consequences.
It must be said that the confidence with which you present your inaccurate assertions is only going to make others as confused as you are.
1. A term to indicate APIs that use HTTP as the transport protocol, and typically JSON as representation.
2. (archaic) A term coined in a paper from 2000, indicating a model that describes how the internet works.
Same goes for OOP. /s
It turns out that using JSON is easy, has good support and is relatively compact on the wire.
It also turns out that using HTTP verbs and transferring the entire state of an object makes development easier.
And equally, for 99% of use cases, it turns out that HATEOAS is nice but not necessary.
Is the author correct in that APIs are inaccurately calling themselves RESTful? Yes, yes they are very correct. Congratulations. Here's a trophy for being correct. Now let's focus on what matters, and that is building software that works and works well, REST or not.
Please dump the pedantry and focus on practicality.
HTML5 was only just released in 2014, and took many years to be fully supported by major browsers.
At the time of JSON vs. HTML, HTML parsing was not yet in a standard place (XML API implementations were extremely inconsistent).
Fast forward to 2022, fetching <div> and <a> is an elegant pattern, and probably the way to go for self documenting API in the future!
Second, a JSON response is simply that: a bunch of data in JSON format. It is not JSON-RPC. JSON-RPC, unlike REST, is a protocol -- a way a client talks to a server -- and it usually looks like this:
--> {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
<-- {"jsonrpc": "2.0", "result": 19, "id": 1}
--> {"jsonrpc": "2.0", "method": "subtract", "params": [23, 42], "id": 2}
<-- {"jsonrpc": "2.0", "result": -19, "id": 2}
XML-RPC is the same thing, but done with XML instead of JSON.

> The entire networking model of the application moved over to the JSON RPC style.
No, it didn't. Well, actually, I don't know exactly what "networking model" means in this case. Pretty sure we're still using TCP/IP. But I think it means the data-layer protocol; this is, however, still wrong. We're actually using HTTP methods (also known as RESTful verbs[2]) along with a JSON data format. This is still quite screwed up in the grand scheme of things, but in different ways than the article argues.
[1] According to the man himself, in fact: https://restfulapi.net/
[2] A terribly confusing term
It seems that trying to build a hypermedia API in the spirit of hypermedia precludes someone actually designing an app with any particularity (themes, layouts, pages, certain requests/responses being valid/invalid, etc.), since it must be so general, the only client application that could qualify is the web browser itself - able to render any HTML document without actually knowing about the document semantically. Because having prior semantic knowledge of the API violates REST. Assuming that, are the 'apps' and APIs one would design then just hypermedia documents? Sounds like web 1.0. Not necessarily a bad thing, really, but REST seems too specific to be meaningful outside of websites, either that or I'm not imaginative enough.
I'm not sure what a 'hypermedia API' looks like that isn't just a web page or a functional equivalent thereof. It seems that either the WWW is the REST reference implementation or REST is simply the architecture of the WWW codified.
Perhaps modern use of the term REST is officially incorrect, but I think most REST clients really need to understand what they are receiving. How many rest clients merely show the result (as-is) to the human user? No, most clients are themselves programs which need to consume the response and make further decisions.
Imagine having to parse out the official REST HTML response to get the balance of the account. I hope the source is only in one language, because I would hate to have to build my own reverse-localization system just to make sense of the REST response I just consumed.
I was really trying to grasp why someone would build such a tall soapbox to complain about the incorrect use of a term, when the correct use would mean building arguably near-useless APIs. But then I took a look at what the htmx site is all about. It's about everything-as-html. "Note that when you are using htmx, on the server side you typically respond with HTML, not JSON. This keeps you firmly within the original web programming model, using Hypertext As The Engine Of Application State without even needing to really understand that concept."
Looking at the rest of their site, I'm finding it very difficult to see the value proposition over the current approach of JSON APIs.
> Imagine having to parse out the official REST HTML response to get the balance of the account.
It's not the HTML that matters, it could be any self describing format containing hypermedia controls, for example: https://jasonette.com or https://hyperview.org
Many web APIs are not REST, but still take at least a tiny bit of inspiration from it. Mostly the resource-based structure, not so much any of the other stuff like HATEOAS. In practice the self-describing nature simply isn't useful enough, so most people don't bother.
I'm not so sure. For one thing, it's of both theoretical and practical interest to trace the path of how a technical term comes to mean its opposite over time. If you're in the business of creating technical terms (everyone building technologies is), you might learn something by studying the REST story.
For one thing, Fielding's writing is not exactly approachable. REST is described in a PhD dissertation that is dense, packed with jargon and footnotes, and almost devoid of graphics or examples. His scarce later writings on REST were not much better.
Others who thought they understood Fielding, but who could write/speak better than him, came along with different ideas. Their ideas stuck and Fielding's didn't because he wrote like an academic and they did not.
The other thing that happened is that the technological ground shifted. To even begin to understand Fielding requires forgetting much or all of what one knows about modern web technologies. Part of that shift is that Fielding's rediscovery coincided with deep frustration over XML-RPC.
And I'd like to clarify that never did I mean that the knowledge and history fueling this so-called battle was meant for the trash.
Quite the opposite actually. As a self-described old person, I much appreciate the historical perspective and the subtleties and the changes the term has seen.
So a format intended for machine-to-machine communication is taking on huge cost adopting a full hypermedia format for its output. Ignoring the initial question of "What version of hypermedia" (i.e. are we doing full modern HTML? Can I embed JavaScript in this response and expect the client to interpret it?), that's just overkill when 99% of the time the client and the server both understand the format of the data and don't need the supporting infrastructure a full hypermedia REST implementation provides.
For the same reasons XML-RPC more-or-less lost the fight, HTML (as a not-very-lightweight subset of XML) was going to lose the fight.
That having been said, there are some great ideas from the REST approach that can make a lot of sense with a JSON payload (such as standardizing on URLs as the way foreign resources are accessed, so your client doesn't have to interpret different references to other resources in different ways). But using HTML-on-the-wire isn't generally one of them; it's a solution looking for a problem that brings a full flotilla of problems with it.
It is just not a practical architecture for API’s.
As to the rest (pun) of the article… I have no problem accepting that REST was originally proposed as a way to navigate the web using hypermedia responses. But I also have no problem in accepting that the term has since moved on to describe the API design principles which ultimately what makes it useful for the modern web.
Funnily enough, recent interest in SSR almost makes it a full circle.
A new name is needed for Classic REST. The use of HATEOAS is ugly because it has HATE in the name. Hypermedia Constraint REST is better. Stateless REST. Pure REST. Classic REST. Separation of Concerns REST. Rest 1.0. Hypermedia REST vs JSON REST
It's plumbing.
Some time in the future there will be another level of the software revolution in which a lot of those details can be left to the computers themselves to work out.
In fact, your plumbing analogy is more correct than you think. Most devs these days are connecting existing pieces together to make a system flow. We don't get paid to make the pieces, we get paid because we know how they should fit together.
What bugs me most in this is that ceremonial part. “Pass parameters in urlencoded form, except when GET, then put them into the query string”. Wtf? We are clearly doing RPC, not clerical work. Some APIs may look like documents, e.g. a stock market personal order book looks like a collection of signed legal documents, but others do not, e.g. a ticker stream, weather info, etc. Can we please stop stretching buzzwords and just settle on RPC, which could be abstracted away into `result = await resource.foo(bar, baz)`, instead of processing numerous structured outcomes, from network failures to operational errors which have no corresponding HTTP status codes, unless you stretch to the one that sounds similar.
> Today, when someone uses the term REST, they are nearly always discussing a JSON-based API using HTTP.
Yup, this is exactly what I do. So what? Maybe that's incorrect naming but most people only care about being able to easily use the API, not whether it is true REST. And not being 100% REST-spirit-compliant does not prevent from using tools like OpenAPI to document it.
The links in a JSON response are only applicable if you need a client to be able to explore from the response on, but in practice it's not necessary and you're better off saving the response overhead.
A high over design like an OpenAPI spec is better in all the cases I've seen. And of course there's alternatives like GraphQL or grpc depending on the use case. I'd still prefer REST for public APIs though.
https://www.google.com/search?q=top+words+that+have+changed+...
One I hate is "rougelike". I doesn't mean "like the game Rouge" (which might include Diablo and certainly includes Larn) Instead it now means any game with randomly generated levels but requires no other similarities to Rouge.
It makes sense as it allows implementation of seamless API across multiple servers and removes need to make consistent URL structure. But won't it add too much overhead then?
The basic idea is correct, the wording is not. Resources have identifiers, not routes. Hyperlinks have link relations, not labels.
> won't it add too much overhead
What overhead?
Needless network requests to understand what a REST API is supposed to be able to do, and then navigating through its description, until the actual call can finally be made.
And then even if it isn't pure REST, we have all those global warming contributions out of needless parsing.
Thankfully with the uptake of gRPC we are getting back to protocols where network performance is taken into consideration.
REST vs HATEOAS vs HTTP API vs Web Service
It's not serious and it doesn't matter that much. If I'm writing an API, I probably want to give people or systems access to my data and services. Missing links will be a mild inconvenience when compared to things like bad naming, inconsistent data structures, confusing error codes or domain complexity.
[1]: https://en.wikipedia.org/wiki/Representational_state_transfe...
>A proper hypermedia client that receives this response does not know what a bank account is, what a balance is, etc. It simply knows how to render a hypermedia, HTML.
No true client would know how to display json or not know how to display html. So if you have a browser plugin that pretty prints json, it's RESTful? Seems pretty specious.
Only if it's a hypermedia rendering (for example, it renders hypertext so the end-user can interact with the system).
People want and find useful what they do use in practice: the "opposite of REST" REST.
It's just that purists and/or Fielding overestimate the importance of the original REST.
This post is very pedantic. Being pedantic is not helpful.
The salesman says: the documentation is the API, it's RESTful!!!!!!!!
The developer hears: don't care about user documentation, it's RESTful!!!!!!!
The client gets: shitty documentation and a JSON API
Hill in the battle that I intend to die on: "crypto" means "cryptography" dammit!
POST /api/findAllImagesMatchingMyPredicate
Body: jsonObject
And I work with those ppl. Time to look around..
I think OOP and REST are neck-and-neck on this one.
Now REST means JSON (or other data format) over HTTP with respect to the HTTP methods
1. Naming things is hard. Sometimes a thing gets a name for a reason that made sense a long time ago, but things evolve, and the original name no longer makes sense.
This isn't necessarily a problem. Nobody cares that we no longer typeset words and print images on paper, physically cut them out, and then physically paste them onto a board, which we take a picture of and use the picture to run the phototypesetter (https://en.wikipedia.org/wiki/Phototypesetting).
Yes, I am old enough to have worked on a hybrid publishing system that used laser printers to create text that was physically copied and pasted in the manner described above. No, I don't argue that "cut" and "paste" are the wrong words to describe what happens in editing software.
So if we use the term "REST" today in a manner that doesn't agree with how the coiner of the term meant it when discussing the architecture of a distributed hypermedia system... Sure, why not, that's ok. We also don't use terms like "OOP" or "FP" precisely the way the terms were used when they were first coined, and for that matter, we probably don't all agree on exactly what they mean, but we agree enough that they're useful terms.
What else matters? Well...
2. Sometimes arguing about what the words used to mean is a proxy for arguing about the fact that what we consider good design has changed, and some people feel it may not be for the better.
That's always a valuable conversation to have. We sometimes do "throw out the baby with the bathwater," and drop ideas that had merit. We footgun ourselves for a while, and then somebody rediscovers the old idea.
The original OOP term was about hiding internal state, yes, and about co-locating state with the operations upon that state, yes, but it was also about message-passing, and for a long time we practiced OOP with method calling and not message-passing, and sure enough, we had to rediscover that idea in Erlang and then Elixir.
Forget whether the things we do with JSON should or shouldn't be called "REST-ful" because they aren't the same as what the word was intended to describe way back when. Good questions to ask are: "What did that original definition include that isn't present now?" "Would our designs be better if we behaved more like that original definition?" "What problems did the original definition address?" "Do we still have those problems, and if so, are they unsolved by our current practices?"
If there's something good that we've dropped, maybe we will get a lot of value out of figuring out what it is. And if we want to bring it back, it probably won't be by exactly replicating the original design, maybe we will develop entirely new ways to solve the old problems that match the technology we have today.
TL;DR
The question of whether an old term still applies or not can generate a lot of debate, but little of it is productive.
The question of whether an old design addressed problems that we no longer solve, but could solve if we recognized them and set about solving them with our current technology, is always interesting.