HATEOAS is a solution aimed at solving the "clients can't update" problem. This was never really an issue, since browsers (the closest thing we have to a HATEOAS client) always rolled out new upgrades, and it's even less of an issue now with today's auto-updating browsers.
It was also born in the web's early document-centric era, where every action required a full round-trip to the server and an entire page reload, which is why it will never be used to create engaging and interactive apps.
The ideal use case for HATEOAS would've been to power native mobile apps, since they can be difficult to update, but even there it's relatively non-existent, with most mobile app developers realizing it provides a worse end-user UX that takes more effort to achieve.
Basically, 10+ years on it's still only being used in academic circles, where the few who are trying to implement it are doing so to achieve full REST compliance, not because it's the best tool for the job or creates the best end-user UX.
I've written an earlier post on why HATEOAS and custom media types are often poor choices, generally requiring more effort and producing less valuable results: http://www.servicestack.net/mythz_blog/?p=665
"[Hyperlinks are] a solution aimed at solving the clients can't update problem. This was never really an issue since browsers (closest thing we have to a [Hyperlink-aware] client) always rolled out new upgrades, it's an even less of an issue now with todays auto-updating browsers. It was also born in the web's early document-centric era where every action requires a full loop-back to the server and entire page reload, which is why it will never be used to create engaging and interactive apps."
Which seems rather nonsensical. HATEOAS is just about about hyperlinks driving the application. Which is what the Web is, mostly, still, even though we have lots of crappy Flash restaurant sites.
There's two ways you might be correct: (a) REST API clients are complete silos and we no longer use hyperlinks to bridge across them or (b) JavaScript is more important than links.
(a) is sometimes true given the state of today's REST APIs, but not always - Google's APIs, Facebook's APIs, etc. all are fairly connected. Plus We still use Hyperlinks to bridge across "apps", of course, because of this proliferation of API silos still require URIs and even the browser as glue between them until more mature user agents evolve.
As for (B), REST always had Code-on-Demand (i.e. JavaScript) as an optional constraint alongside HATEOAS.
The point of HATEAOS is not about building individual apps, it's about building an ecosystem of integrated applications... i.e. the HTML web itself.
One area where I do agree with you: REST's constraints can and should be jettisoned if it makes sense for your use case. If you really want to make an app that's "engaging and interactive" and needs to be a silo by nature, then have at it.
Much as a web crawler crawls HTML, an HTTP crawler should be able to crawl an HTTP API. There are operational benefits here of being able to write monitoring and instrumentation tools that don't need to be updated and redeployed when your URLs change.
If this goes for a crawler, it's not so hard to see how it might be the same for an admin panel or CMS.
Sometimes when writing front-end JS apps, I'll develop a common abstract class for a family of resources, and the only unique information in each concrete subclass is its URL structure!
Sometimes your data really is that boring, and you could develop an entire UI with different resource types without ever having to encode expectations in client-side code.
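To make that concrete, here's a minimal sketch (TypeScript, with made-up names) of the kind of class family I mean, where the entry URL is the only per-resource knowledge:

    abstract class ResourceClient<T> {
      // The entry point is the only per-resource knowledge the client holds.
      constructor(protected readonly entryUrl: string) {}

      async list(): Promise<T[]> {
        const res = await fetch(this.entryUrl);
        return res.json();
      }

      async get(itemUrl: string): Promise<T> {
        // Item URLs come from links in the listing, not from ID templates.
        const res = await fetch(itemUrl);
        return res.json();
      }
    }

    class CommentClient extends ResourceClient<{ body: string }> {}
    class ArticleClient extends ResourceClient<{ title: string }> {}

    const comments = new CommentClient("/api/comments");

Everything else - rendering, pagination, error handling - can live in the shared base class and be driven by whatever links come back.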
And every time you click a hyperlink, you're requesting a full round-trip to the server and an entire page reload. This limits its ideal usages; e.g. it's better UX to use web sockets to live-update content than to get end-users to manually click hyperlinks. It's also not suitable for "mashups" (i.e. content/functionality from multiple sources) or stateful UIs (i.e. how most native desktop apps work).
And it's irrelevant in today's single-page apps, i.e. you're never going to be able to create a usable Google Maps or Google Docs-like application that complies with HATEOAS restrictions.
I'm not saying you can't build systems with it, just that it requires more effort and ultimately produces a worse UX - which is why it's an ignored set of constraints.
I don't understand your arguments at all. Here's a concrete way in which a single page app may make use of it, without affecting UX at all:
Instead of hardcoding a URI hierarchy to determine where to issue a request in order to take various actions on a message in a collection, look for link tags that specify them.
The immediate benefit is that it allows the server to signal: 1) what actions it supports in the current context - e.g. if you're logged in as a user with restricted credentials, it might not return a link for the "delete everything" action, and when the rules change only the server side (which already needs knowledge of the rules to validate requests) needs to change; and 2) endpoints can trivially change and the client-side application is automatically up to date.
How much harder is this? You need to output a few extra tags and attributes. On the client you need to replace some hardcoded strings with lookups based on an XPath expression. But at the same time there are many cases where you may be able to remove duplicated logic.
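As a rough sketch of that lookup (browser TypeScript; the rel name and the uri attribute mirror the article's example, everything else is made up):

    async function deleteActionUri(messageUrl: string): Promise<string | null> {
      const res = await fetch(messageUrl, { headers: { Accept: "application/xml" } });
      const doc = new DOMParser().parseFromString(await res.text(), "application/xml");

      // Roughly the XPath //link[@rel='/linkrels/message/delete']/@uri
      const link = doc.querySelector('link[rel="/linkrels/message/delete"]');

      // If the server withheld the link (e.g. restricted credentials), there is no URI.
      return link ? link.getAttribute("uri") : null;
    }

If the link isn't there, the client simply doesn't offer the action, which is exactly the restricted-credentials case above.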
The most successful sites balance JavaScript-driven UX with HATEOAS.
It's funny you mention Google Maps - its great success has largely been due to its rich linkability and embeddability.
That's the point of HATEOAS - your site is one island in an ocean sized app called THE WEB.
There's a reason Twitter abandoned hash bang URIs and even adopted a traditional full page refresh design for the first load of the site - it was faster and more flexible than routing everything through a JavaScript wall.
There is a reason that big monolithic Flash sites never felt right. They were fortresses of isolation in a sea of interconnection.
Now, I'll completely admit the "linked data web" is far less mature than the HTML web, but it holds plenty of opportunity for great UX that is completely separate from the HTML page-refresh model while expanding the interconnectedness of data and actions.
Web browsers aren't HATEOAS clients; the web sites/applications they make accessible are.
I work with a team on a complex web service and agree with all of Steve K's points. A REST-like approach using appropriate HTTP verbs and a good resource design is (given the right use case) a big win. The problem for me with HATEOAS evangelists is that they seem to think it solves more problems than it actually does. Sure, we now have a layer of abstraction between the logical link and the HTTP link, but for anything other than trivial changes you always have to have human-readable documentation. HATEOAS works brilliantly for the web because there is a human at the other end who understands what "new comment" is and how it's different from "new private message". My customers' API clients don't have this semantic understanding. Now I know that it helps devs if they can navigate around my API, but it's no substitute for well-written documentation; HATEOAS APIs are something to strive for, but not self-describing as some claim.
I agree that most clients don't have that sort of semantic understanding. That's because we are still at a stage where most of our APIs are just custom media types and a moderate improvement over yesteryear's WSDL. The improvement is the use of URIs and HTTP GET to mash up data.
But the point of HATEOAS is that it doesn't have to be this way. The number of generic link relations is increasing, as is the number of useful generic media types. We are getting smarter, as an industry, about how to describe semantics in a machine-readable way. It's a slow process because building generic designs is really hard. Today's HATEOAS advocates are trying to progress the state of the art; I can completely understand why an individual API author might ignore that constraint, though.
Perhaps it's futile, but I'm not certain our future 10+ years out is eternally building hard-coded mashups.
> there is a human at the other end who understands what "new comment" is and how it's different from "new private message"
That's the great thing about media types and custom link relations: you can express the relationships and the formats in a way that a machine can understand, and add new functionality with less hardcoding. The thing this doesn't solve (in principle) is exposing new media types and relationships; that definitely requires updating on the client's end. But it's a much better situation than "this URL has a document with this format, I'll wget that URL at the start of my program, parse this stuff out of it, etc."
IMHO HATEOAS seems like a solution looking for a problem. I've built and consumed lots of APIs and I've never encountered a solution where I wished that a mechanism existed to discover the next step automatically from the first endpoint.
Using HTTP verbs and resources is super useful and has caught on because it's super simple. HATEOAS, to me, seems like a whole bunch of complexity that doesn't add much value.
Maybe I'm wrong and just need to better understand the use case and value proposition.
The point of HATEOAS is to build software whose abstractions are stable across evolving server and client implementations. Software that spans decades, like HTML, in the face of many different browsers and web frameworks (from CGI through PHP to Ruby/Python/Node).
The client (the browser, in most cases) is independent of the specific servers; it just knows the media types and link relations. The server has to bind to that media type (i.e. generate HTML).
This is a subtle thing for API developers, as we mostly just have the HTML web as the big success story for HATEOAS and haven't done this as much for machine-to-machine communication. But the Google crawler is an example of a non-browser client that took the HATEOAS built for browsers and built a new application on top of it, exploiting the genericness of HTML: it didn't have to develop an API for each website, it just understood the difference between the links embedded in <A>, <IMG> and <FORM> tags, and it can traverse the links that help it improve its search index.
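A toy version of that idea (TypeScript using the browser's DOMParser purely for illustration; a real crawler would use a proper server-side HTML parser):

    function extractLinks(html: string, base: string): string[] {
      const doc = new DOMParser().parseFromString(html, "text/html");
      const found: string[] = [];

      // <a href> is something to traverse, <form action> is a submission target,
      // <img src> is embedded content - no per-site API required to tell them apart.
      doc.querySelectorAll("a[href]").forEach(a =>
        found.push(new URL(a.getAttribute("href")!, base).toString()));
      doc.querySelectorAll("form[action]").forEach(f =>
        found.push(new URL(f.getAttribute("action")!, base).toString()));

      return found;
    }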
One of the closer attempts at HATEOAS was Facebook's Open Graph protocol, attempting to describe social data through link relations in a way that's completely independent of specific client & server implementations.
The limiting factor for HATEOAS in practice for API developers is the lack of a programming model & tooling for describing & linking API actions together on the Web, which is something being actively experimented with over at RESTfest, the W3C REST workshops, etc. There's HAL, for example: http://stateless.co/hal_specification.html
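For reference, a HAL document is roughly just JSON with a reserved _links section; something like this (the non-link fields and the "payment" rel are invented for the example):

    {
      "_links": {
        "self":    { "href": "/orders/123" },
        "next":    { "href": "/orders/124" },
        "payment": { "href": "/orders/123/payment" }
      },
      "total": 30.0,
      "status": "shipped"
    }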
I sort of agree with you; HATEOAS seems to be the least useful thing of the whole bunch for system<->system APIs. However, when there's a human involved (like when coding against that API), it can be quite nice.
LinkedIn's, Google's and Atlassian JIRA's REST APIs all correctly implement HATEOAS. It helps make your application less brittle when calling a third-party API: if they suddenly change how a particular URL is formed, you already know the new URL from parsing the hypermedia links.
The kind of daft thing is that they provide a separate reference with all those links anyway; they have to up-rev the API (and docs) if they actually want to make one of these changes. So they're doing the right thing, but taking all the good out of it.
I'm on the other side of the fence. HATEOAS just seems obvious, as in: why would someone do it any other way?
I recently wrote a client to spider legislation data published via a WSDL service. Were the reply/result documents HATEOAS, the spider would be trivial. Instead I have to basically do ETL between every reply and the next request (walking the children of a parent).
At the day job, some of the RESTful APIs (written by others) are mostly HATEOAS. It's a pretty big win. Clients are easier, inspection and verification are easier, non-programmers almost kinda grok them, etc.
I don't think you're wrong. I can agree completely. But at the same time maybe we feel this way because we're not used to working with HATEOAS APIs. I personally haven't ever wished I could discover the next step while using an API either but maybe if those APIs were more common we'd be wondering how we ever lived without them.
Making a client add a level of indirection through <link> elements to figure out what URL to use is not inherently a good thing: it's easy for clients to get wrong, requires the use of XML rather than other output formats (sort of), and generally adds complexity without an obvious benefit. The example given was that the URL might change and clients would be able to follow along like humans; but unlike humans, who can autonomously start using new functionality and new site organization, computers are very unlikely to be able to automatically produce a useful submission to a new API - and if the API doesn't change (or is backward compatible), there is no good reason to change the URL unless the URL is excessively coupled to the implementation. The only other benefit I can think of is that humans testing the API might use the list of available API calls, which is actually pretty reasonable, but the article seems to be presenting this as something more fundamental than a debugging aid.
You definitely aren't tied to XML; it's just one possible format for delivering hypermedia. I'm nothing close to an expert, but I enjoy the concept. GitHub is doing cool things with hypermedia in their API.
> there is no good reason to change the URL unless the URL is excessively coupled to the implementation.
Here's one: it provides an easy means of making a system more resilient and scalable, by allowing the implementation to tell you "now go talk to this cluster over here" instead of using e.g. anycast (which is complicated) or staying dependent on your main load balancers.
Now, you can do that "wholesale" for the whole application; e.g. Yahoo Mail will redirect you to a specific mail cluster on login. But indirection at the resource level gives you more flexibility.
The URL might, in other words, be user-specific, or even depend on other factors such as which access rights the user has or what service level they've paid for.
You can of course do that by encoding it in your client app. But then you will need to duplicate it, since you'll need to authorize the access on the server side for enforcement anyway. Encoding the information in some sort of policy on the server side lets a single source both generate the right links and authorize access (of course you can also use it to generate a custom client for each user login, at the cost of blowing all your caching and CDN benefits out of the water).
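A small sketch of that single-source idea (TypeScript, all names hypothetical): the same policy check decides which links get emitted and whether a request is allowed.

    type Action = "read" | "delete";

    function allowed(user: { role: string }, action: Action): boolean {
      return action === "read" || user.role === "admin";
    }

    // Link generation consults the policy...
    function linksFor(user: { role: string }, entryUri: string) {
      const links = [{ rel: "self", uri: entryUri }];
      if (allowed(user, "delete")) {
        links.push({ rel: "/linkrels/entry/delete", uri: entryUri });
      }
      return links;
    }

    // ...and so does enforcement on the actual request.
    function handleDelete(user: { role: string }) {
      if (!allowed(user, "delete")) {
        throw new Error("403 Forbidden");
      }
      // perform the delete here
    }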
The API (which, in a RESTful world, is just the media types) doesn't have to change, but the provider can. If you used standard media types, you could e.g. replace the comments in some application with, say, Disqus without having to touch a client (which you often don't even control).
HATEOAS seems useless because we're still building primitive APIs and reinventing media types (read: custom, undocumented JSON documents) for no good reason, recreating the same functionality over and over again while improving only at the edges.
Frankly, I think it's a damn miracle that smart people convinced us of the benefits of the web, because we regular developers seem unable to see anything five centimeters past our noses.
We're working with a HATEOAS API on our current project. One area where it really shines is pagination. The API abstracts away both a MySQL and a CouchDB database. When paginating through SQL, the "next" and "prev" links simply set skip and limit parameters, but in Couch they supply keys and documents (because "skipping" 10,000 records in a b-tree is a really bad idea). The application is no longer exposed to implementation details; it simply follows the links.
Plus, when you can "surf" your API by following links in JSON documents, you start to twig how powerful it is. It takes client developers about a day to get over the strange feeling of yielding state control to the back-end...
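To illustrate the pagination point with invented response shapes - the client never looks inside "next", it just follows it:

MySQL-backed page:

    { "items": ["..."], "links": { "next": "/messages?skip=20&limit=10" } }

CouchDB-backed page:

    { "items": ["..."], "links": { "next": "/messages?startkey=\"msg-4711\"&limit=10" } }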
At work we had a team develop a corporate standard for REST APIs that includes discoverability, such as 'links' and 'actions' properties on resources.
Our team determined that implementing the discoverability aspect was going to take a significant amount of engineering resources and wouldn't provide much, if any, benefit in return.
The other problem I have with the idea of discoverability in the API is that it doesn't really help the client. For example, with a forum API there may be a link to create a new comment. The client has to be written to recognize the new-comment link and attach its URI to a button. What if the API changes the new-comment link? How does discoverability help here? Sure, the server is sending the new link, but the client doesn't know that the new link is really the new-comment link it's expecting, so the client has to be updated anyway.
So I don't really see how that helps prevent the client from needing updates when the server changes something.
> Sure, the server is sending the new link, but the client doesn't know that the new link is really the new-comment link it's expecting, so the client has to be updated anyway.
Then you're doing it wrong.
See the article example. E.g.:
<link rel = "/linkrels/entry/newcomment"
uri = "/entries/1337/comments" />
The "rel" attribute remains static. You don't change that. It is what tells your client that "this link is a new comment link".
The "uri" attribute can change at will.
The client knows that "/linkrels/entry/newcomment" means that this is the link to follow to post comments to this entry.
It does not need to be updated when the URLs of server resources change, as long as the "rel" attributes stay the same.
I was referring to the rel attribute, not the uri. So your point is that once the server establishes a defined API, it can't change it and still maintain compatibility with the client, which is exactly what API versioning is for. So the client still has to be updated when the API changes.
I suppose it provides a little flexibility for the API to adjust the endpoints, but in practice it still seems like a bad idea to modify the endpoints, because clients could have cached data pointing to the existing endpoints that you would now break.
I really like his articles on HTTP APIs. I learned quite a lot from them. What he writes makes a lot of sense, except for the "HATEOAS" part.
What he consistently doesn't tell us is why it's necessary and how exactly we're supposed to implement HATEOAS—wouldn't HEAS be a much better acronym?—in our APIs.
I'm guessing the reason why is so that API wrappers would be trivial to write: a single library could simply consume any "standards-compliant" API. What that means, I do not know.
That's where the how comes in. It seems from the XML example that these links have to be in the body of the response. That would mean every single response format has to be dealt with separately, which is not very practical. I also have no idea how something like this would look in JSON, for example, let alone how it should be parsed in a meaningful way.
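For what it's worth, here's one possible JSON rendering of the article's link example, plus the kind of lookup a client would do (the shape is my own assumption, not a standard):

    interface Link { rel: string; uri: string; }

    function findLink(links: Link[], rel: string): string | undefined {
      const match = links.find(l => l.rel === rel);
      return match ? match.uri : undefined;
    }

    const entry = {
      links: [{ rel: "/linkrels/entry/newcomment", uri: "/entries/1337/comments" }],
      title: "Some entry"
    };

    const newCommentUri = findLink(entry.links, "/linkrels/entry/newcomment");
    // -> "/entries/1337/comments"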
Besides killing IDs, use full URIs, not just the path. That enables you to point the links at other domains, possibly a third party. Just as we often link to Wikipedia in our posts, you could decide tomorrow that your service should link to dbpedia.org or Freebase instead.
I spent a long time trying to find a framework that does HATEOAS because I was desperate to do the right thing. I was even willing to use a different language for the API to make it happen.
Thankfully, during my hunt Tom Christie released Django REST framework 2. It does this stuff the right way: all four levels of goodness.
From the outside it might not be obvious why it's worth having HATEOAS in your API, but when you're exploring an API as a human it makes things so much easier.
The client tools are not really there yet to take advantage of it, but they will be, and our systems will be more stable for it.
The concept of discoverable resources is something I'm very passionate about. We've developed client libraries for our API in JavaScript and .NET that start from a single document and discover other resources via links. I wrote a blog post about it a while ago: REST, API’s and The Missing Link - http://blog.appsecute.com/?p=98
HATEOAS seems great if you look at it from a design perspective. You can navigate the whole API just by having the entry point. Seems great, right?
HATEOAS was introduced to solve API "discoverability".
But in practice this doesn't really work. It would mean a client can't bookmark a link to a resource; instead it would need to navigate the whole "state machine" to discover the hypertext of said resource.
This has two problems from my perspective:
- It doesn't look like the web. It's as if, each time I need to find a thread on Hacker News, I have to start from the front page and navigate until I find the thread. Why not just bookmark the URL of the post and be done with it?
- HATEOAS just moves the issue: instead of hardcoding the links, I will be hardcoding the relations between resources.
So if the goal of HATEOAS is to one day allow a single client API that will be able to work with any REST API, rest assured we're not quite there yet. (Pun intended.)
You clearly come from a Rails background. I would even go as far as to say that "many design from the UI backwards" is extremely true of the Rails crowd (I would know).
This level of thinking creates monolithic, poorly organized, unstable applications.
I think it's easy to miss the benefits of HATEOAS. The relative ease with which JavaScript clients using so-called level 3 REST can avoid constructing resource URLs from IDs is compelling.
We've been using a HATEOAS approach for the interface between our back-end Rails app and our front-end JavaScript rich client.
We've seen significant benefits during development because the hypermedia links allow us to DRY up the coupling between client and server.
The overall result is much less boilerplate duplication and much clearer client code. It's easier to maintain and extend.
We've judiciously identified some exceptions where the client needs to construct URLs explicitly: in our case because the URLs are tied to dates, and we think it's silly to have the server construct links for next_day and previous_day, let alone an entire calendar view.
A large part of the reason that many APIs fail at content negotiation is that many clients don't support it properly. Chrome in particular completely ignores the Vary header (and its devs have marked the issue as "won't fix"), so it's not practical to have the same URL respond with different media types based on the Accept header (Chrome overwrites the cached item, so the back button doesn't work properly). Many APIs compensate for this by putting the format in a query parameter.
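For anyone unfamiliar with the mechanism: content negotiation means the same URL can serve different representations, and the Vary header is what's supposed to tell caches about that (illustrative example only):

    GET /articles/42 HTTP/1.1
    Accept: application/json

    HTTP/1.1 200 OK
    Content-Type: application/json
    Vary: Accept

The workaround described above sidesteps negotiation entirely with something like /articles/42?format=json (the parameter name varies by API).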
If HATEOAS implementation were more widespread, the biggest advantage would be the ability to create standard libraries for API calls rather than thin wrappers for every language. Your API would also be more or less self-documenting, hopefully making it easier to navigate for humans and machines alike.
I am curious as to why this can't be handled at the HTTP level with OPTIONS. Couldn't I send an OPTIONS request to a URI and receive a list of the supported HTTP methods (GET, POST, PUT, etc.) available on that resource?
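Roughly, that exchange would look like this (example URI and host invented):

    OPTIONS /articles/42 HTTP/1.1
    Host: api.example.org

    HTTP/1.1 200 OK
    Allow: GET, PUT, DELETE, OPTIONS

One caveat worth noting: OPTIONS advertises the methods on a URI you already know about, whereas link relations are meant to tell you which related URIs exist and what they mean.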
This site (“The timeless repository”) looks interesting but the “Recent Changes” page [1] does not appear to be something I can subscribe to in a feed reader. When I try, I instead get the main feed [2] which shows no updates since May of last year.
There are two massively under-addressed problems with the HATEOAS constraint:
1. Code-on-demand requires coupling the technology stacks of client and server. (If you're going to assume your clients can run JavaScript or X technology, where is the decoupling?)
2. Out-of-band communication. Some of the recommendations I've heard have been along the lines of: let's avoid out-of-band communication in API docs etc. and have everything completely discoverable based on the media type. "And what happens if the media type isn't sufficient to represent your use case?"
"That's easy, you create your own media type"
So in order to have no need for API docs for an actual API, I create a completely separate media type (which will need its own RFC-style specification, or at the very least its own API docs).
This is purely academic, and I still think that nobody is doing it, because the only people who have tried it or seriously thought through the implications are naysayers. Nobody who's actually built a system like this is coming forward and explaining the benefits.
Maybe if we get some media type that everyone starts using because (unlike XML/JSON, etc.) it supports full REST-like semantics, i.e. forms for POSTing and not just hyperlinks for GET requests, then we may see the purported benefits.
But you still run into the problem of expressing field-level validation messages in a 422 response, and of expressing the valid values in a reasonable way in a template for an object.
Yes, expressing a String or a DateTime or any other relatively primitive data type with simple validations (presence of, within array of, valid email, valid phone number) may be possible.
But most of the time you can't predetermine this kind of contract up front. So there's no way for me, the producer of the API, to express to you, the consumer of the API, that this particular regex completely satisfies my business requirements in all circumstances.
If I could nail down the expression of the required values for an API call in any sufficiently complex business system and convey it to you in some easy-to-digest, computer-understandable format, then I'm sure we'd be golden.
But reality sets in and you realise you've added a whole bunch of cruft to every API call, and a whole bunch of unnecessary API calls, to express some theoretical idea by an academic who admits he's too busy to actually go away and start building some of these systems.
Not only that, you're making it harder for your consumers to integrate. (Unless they are using an already-known media type, which is a chicken-and-egg situation: the closest thing we have that's good enough is HTML, and try explaining to API consumers that yes, the future is HTML, and yes, I want you to parse HTML to get the data out of my API.)
We could say to hell with efficiency, since technology consistently improves, but we're dealing with routing packets over the internet at approximately the speed of light. There is a real-world upper limit to this stuff, and we need to improve efficiency, because if you add 10 extra HTTP requests per API call, your users _will_ notice. This stuff matters.
Hell, even if these are not user-critical applications, and there is some as-yet-not-understood benefit to the world as a whole in having asynchronous processes running in the back ends of our systems, optimising some problem space and searching for solutions on their own, we still have another real-world problem: there has to be some short-, medium- or even long-term benefit to someone in building this kind of system.
Nobody wants to build this theoretical hypermedia, semantic web on their own dollar when there is no visible pay-off in the near future, or even an idealistic theoretical pay-off in the long term.