If you use a decent router, you get shareable idempotent URLs: https://solvers.io/projects/7GTeCKo7rGx5FsGkB
> Document formats that anything (including dumb crawlers and future gadgets) can parse and reuse in unexpected ways
As in the article, you can use PhantomJS to serve up static HTML to crawlers. They are correct that it slows you down and adds complexity.
The main problem, I think, is that SPA tech is still immature, and getting all the moving parts of a public-facing SPA working together is a time sink.
One of the things I love about the web is that it uses incredibly simple building blocks like simple html pages at defined stateless URLs, dumb servers and dumber clients, and builds complex systems through their interaction. I'd be very wary of solutions that drop those advantages.
There are certainly technical solutions possible for almost any problem with Angular or client-side apps in general, but I'm not sure that rendering everything client-side really gives you enough advantages to warrant it for many websites. What do you see as the main advantages of this approach, and do you see it spreading everywhere eventually?
Every website is different, and what suits (say) a chat application will not suit a document-oriented website at all. There's certainly room to explore both approaches, or even mix them at times.
I'm also not sure that rendering everything client-side is advantageous enough to warrant its current popularity (hype...), but I do see some advantages.

Firstly, I think it is a better separation of concerns - the server is in charge of data discovery, persistence, consistency, and aggregation, while the client is in charge of determining how that data can be most useful in the current context. In practice, this means it is possible to have different front-ends for the same back-end. Admittedly, that is certainly not always a necessary or desired feature. The separation also makes it easier to build the front end and back end of an application separately from one another, and possibly even in whichever order you prefer. That can be a good thing, though I don't think it's really taken advantage of very often.

I also think that true web applications can be made to feel much snappier and closer to native. The line between what should and shouldn't be considered an "application" is unfortunately blurry (the Blogger example is a good one).
This is an interesting point - if you are representing numeric data like chart datapoints, a representation like json might make it cleaner and more reusable by other services or clients. Of course, much data is actually formatted documents or snippets of text, in which case json is not such a good fit and html is perhaps better. In many ways html is a "worse is better" solution, but that is probably part of its strength - it is very easy to get started with and to munge to extract or insert data.
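To make the contrast concrete, here's a minimal sketch (all data and markup here are made up for illustration) of pulling the same numbers out of a json payload versus an html one:

```javascript
// Hypothetical chart data exposed two ways; payloads are illustrative.

// As json: trivially machine-readable and reusable by any client.
const jsonPayload = '[{"x": 1, "y": 10}, {"x": 2, "y": 15}]';
const points = JSON.parse(jsonPayload);
const total = points.reduce((sum, p) => sum + p.y, 0);

// As html: fine for display, but extracting the numbers means
// munging markup (here with a crude regex, not a real parser).
const htmlPayload = '<td data-y="10">10</td><td data-y="15">15</td>';
const extracted = [...htmlPayload.matchAll(/data-y="(\d+)"/g)]
  .map(m => Number(m[1]));
```

Both work, but the json path needs no scraping - which is roughly the trade-off being described.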
I'm not sure a separation of concerns between server and client is necessary or helpful for all apps, though in some cases it is clearly useful (for example, serving the same json or xml to a mobile app, a client-side app, and a desktop client, or letting separate teams work on each side). But a server-based solution can easily output both formatted html for traditional clients (web browsers, which are ubiquitous now and will remain so) and a json or xml representation for other clients - this sort of separation of concerns between data and presentation is not really exclusive to client-side solutions.
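A server-side endpoint doing this dual output could be sketched like so - the handler and render helpers are hypothetical, keyed off the HTTP Accept header:

```javascript
// Minimal sketch: one server-side endpoint serving the same data as
// either html or json, chosen by the Accept header. All function
// names and shapes here are illustrative, not from any framework.

function renderProjectHtml(project) {
  return `<article><h1>${project.name}</h1></article>`;
}

function renderProject(acceptHeader, project) {
  if (acceptHeader.includes("application/json")) {
    // Machine clients (mobile apps, other services) get raw data.
    return { contentType: "application/json",
             body: JSON.stringify(project) };
  }
  // Default: formatted html for traditional browsers.
  return { contentType: "text/html",
           body: renderProjectHtml(project) };
}

const project = { name: "Example" };
const asJson = renderProject("application/json", project);
const asHtml = renderProject("text/html", project);
```

The point being that the data/presentation split lives in the server's render step, without requiring a client-side app at all.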
I suspect the concept of native APIs (desktop or mobile) will eventually disappear, but it'll be an interesting journey if we ever do reach that point, and it would take decades.
Photoshop wouldn't work as a website, and HN wouldn't work as a program.
Different forms for different use cases.
Turns out I wasn't using Iron Router properly. My bad.
It is a single page application if you don't make your browser reload the page from the server each time you navigate within your app. URLs here are implemented using HTML5 pushState -- the browser isn't loading or refreshing the page when the URL changes, except for the first page load.
My point is you get the best of both worlds there: static, representative URLs that live forever (as they should); and the responsiveness you get when you only need to load data and move some divs around instead of reloading everything from scratch each time.
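As a rough sketch of that combination, here's a hypothetical route matcher for URLs like the /projects/... one above; in the browser you'd pair it with history.pushState (a real HTML5 API), but the matching itself is plain logic:

```javascript
// Hypothetical client-side route matcher for URLs like
// /projects/7GTeCKo7rGx5FsGkB. The regex and route shape are
// illustrative, not taken from Iron Router or any real library.

function matchRoute(path) {
  const m = path.match(/^\/projects\/([A-Za-z0-9]+)$/);
  return m ? { page: "project", id: m[1] } : null;
}

// On navigation (browser-only, shown for context):
//   history.pushState({}, "", "/projects/" + id);
//   render(matchRoute(location.pathname)); // fetch data, swap divs

const route = matchRoute("/projects/7GTeCKo7rGx5FsGkB");
```

The URL stays static and shareable, while navigation only swaps data and DOM instead of reloading the page.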
In fact, Meteor takes things even further with latency compensation: it predicts how things will change as you interact with the app, then reconciles with the server state (eventual consistency). This makes updates and deletes feel even faster.
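The mechanism can be sketched in a few lines - this is a rough illustration of the idea, not Meteor's actual API, and all names here are made up:

```javascript
// Rough sketch of latency compensation: apply the predicted result of
// a write immediately, then overwrite it when the server's
// authoritative answer arrives. Shapes are illustrative only.

function applyOptimistic(cache, id, patch) {
  // Predict the outcome locally so the UI updates with zero latency.
  cache.set(id, { ...cache.get(id), ...patch, pending: true });
}

function reconcile(cache, id, serverDoc) {
  // The server's version wins; drop the pending flag.
  cache.set(id, { ...serverDoc, pending: false });
}

const cache = new Map([["t1", { title: "Old", pending: false }]]);
applyOptimistic(cache, "t1", { title: "New" }); // instant UI update
// ...later, the server responds (it may differ from the prediction):
reconcile(cache, "t1", { title: "New (edited)" });
```

The user sees the change instantly; if the server disagrees with the prediction, the reconcile step quietly corrects it.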
But yeah, it's a trade-off. And right now it's a big trade-off -- my productivity has dropped in some places, compared to writing a simple app in Express or Rails.