More seriously, though, I think a lot of the reason for the historical tendency towards server-side rendering has been:
* Clients were slow: remember, until V8 raised the standard for speed, JavaScript VMs were slow. So doing a lot of rendering or data processing in JavaScript was a bad idea, since it negatively affected the user's experience with their entire computer, not just your site.
* DOM manipulation is hard and inconsistent between browsers: Yes, jQuery and friends hide the native DOM API. But we have to remember that it wasn't always the case. Rich client-side libraries are all quite young, comparatively.
* Offline capabilities were non-existent: Even today, the HTML5 specification, as implemented, only grants you a fraction of the data storage and retrieval capabilities available to even the most sandboxed application. And previously, these capabilities simply weren't there - if your app needed to store data, it had to store it server-side and tag the client with a cookie or some other form of identification to keep track of which data belonged to whom.
Yes, there is progress being made on all three fronts, but you can't expect developers to throw away years of best practices in a day, then immediately turn around and write HTML5 webapps. I think the evolution towards a more balanced computing model (where more of the computational load is being handled by the client) is ongoing, and will accelerate as browsers become more capable.
On a related note, the last couple of weeks I've been working on rewriting an old (2000-era) web app, originally written in Visual InterDev. Visual InterDev was widely derided back then; it was something only Visual Basic 'programmers' (the chaff of the programming community) would touch. Turns out that many things it did are a lot more popular nowadays - client-side calculations, validation, dynamically updating the UI, etc. Of course there was no XMLHttpRequest, so by modern standards it was quite limited; still, it's funny how something so derided back then turned out to be ahead of its time.
These were the days. Funny how some things change and "silly ideas" are all the rage.
What we are really discovering here is that we can do software that uses the (big) network.
The opportunity is that, unlike in the past, this time we suck less at UX design.
Actually, it was Apple, not Google, that kicked off the JS performance wars. Apple and Mozilla were deep into JIT tech before anyone knew about the existence of Chrome.
As others have pointed out, the major problem I face is informing users that some features are not available when they are offline (e.g. file uploads). If you're not careful with the user experience, this leads to lots of support headaches.
Another problem is dealing with legacy client data. If you are storing anything at all in local storage, you need to realize that it's going to be there forever once you write it. So handling data migrations on the client storage becomes very important, and definitely must be thought through ahead of time.
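One common way to handle this is to version the stored schema and run pending migrations on load. A minimal sketch (the schema, keys, and migration steps here are all hypothetical; `storage` is anything with `getItem`/`setItem`, e.g. `window.localStorage`):

```javascript
// Ordered list of migrations; MIGRATIONS[n] upgrades schema version n to n+1.
// These two steps are made-up examples of the kind of change you might need.
const MIGRATIONS = [
  // v0 -> v1: usernames were stored bare; wrap them in an object
  (data) => ({ user: { name: data.username }, items: [] }),
  // v1 -> v2: add a timestamps array parallel to items
  (data) => ({ ...data, timestamps: data.items.map(() => null) }),
];

function loadWithMigrations(storage) {
  // Data written before versioning existed is treated as version 0.
  let version = parseInt(storage.getItem('schemaVersion') || '0', 10);
  let data = JSON.parse(storage.getItem('appData') || '{}');

  // Apply every migration the stored data hasn't seen yet, in order.
  while (version < MIGRATIONS.length) {
    data = MIGRATIONS[version](data);
    version += 1;
  }

  // Persist the upgraded data and the new version number together.
  storage.setItem('appData', JSON.stringify(data));
  storage.setItem('schemaVersion', String(version));
  return data;
}
```

The key point is that the version number is written alongside the data, so a client that skipped several releases still walks through every intermediate migration instead of hitting a format it doesn't recognize.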
cache https://webcache.googleusercontent.com/search?q=cache:http:/...
btw a worthwhile read http://www.alistapart.com/articles/application-cache-is-a-do...
That being said, if you're already making a mobile app or API to go with your site, it does make sense to decouple the frontend (HTML/Cocoa/whatever) from the backend so you only write the backend once. Best way to have an up-to-date, useful API is to use it yourself.
I'd argue the development is no more difficult either, in fact it lends itself to a style that is easier to unit test.
The harder part seems to be communicating with the user. If/how to let them know they've gone offline, and what that means for their experience, and if they're allowed to create/update content, what will happen when they reconnect.
For example, the next question for me is what to do with conflicting updates, i.e. when a user updates a document offline and reconnects, but the document has been updated by a different client while they were offline. Discarding or merging isn't a problem from a technical standpoint; the problem is presenting it to the user, and striking a balance between doing what the user wants/expects and not bothering them with a ton of questions about which of their changes to keep/merge.
The problem with this method is that ideally you'd like the client to know it's reconnected as soon as possible, so it can start working through the queue. I figured the most foolproof way was just periodically trying to talk to the server when you think you're offline. It'd probably be better if you had a solid way of receiving actual network-related events like disconnecting and reconnecting, but the lazy 'just keep checking' method is workable.
The bonus is that it works the same way if the problem is on the server's end.
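The 'just keep checking' approach above can be sketched roughly like this. `ping` and `send` are hypothetical functions supplied by the caller (e.g. `ping` could fetch a tiny status endpoint, `send` could POST one queued change); I'm also assuming we start out online:

```javascript
// Queue pending changes while offline; probe periodically and drain on reconnect.
function createSyncQueue(ping, send, intervalMs = 5000) {
  const queue = [];
  let online = true;    // assumption: we start out connected
  let flushing = false; // guard against two flushes racing on the queue

  async function flush() {
    if (flushing) return;
    flushing = true;
    try {
      while (queue.length > 0) {
        await send(queue[0]); // replay the oldest change first
        queue.shift();        // only drop it once the server accepted it
      }
    } catch (e) {
      online = false;         // network or server went away mid-flush
    } finally {
      flushing = false;
    }
  }

  async function check() {
    try {
      await ping();
      online = true;
      await flush();          // back online: work through the backlog
    } catch (e) {
      online = false;         // still unreachable; try again next tick
    }
  }

  // A failed ping looks the same whether the network or the server is down,
  // which is why this also covers server-side outages.
  const timer = setInterval(check, intervalMs);

  return {
    push(change) { queue.push(change); if (online) flush(); },
    checkNow: check, // exposed so callers can force an immediate probe
    stop() { clearInterval(timer); },
    pending() { return queue.length; },
  };
}
```

Note that a change is only removed from the queue after the server acknowledges it, so a failure mid-flush leaves the remaining changes intact for the next successful probe.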
Take THE white-elephant example: Facebook mobile. The shift is evident - my girlfriend doesn't use a traditional desktop/laptop anymore; she consumes all her Facebook through her mobile. If it takes more than 5 seconds to load, she gives up.
There is a grey area in accepted practice around what stays online versus offline, but, for example, the whole of Facebook's UI could be cached on a mobile device, with only the updated news/feeds reloaded.
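With the Application Cache discussed in the article linked above, that split between a cached UI shell and always-fresh feed data would look something like this (filenames are made up for illustration; the manifest syntax itself is standard):

```
CACHE MANIFEST
# v1 - change this comment to force clients to re-download the shell

CACHE:
app.html
app.css
app.js

NETWORK:
# Feed data always goes to the network; only the UI shell is served from cache
/feeds/
```

The `CACHE:` section is the UI shell stored on the device; anything matched under `NETWORK:` bypasses the cache, so the news/feeds stay live while the chrome loads instantly - with all the appcache caveats the A List Apart article warns about.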