- SSR is always slower than static sites
- SSR is often slower than CSR - especially when using a small and fast framework like Solid, Svelte or Vue3
- When rendering on the edge and using a central API (e.g. for the DB), SSR is always slower to interact with than CSR, because of the extra hop: from your central API to the edge to the browser, instead of from the central API directly to the browser
- SSR is always more complex and therefore more fragile, however this complexity is handled by the frameworks and the cloud providers
- SSR is always more expensive - especially at scale.
- Traditional SSR with HTML templates will scale your site much better, simply because traditional languages like Go, Java, or C# scale much better than NodeJS rendering the site via JS
We owe the technology of the "new" SSR, and genius stuff like islands, to many very smart and passionate people.
Overall, this article is not balanced at all. It points out only some potential benefits. It is simply a marketing post for their product.
I don't think this is _always_ true. With SSR you only render the HTML server-side on the first page load. Any interaction after that is just the JSON payload being retrieved and rendered by a JS framework, which is faster than a complete rerender.
Your comparison is valid for first page loads. But usually people visit multiple pages on a website. And then SSR usually wins, though I agree the added complexity is usually not worth it for 100% static websites.
The remaining 10%, where I stay 90% of the time, could benefit from a mostly static interface with a tiny amount of server-side rendering.
I’m really hoping it gains some momentum because I’d love to use it in some client projects.
It's claimed to be fast for "initial page load".
Remains to be seen if that's worth it for everything and if it remains as fast on subsequent navigations.
If third party scripts are the bottleneck, that probably has a marginal effect at best.
Partytown looks much more interesting to me.
For something that could be a single point of failure, like Cloudflare workers was according to this post:
https://news.ycombinator.com/item?id=34639212
Edge workers are cool and all but if you don't need them, why add them as an intermediary / source of lock-in?
I really hope it or something like it becomes popular long term.
As an example, you may have an HTML table on your page that you want to insert a new row for some reason on let's say a button click. You place some attributes that htmx understands on your button that will call for the TR HTML chunk from the server. You can imagine replacing all the rows for a paging click etc.
Again check out the example for cool stuff.
It's not suitable for everything, but it works really well. I'm not advocating switching to it right now, but it's looking very promising.
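The htmx pattern described above can be sketched out concretely. The `hx-*` attributes are real htmx attributes; the endpoint path and data shape are made up for illustration. The button asks the server for a `<tr>` fragment, and htmx appends it to the table body — no JSON, no client-side templating.

```javascript
// Server side: render just the row fragment, not a whole page.
// (Illustrative names; any server language could produce this string.)
function renderRow(item) {
  return `<tr><td>${item.name}</td><td>${item.qty}</td></tr>`;
}

// Markup the browser would receive. hx-get fetches the fragment from
// the server; hx-target/hx-swap say where and how to insert it.
const page = `
  <table><tbody id="rows"></tbody></table>
  <button hx-get="/items/row" hx-target="#rows" hx-swap="beforeend">
    Add row
  </button>`;
```

The same idea extends to paging: a click fetches a fresh set of rows and htmx swaps them in place of the old ones.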
For me the biggest advantage is eliminating the need to learn, debug, and maintain components on an additional frontend framework (Angular/React/Vue).
I just built a rough toy project [0] that was my first time with FastAPI and HTMX and it was fun and fast.
[0] https://www.truebuy.com/ (like rotten tomatoes for product reviews but just for TVs right now)
You can also just send the whole page and use other features to select just the part that you want to update (obviously that has a cost of sending the whole page though).
The computation for rendering (in every case I've seen, and I have to speculate in 80% of cases ever) is so trivial compared to the actual retrieval of the data to be rendered.
More seriously, though, it's nice to be able to just build without thinking too hard about if you're getting your abstractions perfect. To me, this is the main advantage of SSR - moving fast doesn't leave behind a wake of idiosyncratic APIs that need to be (carefully, dangerously) cleaned up later.
You still need client-server communication, so you still have an API, it's just an ad hoc API that speaks HTML and form data instead of JSON. And because you didn't think of it as an API while you were building it, it actually tends to be harder to clean up later, not easier.
Just pass a header for request content type json. Then the server returns data in json format as opposed to html.
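A minimal sketch of that idea: one handler, and the request's Accept header decides whether the client gets HTML or JSON. The handler shape is illustrative and not tied to any particular framework.

```javascript
// Content negotiation: same data, two representations.
function handle(req, data) {
  const wantsJson = (req.headers["accept"] || "").includes("application/json");
  if (wantsJson) {
    return { contentType: "application/json", body: JSON.stringify(data) };
  }
  return {
    contentType: "text/html",
    body: `<ul>${data.items.map((i) => `<li>${i}</li>`).join("")}</ul>`,
  };
}
```

This keeps a single endpoint usable both by htmx-style HTML swaps and by clients that prefer raw data.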
Aside from all other implications, letting each client render the same stuff is a massive waste of energy and compute.
Either way, the measurement of Joules/page is likely to be such an astronomically small number compared to the constant cost of simply having a server at all IMO.
An alternative approach is to retain the statelessness of the first option they outline (I don't understand why it isn't "true" SSR): use normal, server-rendered HTML, but improve the experience by using htmx (which I made) so that you don't need to do a full page refresh.
This keeps things simple and stateless, so no server-side replication of UI state, but improves the user experience. From what I understand of the isomorphic solution, the htmx approach appears much simpler. And, since your server side doesn't need to be isomorphic, you can use whatever language you'd like to produce the HTML.
import * as client from "./client.js";

on request:
  const document = new ServerDOM();
  client.render(document, data);
  respond(document.toHtmlString());
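The pseudocode above can be fleshed out into a self-contained sketch of the isomorphic idea: one render function shared by server and client. `ServerDOM` is the article's hypothetical; here plain strings stand in for a DOM so the example is runnable.

```javascript
// Shared module: the same render function runs on both sides.
function render(data) {
  return `<ul>${data.items.map((i) => `<li>${i}</li>`).join("")}</ul>`;
}

// Server: produce the initial HTML for the response body.
function handleRequest(data) {
  return `<!doctype html><body>${render(data)}</body>`;
}

// Client (same module, loaded in the browser) would re-render updates:
// document.body.innerHTML = render(newData);
```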
> I don't understand why it isn't "true" SSR
This article seems to be using the term SSR exclusively in the frontend-framework sense, where client code is run on the server. It's not how I use the term, but it is a common usage.

Another possible reason that the htmx approach isn't discussed: the any-server-you-want nature of htmx is terrible for selling Deno Deploy :]
I do know there are folks using htmx and deno (we have a channel on our discord) so I don't want to come across as oppositional! Rather, I just want to say that "normal" SSR (just creating HTML) can also be used in a richer manner than the plain HTML example given.
I'm working on a server side framework and someone told me it reminded them of Java Server Faces. I think the approach works really well and latency is low enough when you can deploy apps all over the world. Also they didn't have HTTP2 or websockets back then... What I'm doing is basically a clone of Preact, but server side Ruby, streaming DOM-patches to the browser...
In my experience, it has been a horrible technology (even when combined with PrimeFaces) for complex functionality.
When you have a page that has a bunch of tabs, which have tables with custom action buttons, row editing, row expansion, as well as composite components, modal dialogs with other tables inside of those, various dropdowns or autocomplete components and so on, it will break in new ways all the time.
Sometimes the wrong row will be selected, even if you give every element a unique ID, sometimes updating a single table row after AJAX will be nigh impossible, other times the back end methods will be called with the wrong parameters, sometimes your composite components will act in weird ways (such as using the button to close a modal dialog doing nothing).
When used on something simple, it's an okay choice, but enterprise codebases that have been developed for years (not even a decade) across multiple versions will rot faster than just having a RESTful API and some separate SPA (that can be thrown out and rewritten altogether, if need be).
Another option in the space is Vaadin which feels okay, but has its own problems: https://vaadin.com/
Of course, my experiences are subjective and my own.
If you can reasonably cache the response, SSR wins on first page load, no question. On a first dynamic page render, "it depends": it can go either way between SPA and SSR. On the 2nd page render, a well-built SPA just wins.
"it depends....." Server CPU cores are slower than consumer cores of similar eras. They run in energy efficient bands because of data center power concerns. They are long lived and therefore are often old. They are often segmented and on shared infrastructure. And if the server is experiencing load, seldom an issue on the client's system, you have that to deal with also. Your latency for generating said page can easily be multi-second. As I've experienced on many a dynamic site.
Using the client's system as a rendering system can reduce your overall cloud compute requirements allowing you to scale more easily and cheaply. The user's system can be made more responsive by not shipping any additional full page markup for a navigation and minimizing latency by avoiding network calls where reasonable.
On dynamic pages, do you compress on the fly? This can increase latency for the response. If not, page weight suffers compared to static compressed assets such as a JS ball that can be highly compressed well ahead of time at brotli -11. I never brotli -11 in flight compression. Brotli -0 and gzip -1.
This is for well built systems. Crap SPAs will be crap, just as crap SSR will similarly be crap. I think crap SPAs smell worse to most - so there's that.
> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.
If you use features the end client doesn't support, regardless of where you generate the markup, then it won't work. Both servers and clients can be very feature aware. caniuse is your friend. This is not a rule you can generalize.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
Meh. Debatable. What's hard is mixing the two. Where is your state and how do you manage it?
If you're primarily a backend engineer the backend will feel more natural. If you're primarily a front end engineer the SPA will feel more natural.
If there's any secret info being transmitted (ie, session tokens) and you're using TLS, you shouldn't be using compression.
I remember more bugs than I'd care to recount with the back button and scope and that's not even talking about having to simultaneously think in JavaScript and your server side rendering language of choice.
I also think there is a lot of room for multiple choices. For web applications, I think server side rendering as a default is a poor choice. For information conveyance, I think server side, or even pure static sites, makes a lot of sense.
What problem are you trying to solve?
People are worrying about the speed of SSR when they should be worrying about the developer time on the client which is several orders of magnitude more.
I think people have fallen in love so much with complex Javascript frameworks that they’ve forgotten how easy it is to get to an MVP with SSR.
Speed is important.
Speed of development is even more important for businesses in this era who have to get to revenue faster.
And that’s why things like Phoenix LiveView and its counterparts in other languages are catching on so quickly.
People are getting fatigued with the latest flavor of the month JS framework.
But what do I know… I’m just a lowly “developer” working for crumbs. Never even finished a CS degree. Sigh.
That said, client-side rendering is strictly more general than server-side rendering. So I prefer to use client-side rendering everywhere, so that I don't have to switch between two different modalities and maintain two sets of tooling (or worse, switch in the middle of a project!). I gather this is against the current fashion, but whatever.
I absolutely love Django and old-style web frameworks, but they are not without their own complexity risks.
That said... I'm not going to pretend it's an urgent need and will wait for these tools to mature.
{"comment": {"body": "lol"}}

is notably more expensive than <Comment body="lol"/>.

Edit:
Thinking of having only to serve the state once and having each action processed on the client side, instead of making a call to the backend for each action, which has to return a fully rendered page.
Maybe I have a misconception goin on here!
How much is the additional compute cost? $10 a month? $1000 a month?
How much more productive are your developers? Does it offset the cost?
I've been using sveltekit for years and still struggle with it.
With sveltekit, I'm never really sure when to use prerender. I'm never sure how and where my code will run if I switch to another adapter.
With pure svelte, my most ergonomic way of working is using a database like pocketbase or hasura 100% client side with my JavaScript, so the server is a static web server. It's got real time subscriptions, graphql so my client code resembles the shape of my server side data, and a great authentication story that isn't confusing middleware.
I'm sure SSR is better for performance, but it always seems to require the use of tricky code that never works like I expect it to.
Am I missing something?
For things like blogs, server-side HTML with a sprinkle of client-side Javascript (or WASM) makes a lot of sense.
But for applications, where you're doing, you know, work and stuff, in-browser HTML makes a lot more sense.
The thing is, as a developer, most of the work is in applications. (It's not like we need to keep writing new blog engines all the time.) Thus, even though most actual usage of a browser might be server-side HTML, most of our development time will be spent in in-browser HTML.
> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.
Excuse my bluntness, but this is complete nonsense. Browser incompatibility in 2023 is mostly limited, in my experience, to 1) dark corners of CSS behavior, and 2) newer, high-power features like WebRTC. #1 is going to be the same regardless of where your HTML is rendered, and if you're using #2, server-side rendering probably isn't an option for what you're trying to do anyway. I can confidently say browser compatibility has roughly zero effect on core app logic or HTML generation today.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
This, again, is totally hand-wavy and mostly nonsensical. It's entirely dependent on what kind of app, what kind of features/logic it has, etc. Server-rendering certain apps can definitely be simpler than client-rendering them! And the opposite can just as easily be true.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.
This is only partly true, and it's really the only partly-valid point. Modern statically-rendered front-ends will show you the initial content very quickly, and then will update quickly as you navigate, but there is a JS loading + hydration delay between seeing the landing page content and being able to interact with it at the beginning. You certainly don't need "a desktop...with a wired internet connection" for that part of the experience to be good, but I'm sure it's less than ideal for people with limited bandwidth. It's something that can be optimized and minimized in various ways (splitting code to make the landing page bundle smaller, reducing the number of nested components that need to be hydrated, etc), but it's a recurring challenge for sure.
The tech being demonstrated here is interesting, but I wish they'd let it stand on its own instead of trying to make sweeping statements about the next "tock" of the web trend. As the senior dev trope goes, the answer to nearly everything is "it depends". It shows immaturity or bias to proclaim that the future is a single thing.
There are some pretty crappy bloated client-side apps but when it's done well and it is appropriate for the app in question, it's amazing.
I've been playing novelai.net text generation and I think their app is mostly client-side. It's one of the most responsive and fast UIs I've seen.
Also, the article has this sentence: "Performant frameworks that care about user experience will send exactly what's needed to the client, and nothing more. " Ironically, a mostly client-side app that's only loaded once, cached, and is careful about when to request something from the server, might be more bandwidth friendly than a mostly server-side app.
The early straw man was that downloading apps was too daunting a task for users, and yet somehow they managed to download and update email clients, word processors, iTunes, and, ironically, browsers themselves.
Since I began my career in 1995 I’ve seen application architecture pundits proclaim the correct way to develop applications go from thick client native to thin client native to thin client web to thick client web back to thick client native (iOS & Android) and now, according to the article back to thin client web. I’ll submit the best model is thick client native using the “web” as a communication backbone for networked features.
The first example shows the server rendering a handlebars template and then sending that as a response to the client -- it's then stated that this "isn't true SSR"
Then the same thing is done without a template language, using strings instead, and this is some different kind of SSR altogether and the "true SSR".
Which also seems to insinuate that only JS/TS are capable of SSR?
Server-side rendering! Well, kinda. While it is rendered on the server, this is non-interactive.
This client.js file is available to both the server and the client — it is the isomorphic JavaScript we need for true SSR. We're using the render function within the server to render the HTML initially, but then we're also using render within the client to render updates.

There were a number of products that allowed a web app to maintain a 3270 connection to the mainframe and render the terminal screens as an HTML form. Fascinating stuff.
When you start using htmx, you raise your eyebrows and think - hmmm this could be something interesting. When you use it for many months, you then open your eyes very wide and think - this is something special! In hindsight is so damn obvious, why didn't it happen much earlier?!?!
Therefore, they are not solving all the problems of client-server + best-UX constraints. Basically, the problems we have all this time come from:
1) There's a long physical distance between client and server.
2) Resources and their authorization have to be on the server.
3) There's the need for fast interaction, so some copy of the data and optimistic logic need to be on the client.
The "isomorphic" reusable code doesn't solve the [latency + chatty + consistent data] vs. [fast interaction + bloated client + inconsistent data] trade-off. At this point I don't know why they think that is innovation.

You may get some nominal gains from sending less JS or having the server render the HTML, but IME the vast majority of apps have much bigger wins to be had further down the stack.
"A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete." - would like to have a word with you...
Edit: my "most" claim is probably too strong on reflection: while there's still a lot of work to do to convert an in-memory DOM into pixels, it's likely to be highly optimized code (some of it handled at the GPU level) that uses minimal compute cycles. And while the V8 engine may be similarly optimised, it still has to interpret and execute arbitrary JS code supplied to it, plus handle all the necessary sandboxing. It'd be interesting to get a breakdown of what compute cycles are used converting a typical SPA into pixels, and of course a comparison with how much time is spend waiting for data to come across the network.
Like most things, there is no simple right answer, and it depends on what you are doing. But blindly assuming the experience will be worse using CSR is as silly as assuming SSR will always be worse as well.
Does it, though? Loading a webpage barely registers in CPU usage etc. on a reasonably modern device.
How will you even know without looking at the source or blocking JS across the web? Like, sure, if they've got fancy animations across all elements from the moment you open the page, it should be obvious. But what about something like https://rhodey.org/? It opens instantaneously on my ancient laptop connected to a terrible internet line. Check the source. Only a single empty div in the body. Everything is rendered with JS.
Never mind that I don't know how you would display images server side. Your client needs to decode that image and render it to screen at some point.
Seriously, how did we get here? Having dealt with jsp, jsf (myfaces, trinidad, adf...), asp.net, asp mvc, angular, plain html/css/js, how is it possible for FE web dev to be such a mess? So much complexity, for what? How many have to deal with millions of visits per day? Or even per month? It seems to me history is quickly forgotten and new generations know very little about the past.
Keep it simple, please.
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
I could build some SSR apps on my own but like, the real hard stuff about development comes >1 year in when you start running into those deep complexity issues. Can't really simulate that in a tiny pet project.
I really like client side apps. They are so much more responsive. The only problem is with bundle sizes.
Privacy issues iirc. I think you can use the setup to introduce tracking/reveal information about your history.
Of all the new ways of thinking, Remix is the leader in not promoting a specific paid delivery platform. So in that sense I can see why people might want to mitigate its advantages by trying to tie it to React.
(having said that, Shopify might tie it down more, but I see no evidence so far)
Do you mean frontend-only devs losing their jobs? That's also unlikely since web dev went full stack over a decade ago. People don't build SPAs just because they don't know how else to build a web app. If you have specific examples of crappy SPAs you should blame that shop for sucking, not the concept.
It's the same reason Java, with all its boilerplate, is so widely used.
(Full disclosure: am author)
Google argues that it is able to handle JavaScript-heavy client-side code in its crawlers, but the data seems to show otherwise.
It’s mentioned in other threads that SSR is more expensive as you scale - so you might as well make the “outside” layer of your site light weight and static/SSR for fast client loading and then give them the full SPA once they’ve clicked through your landing pages.
Please note that this project is deprecated. Dynamic rendering is not a recommended approach and there are better approaches to rendering on the web.
Rendertron will not be actively maintained at this point.
If we are comparing complex SPAs then it's not really relevant
I'm pretty sure it relied on in-browser XSLT to do a lot of its magic.
As to what harm I'm avoiding, it's mostly around tracking -- which is something that browsers have a very difficult time preventing, especially if sites are allowed to run code in them.
But all that code is necessary to make our sites work the way we want.
It's necessary to make shitty websites that are impossible to load on slow/flaky connections.

I don't see why we should assume the server is faster at processing the input data into HTML than the client is. It could very easily be that the client device does this faster. SSR additionally prevents progressive rendering, since you must generate all the HTML ahead of time, which can make pages feel slower. Also, HTML+JS data size can be larger than data+JS size (and you may need the data anyway for the SSR version to do hydration). Of course all this varies, which is why it's silly to claim a general principle.
But realistically the amount of data is likely small for most applications and it's probably not the bottleneck.
A compiler
A server-side HTTP handler
A server framework
A browser framework
You can actually use Remix as just a server-side framework without using any browser JavaScript at all.
Yes, but, on the other hand, is it?
I really hope some of the heavy front-end frameworks die a death, some common sense prevails, and we get a lighter, faster loading, more responsive web. I can dream.
And then if you want to take that rendered data and do anything interactive with it you have some js soup of parseInt(document.querySelector(".item > .item__quantity").textContent) all over the place. HN has some weird hate for this new server side rendering, when it's really the smart thing to do and equivalent to what any app is doing: the "frame" of the app is downloaded once (and we can send the initial data with it), and then it can become interactive from there. e.g. if the data needs to be reloaded we can make a small JSON request instead of reloading the whole page and re-rendering it.
Nothing stops a dev from providing both a server-side render and an API endpoint, for those that don't want the JS soup. In fact, such a design is not uncommon, and it's fairly straightforward to write a backend interface that both the server-side rendered endpoint handler and the API endpoint handler can use.
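A sketch of that "both" design, with one backend function and two thin handlers on top of it. The function and route names are illustrative.

```javascript
// Shared backend interface: the only place that knows where data lives.
function getItems() {
  // In a real app this would hit the database.
  return [{ name: "widget", qty: 3 }];
}

// SSR endpoint: render display-ready HTML for browsers.
function handleItemsPage() {
  const rows = getItems().map(
    (i) => `<tr><td>${i.name}</td><td>${i.qty}</td></tr>`,
  );
  return `<table>${rows.join("")}</table>`;
}

// API endpoint: the same data as JSON, for clients that want raw data.
function handleItemsApi() {
  return JSON.stringify(getItems());
}
```

Both handlers stay trivial because all the real logic sits behind `getItems`; adding a third representation (CSV, RSS, ...) is one more thin wrapper.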
> HN has some weird hate for this new server side rendering, when it's really the smart thing to do and equivalent to what any app is doing: the "frame" of the app is downloaded once (and we can send the initial data with it), and then it can become interactive from there. e.g. if the data needs to be reloaded we can make a small JSON request instead of reloading the whole page and re-rendering it.
The "smart" thing to do depends on what your requirements are. For minimal latency, server-side rendering tends to fare much better, as it requires only one round trip to fetch all the necessary information to render the page contents.
And that's what most of the web needs - few use cases require having to manipulate every bit of the dom to send constant updates to the end-user. Social networks, financial sites, banks, betting sites etc. The rest do not need these heavy frameworks and the extensive dom manipulating capability. The last thing you want in an ecommerce checkout process is to distract the user by manipulating the dom to give him 'updates'. So nobody does anything like updating the user with info like 'latest prices', 'your friend just bought this' etc right in the middle of the checkout process. Same goes for blogs, most of publishing.
Developers just shouldn't write that kind of fragile code by hand. But there's nothing wrong at all with the code being there.
Fast-loading, complete, cacheable, archivable pages.
And DOM changes for updating them without reloading the entire page.
Apache used to be a good server side renderer too but those were the old days.
Yes but a plain html file is static, so that's not going to work unless your site is purely static (i.e. a blog).
Static pages are easier on Mother Earth too.
Rendering basically means, to take data & logic and transform it into a view for another system (or person).
Graphical rendering is probably the needed operative word for this point? A bit of annoying semantics but I think rendering just means to provide a structured view for some state.
Those heavyweight framework exist for a reason, they're not born out of thin air. It's about your use case, you don't need it for other's use case.
There is no truly perfect scheme, only ways in which we think we can improve on the status quo by swinging the pendulum back and forth.
This server-client zeal to improve has been tremendously productive of good ideas over the last few decades. It will continue. Hopefully saving power and CO2 can be the focus of the next couple of turns of the great wheel.
I wouldn't be shocked if we sooner or later saw language-level support (think of something like Elm, improved) for writing "just" code and then later marking up which parts execute where, and the communications and state synchronization crud and compiling down to the native language is just handled.
It's funny Google can't index a SPA, given the tie to Angular (2500 apps in use in-house). Wouldn't be so hard to build something that could.
If I gave this as an example, people would say I'm being unfair to the front-end folks. But since Deno posted it, I think it's fair say that it's overkill to use a front-end framework like React (mentioned as a comparator in TFA) to implement add to cart functionality on an e-commerce site. And that for users with slow browsers, slow/spotty Internet, etc., an architecture that uses a heavy front-end framework produces a worse overall experience than what most e-commerce sites were able to do in 1999.
Edit: IMHO all of this is an artifact of mobile taking a front seat to the Web. So we end up with less-than-optimal Web experiences due to overuse of front-end JS everywhere; otherwise shops would have to build separate backends for mobile and Web. This, because an optimal Web backend tends to produce display-ready HTML instead of JSON for a browser-based client application to prepare for display. Directly reusing a mobile backend for Web browsers is suboptimal for most sites.
I've been a "back-end" developer who sometimes does "front-end" stuff for a long time. Both with web tech going back to classic asp, web-forms and those Java beans for JSF or whatever it was called, and, with various gui-tools for C#, Java and Python, and I think one of the reasons people use the "front-end" tools you're talking about in 2023 is because all those other tools really sucked.
I guess NextJS can also be server side rendering, but even when you just use it for React (with Typescript and organisation-wide linting and formating rules that can't be circumvented) it's just sooooo much easier than what came before it.
Really, can you think of a nice application? Maybe it's because I've mostly worked in Enterprise organisations, but my oh my am I happy that I didn't have to work with any of the things people who aren't in digitalisation have to put up with. I think Excel is about the only non-web-based application that I've ever seen score well when they've been ranked. So there is also that to keep in mind.
I think this is heavily dependent on company focus (and to some extent - the data requirements of the experience)
Basically - I think you can create a much stronger, more compelling experience on a site for a person with a bad/slow connection with judicious usage of service_workers and a solid front end framework.
But on the flip side... Making that experience isn't trivial, requires up front planning, and most companies won't do it.
Finally, the employer decreed that moving forward all frontends had to be done in Angular (version 6 or 7 at that time) and I have to say... I don't understand the point you're trying to make.
The frontend stacks aren't particularly more complex then the equivalent application done with html templates and varying ways to update the DOM.
Personally I'd say they're easier, which is why UX also started to demand state changes to be animated etc, requests to be automatically retried and handle every potential error scenario, which was never even attempted with pure backend websites
Nowadays I prefer using Typescript for anything html related and would not use backend templates unless the website is not going to be interactive
> The frontend stacks aren't particularly more complex
I'm not making a point about programmer experience at all. I'm saying that for most uses of most sites, the fact that Angular (or similar) is running in the user's browser is making the user experience worse. Performance is worse, accessibility can be worse, and so forth. And (again, for most uses of most sites) there is no benefit to the end user.
Consider the blogs, brochureware sites, landing pages, and e-commerce product pages that absolutely don't need something like Angular that today nonetheless do include it. Most Web apps are much closer to those than to Google Earth, Facebook, or Spotify's Web player.
What if you don't just stop at "adding a add to cart button" ?
But in all seriousness, the web has websites, it has apps, it has games. Pick a tool that's appropriate for the job and forget about what is the past/present/future.
For example, my app has a main screen that needs to be client rendered. It also has a user settings screen that could be implemented as a traditional server rendered page with no JavaScript, except it's a lot more practical to build everything inside the same project and technology. Apps and their marketing pages are often put on different subdomains for the same reason.
Metaframeworks that blend rendering modes help users get a lighter page load where appropriate, with less developer effort.
Any little glitch, slowdown or unavailability is affecting you not only once on page load but potentially with every single interaction. To make it worse, a lot of backend interactions are not made interactively or synchronously where the user might expect to wait a little while, they are made in the background causing all manner of edge cases that make apps somewhere from very slow to virtually unuseable.
I guess it's that old adage that people will make use of whatever you offer them, even if they go too far.
But the page is loaded later, because you have to wait for the server to perform this work. There is no reduction in total work, and probably an absolute increase, because some logic is duplicated. If there is a speed improvement, it is because the server has more clock cycles available than the client, but this is not always true.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
Huh? It takes less code to build a string in a datacenter than it does in a browser?
Removing or shortening round trips absolutely removes work. Sending you a page, letting you parse the JavaScript, execute it to find out the calls to make, sending that to the API, the API decoding it and pulling from the database, rendering the JSON and returning that, you parsing the JSON, executing the JavaScript and modifying the DOM
Vs
Pulling from the database and rendering the HTML, sending it to you to render
Seems like the latter has less total work.
The server is either building a JSON (or some other message format) response, or, it could just build the relevant HTML fragment. In many cases, there is no real increase in actual work on the server.
Conversely, the client side doesn't need to parse JSON and convert it to a DOM fragment.
There's solid reasons for both approaches, depending upon the context.
This does not 100% track with observed client-side performance. Another poster mentioned caching, which obviously reduces total work. I would also add shifting the work via pre-computation as another commonplace way to improve performance.
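Shifting work via pre-computation can be as simple as memoizing the rendered output: pay the render cost on a cache miss, serve the cached string on every hit. The cache key and invalidation strategy here are illustrative; a real system would tie them to data changes.

```javascript
// Render cache: render once per key, reuse until invalidated.
const cache = new Map();
let renders = 0; // counts how often the expensive path actually runs

function renderPage(key, data) {
  if (!cache.has(key)) {
    renders++; // only pay the render cost on a miss
    cache.set(key, `<h1>${data.title}</h1>`);
  }
  return cache.get(key);
}
```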
> It takes less code to build a string in a datacenter than it does in a browser?
The string build in a datacenter might be happening in a warmed-up JIT of some language, on a machine with enough capacity to do this effectively. By contrast, the browser is possibly the slowest CPU under the most outside constraints (throttling due to power, low RAM, multitasking, etc.). It is generally going to be better to do the work in the datacenter if possible.
Client-side rendering isn't immune to this. The server APIs they hit have to render the response in JSON after hitting the same kinds of backend resources (e.g. DB).
What is caching?
I like my react apps to be static files served from a plain HTML server.
As a user, I despise seeing a white screen with a spinner.
If it's a public site and you want people to find it (ie SEO) you really should be server rendering and caching on a CDN.
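A small sketch of what "server rendering and caching on a CDN" looks like at the response level. `s-maxage` and `stale-while-revalidate` are standard Cache-Control directives; the specific values and the handler shape are illustrative.

```javascript
// Server-rendered page with headers that let a shared cache (CDN) keep it.
// s-maxage applies to shared caches only; stale-while-revalidate lets the
// CDN serve the old copy while it refetches in the background.
function renderCachedResponse(html) {
  return {
    status: 200,
    headers: {
      "Content-Type": "text/html; charset=utf-8",
      "Cache-Control": "public, s-maxage=300, stale-while-revalidate=60",
    },
    body: html,
  };
}
```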
>ctrl+f “Effect”
>0 results
I only skimmed through the post, but seems like it’s ignoring the main reasons why CSR is needed?
Don't believe the tech itself is anything but a sign of where the utilities are moving.