Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition, but there are probably ways to build a 'SPA' experience that avoids these too. (E.g. the "HTML swap" approach others mentioned ITT tends to work quite well for that.)
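A rough sketch of that approach, for the curious (plain JS, no framework; assumes every page has a <main> to swap, and the popstate handling for back/forward is elided):

    // Fetch the next page, swap only <main>, and keep the URL in sync.
    async function swapNavigate(url) {
      const res = await fetch(url);
      const doc = new DOMParser().parseFromString(await res.text(), "text/html");
      document.querySelector("main").replaceWith(doc.querySelector("main"));
      document.title = doc.title;
      history.pushState({}, "", url);
    }

    // Hijack only plain left-clicks on same-origin links, so middle click
    // and "open in new tab" still work off the real href.
    document.addEventListener("click", (e) => {
      const a = e.target.closest("a[href]");
      if (!a || e.button !== 0 || e.ctrlKey || e.metaKey) return;
      if (new URL(a.href).origin !== location.origin) return;
      e.preventDefault();
      swapNavigate(a.href);
    });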
The high compute overhead of typical 'vDOM diffing' approaches is also an issue, of course, but at least you can pick something like Svelte or Solid, neither of which uses a vDOM, to do away with that.
Yes, I know that this can be made to work properly, in principle. The problem is that it requires effort that most web devs are apparently unwilling to spend. So in practice things are just broken.
An auction site I use loads the list of auctions only after the rest of the page, and it doesn't let you open links with middle click or right click > open in new tab, because the anchor elements have no href attributes. So that site is a double dose: you have to open each auction in the same tab, and when you go back you lose your place in the list, thanks to the late load-in and the failure to restore your scroll position.
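The href part at least is an easy fix: give the browser a real URL and only hijack the plain left-click. A sketch (openAuction stands in for whatever their router actually does):

    <!-- broken: no href, so middle click and open-in-new-tab do nothing -->
    <a onclick="openAuction(42)">Auction #42</a>

    <!-- fixed: real href for the browser; JS only takes over plain left-clicks -->
    <a href="/auctions/42"
       onclick="if (event.ctrlKey || event.metaKey) return; event.preventDefault(); openAuction(42)">
      Auction #42
    </a>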
This is an implementation choice/issue, not an SPA characteristic.
> there are probably ways to build a 'SPA' experience that avoids these too
PWAs/service workers with properly configured caching strategies can offer a better experience than SSR (again, when implemented properly).
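E.g. a stale-while-revalidate worker is only a few lines: cached responses paint instantly while the cache refreshes in the background. A sketch (the cache name is arbitrary, and a real one would also scope which URLs it handles):

    // sw.js
    const CACHE = "app-v1";

    self.addEventListener("fetch", (event) => {
      const req = event.request;
      if (req.method !== "GET") return; // only cache idempotent requests
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const cached = await cache.match(req);
          const network = fetch(req)
            .then((res) => {
              if (res.ok) cache.put(req, res.clone());
              return res;
            })
            .catch(() => cached); // offline: fall back to the cached copy
          return cached || network; // serve stale now, revalidate in background
        })
      );
    });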
> The high compute overhead...
I prefer to do state management/reconciliation on the client whenever it makes sense. It makes apps cheaper to host and can provide a better UX, especially on mobile.
Just how low-spec a machine and/or how much state data are we talking about here? I ask only because I download an entire dataset and do all the logic on the client, and my PC is ancient.
I'm on a computer from 2011 (i7 870 @ 2.9GHz with 16GB of RAM), and the client-side filtering I do, even on a few tens of thousands of records retrieved from the server, still takes under a second.
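Anyone who wants to sanity-check this on their own machine can paste something like this into the console (the record fields are made up):

    // 50k records, filtered and sorted entirely on the client.
    const records = Array.from({ length: 50_000 }, (_, i) => ({
      id: i,
      name: `item ${i}`,
      price: Math.random() * 100,
    }));

    console.time("filter+sort");
    const visible = records
      .filter((r) => r.price < 20)
      .sort((a, b) => a.price - b.price);
    console.timeEnd("filter+sort"); // typically single-digit ms, even on old hardware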
In my private app, my prospect list of maybe 4k records, each pretty substantial (each includes the history of engagements/interactions with that client), is faster to sort and filter on the client than it is to download the list in the first place.
I usually wait ~10s for the hefty dataset to download, but the sorting and filtering happen in under a second. I don't consider that a poor UX.
A SPA load is three steps:

1. Fetch index.html
2. Fetch js, css and other assets
3. Load personalized data (json)
But steps 1 and 2 are usually served from a CDN, so they're very fast. On subsequent visits, 1 and 2 usually come straight from the browser cache, so they're nearly instant.
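Concretely, the "very fast" part comes down to cache headers on those responses, something like this (values illustrative; the content-hashed filenames are what make the long TTL safe):

    # step 2 assets (content-hashed filenames): cache "forever"
    Cache-Control: public, max-age=31536000, immutable

    # step 1 index.html: short TTL so deploys show up promptly
    Cache-Control: public, max-age=60, must-revalidate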
SSR is usually not faster; most often it's slower. You can check for yourself in your browser's dev tools (network tab): a SPA loading as described above

vs.

the poster child of SSR: https://nextjs.org/
So much complexity and effort in the Next.js app, and it's still so much slower.
That's not really SSR, though: it's partial SSR that's then hydrated into a client-side React app, i.e. a SPA. If you really want a comparison, try an htmx page.
SSR also has excess round trips by nature. Without JavaScript, posting a form or clicking a like button refreshes the whole page even though a single <span> changed from "12 likes" to "13 likes".
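With a few lines of JS (or declaratively with htmx, per the sibling comment), that round trip shrinks to one small fetch that patches just the span. A sketch, with the endpoint and element ids made up:

    document.querySelector("#like-btn").addEventListener("click", async () => {
      const res = await fetch("/like", { method: "POST" });
      const { likes } = await res.json(); // e.g. { likes: 13 }
      document.querySelector("#like-count").textContent = `${likes} likes`;
    });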