How do you know how large the dataset is? All you know from my post is that a dataset that takes 10 seconds to download (that's my indication of its size!) takes under a second to filter and sort.
My point is that if your client code is taking a long time to filter and sort, then the dataset is already so large that the user has spent a long time just downloading it; they already know this dataset takes time.
FWIW, the data comes in as compressed CSV, so the payload is about as small as it can get; the server isn't the bottleneck. Having the server render it as HTML would increase the payload substantially.
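To give a rough idea of the loading step, here's a minimal sketch, assuming the CSV is served gzip-compressed (the browser decompresses it transparently via `Content-Encoding`) and has no quoted fields. The `/data.csv` URL is just illustrative:

```js
// Minimal sketch: fetch a gzip-compressed CSV and parse it into row objects.
// The browser decompresses transparently when the server sends
// Content-Encoding: gzip. Assumes simple CSV (no quoted commas);
// the /data.csv endpoint is hypothetical.
async function loadCsv(url = '/data.csv') {
  const res = await fetch(url);
  const text = await res.text(); // already decompressed by the browser
  const [headerLine, ...lines] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return lines.map(line => {
    const cells = line.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}
```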
I can't really speak for those sites, or why they're so slow doing things on the client, but as I said, I've written client-side processing and run it on my 2011 desktop, and there was no pegging of the CPU and no large latencies when filtering/sorting data client-side.
> while absolutely massively complex and large server-rendered HTML loads and renders in an eyeblink.
I've not had that experience - a full page refresh with about 10MB of data does not happen in an eyeblink; it takes about 6 seconds. There's a minimum amount of time for that data to download, regardless of whether it is pre-rendered into `<table>` elements or sent as JSON. Turning that JSON into a `<table>` on the client takes about 40ms on my 2011 desktop, and sorting it again takes about 5ms.
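The render step is nothing exotic. A rough sketch of what I mean, assuming the data is already an array of row objects (the column names and `#report` container are made up for the example):

```js
// Sketch of the client-side render step: turn an array of row objects into
// a <table>. Column names and the #report container are assumptions.
function renderTable(rows, columns, container) {
  const table = document.createElement('table');
  const headerRow = table.createTHead().insertRow();
  columns.forEach(col => {
    const th = document.createElement('th');
    th.textContent = col;
    headerRow.appendChild(th);
  });
  const tbody = table.createTBody();
  for (const row of rows) {
    const tr = tbody.insertRow();
    columns.forEach(col => { tr.insertCell().textContent = row[col]; });
  }
  container.replaceChildren(table);
  return table;
}

// Usage (names are illustrative):
// renderTable(rows, ['name', 'size', 'date'], document.querySelector('#report'));
```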
For this use case (fairly large amounts of data), doing a full-page refresh each time the user picks a new sort criterion is unarguably a poorer experience than a bit of JS that walks the table element and re-orders the `<tr>` elements, as sketched below.
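The re-sort doesn't even need to rebuild the table; something along these lines just moves the existing `<tr>` nodes (the `colIndex`/`numeric` parameters are illustrative):

```js
// Sketch of re-sorting in place: read the existing <tr> elements out of the
// tbody, sort them on the text of one column, and re-append them.
// Appending an existing node moves it, so no rows are cloned.
function sortTableBody(table, colIndex, numeric = false) {
  const tbody = table.tBodies[0];
  const rows = Array.from(tbody.rows);
  rows.sort((a, b) => {
    const x = a.cells[colIndex].textContent;
    const y = b.cells[colIndex].textContent;
    return numeric ? Number(x) - Number(y) : x.localeCompare(y);
  });
  tbody.append(...rows); // moves each <tr> into its new position
}
```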
In this case, server-rendered HTML is always going to cost roughly 6000ms every time the user re-sorts the table; a client-side JS sort takes about 5ms. On a machine purchased in 2011.