So it's 2023 and I've been concatenating all my CSS, and all my JS, into a single CSS and a single JS request since like 2005. So a call to the site results in maybe 3 or 4 requests, not 40. But I've noticed most sites don't do this. Perhaps because I was trained when we had 28.8k modems, not 100Mb fibre?
HTTP/2 makes a big deal of multiplexing, but it's less compelling when there are only 3 or 4 requests to begin with.
I guess I also don't have a million separate image files, or ads etc so that likely helps.
Same goes for back when we could just load a single library from the Google public CDN (cough-jquery-cough) and then all the on-page JS was just in-tag handlers. Not that it was better, but boy howdy was it faster to load.
The former is specifically for reducing requests, while the latter is more about UX.
Sure, it's different from the <map> element (which is officially an image map, as you say).
But it's effectively mapping out the sections of a large image that you care about and then using CSS to display just those parts.
For CSS sprites, you could submit an image to a folder in the monorepo, and a couple of minutes later an automated system would update the combined sprite image and generate constants that you could use in your templates/CSS to reference each icon by its coordinates/size within the image.
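The "generate constants from sprite coordinates" part of such a pipeline can be sketched like this (icon names, coordinates, and the sheet URL are all hypothetical — a real system would compute them while packing the sheet):

```javascript
// Sketch: emit one CSS class per icon in a packed sprite sheet.
// background-position shifts the sheet so only the icon's region
// shows through the element's width/height window.
function spriteCss(sheetUrl, icons) {
  // icons: { name: { x, y, w, h } } — each icon's slot in the sheet.
  return Object.entries(icons)
    .map(
      ([name, { x, y, w, h }]) =>
        `.icon-${name} { background: url(${sheetUrl}) -${x}px -${y}px; ` +
        `width: ${w}px; height: ${h}px; }`
    )
    .join("\n");
}

// Example: two 16x16 icons packed side by side into one sheet.
const css = spriteCss("/img/sprites.png", {
  home: { x: 0, y: 0, w: 16, h: 16 },
  search: { x: 16, y: 0, w: 16, h: 16 },
});
```

Markup then just uses `class="icon-search"` and the whole icon set costs a single image request.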
(As Deno shows, ESM imports make a lot of sense from CDN-like URLs again. It just won't perform well in the browser because there is no shared cache benefit.)
edit to add: And this makes up a large part of the load time (when not cached). Other than that, the largest part is the SSL handshake and the fact that the server seems to be a bit slow given that it serves static content (200ms would be okay for dynamic stuff).
Still a lot faster than most of today's websites.
Because the trend went from having Webpack produce a single 5+ MB JS bundle to splitting out exactly the chunks of code that the current React view needs, particularly to improve the experience for mobile users.
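The seam bundlers split those chunks on is dynamic `import()` — each call site becomes a separate chunk fetched only when that code path runs. A minimal sketch (using a Node built-in as a stand-in for a heavy view module):

```javascript
// Sketch: dynamic import() as the code-splitting boundary.
// Bundlers like Webpack turn each import() argument into its own
// chunk, downloaded only when this function is actually called.
// 'node:path' here is a stand-in for a heavy view module.
async function loadViewOnDemand() {
  // Nothing is fetched until the user navigates to this view.
  const mod = await import("node:path");
  return typeof mod.join === "function";
}
```

In a React app the same mechanism sits behind `React.lazy(() => import(...))`, which is how per-view chunks reach mobile users lazily.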
Cheers Bruce
As it stands your site has to go through a TCP handshake, then send the request and get the start of the response - at minimum that's 2 round trips. It then parses the JS and CSS tags in the head of the HTML and has to request those - let's assume HTTP/2, so that's another single RT of latency (on HTTP/1.1 it's at least two, depending on whether you queue requests or send them in parallel).
On 215ms RT latency that's an absolute minimum of 645ms before your JS has even started to be parsed. For a large bundle with a load of vendor code at the top, in practice we're talking several seconds before reaching the code that makes the API calls to fetch the content needed to render the first page.
And this is before we talk about any auth, images, opening a websocket or two, and people using microservices as an excuse to require a request for nearly every field on the page... (or worse, multiple requests plus some logic applied to them...).
There is a fundamental minimum RT bound for a static JS-based web app that is a multiple of that of a server-rendered page. If you cache the HTML and JS bundle it can be worth it, but you still need to aggregate data sources for the page on a backend and/or host it in multiple regions.
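The round-trip accounting above can be made explicit (numbers from the comment; a back-of-envelope model that ignores TLS, which adds 1–2 more round trips, and all server processing time):

```javascript
// Back-of-envelope: minimum latency before the app's JS can run.
// Model: 1 RT TCP handshake + 1 RT HTML request/response
//      + 1 RT JS/CSS fetch over HTTP/2 = 3 round trips total.
function minStartupMs(rttMs, roundTrips = 3) {
  return rttMs * roundTrips;
}

minStartupMs(215); // the 215ms RTT case from the comment → 645ms
```

And that 645ms is a floor: every extra serialized fetch (auth, then bundle, then API call for content) adds at least one more full RTT on top.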