And no, latency does not accumulate.
Because the browser requests assets in parallel as it loads the html.
Also, assets can easily be routed through a CDN.
You don't need modern tooling to prevent it. A server-side build step to combine assets only makes things worse, because on first load you get bombed with a giant blob of code you don't need, or the developers get lost in the complexity of it and it becomes a giant hairball of chaos. The Twitter homepage takes 185 requests to load, Airbnb 240, Reddit 247. Look at the source code to see the chaos their build systems create.
Simply using a server-side rendered HTML page and native JavaScript modules prevents all of that. The modules get loaded asynchronously, so they and their dependencies have time to load while the HTML interface and the asynchronous assets like CSS, images etc. are loading and rendering. And then the modules can kick in.
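A minimal sketch of that pattern (file names are illustrative):

```html
<!-- Server renders the full page. Scripts with type="module" are
     deferred by default: they're fetched in parallel with the HTML
     and CSS, and only execute after the document has been parsed. -->
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="/styles.css">
  <script type="module" src="/app.js"></script>
</head>
<body>
  <!-- Fully usable server-rendered content, visible before any JS runs -->
  <main id="app">…</main>
</body>
</html>
```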
old.reddit.com was 10 requests, 5kB transferred, 9kB resources. I tried mbasic.facebook.com and interestingly it was (not logged in) only 1 request, 1.1kB transferred, 1.1kB resources.
I turned off any browser level blocking, but I do have some network level ad-blocking. I wonder why you get 111 more requests for (www.)reddit.com than I do.
Those (mbasic.facebook, old.reddit) are the old/basic interfaces I use regularly, and both are requirements for me. I won't use their normal websites or apps, so if they get shut down I would leave for good.
But not all sites and apps can do that, and almost certainly not for all functionality and pages. Bloat isn't just about tooling, it's organisational too. Lots of teams all working within one product.
Modern ES modules can actually be worse if not bundled on the server or by a build system. The browser doesn't know what to request until it starts parsing the previous response, and that dependency tree can be large. That literally is accumulating latency. With progressive enhancement it's not much of an issue, but again not everyone can do that.
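To make the waterfall concrete, here's a hypothetical three-deep import chain (file names made up) where each level is only discovered after the previous file has downloaded and parsed:

```html
<script type="module">
  // Roundtrip 1: the browser fetches app.js.
  import '/app.js';
  // Only after parsing app.js does it see `import './chart.js'` (roundtrip 2),
  // and only after parsing chart.js does it see `import './utils.js'` (roundtrip 3).
  // Unbundled, those three fetches happen strictly in sequence.
</script>
```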
On top of that, anyone still on HTTP/1.1 will only get around 6 parallel requests to the same domain.
Point is, latency does accumulate, but it doesn't have to with well designed code structure, "modern" (rediscovered 90s) techniques, and modern browser features like preload and preconnect.
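For example (host and paths are placeholders), preconnect and modulepreload let the browser open the CDN connection and fetch deep module dependencies before it would otherwise discover them:

```html
<head>
  <!-- Open DNS/TCP/TLS to the CDN before any asset on it is requested -->
  <link rel="preconnect" href="https://cdn.example.com">
  <!-- Fetch the whole module tree up front instead of one level at a time -->
  <link rel="modulepreload" href="/app.js">
  <link rel="modulepreload" href="/chart.js">
  <link rel="modulepreload" href="/utils.js">
  <script type="module" src="/app.js"></script>
</head>
```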
At least on newer browsers we're no longer universally loading a javascript interpreter written in javascript (though sometimes we still are!)
For web apps, what matters most is above the fold load speed + time to interactive.
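One way to see that number for a real page is the browser's own performance API; this sketch logs Largest Contentful Paint, a common proxy for above-the-fold load speed:

```html
<script>
  // Observe LCP entries; `buffered: true` replays entries that fired
  // before this script ran, so it can go anywhere in the page.
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const last = entries[entries.length - 1];
    console.log('LCP:', last.startTime, 'ms');
  }).observe({ type: 'largest-contentful-paint', buffered: true });
</script>
```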
In the era when React had become popular but Next.js hadn't arrived yet, we had really slow-performing sites, because they did exactly what you said. Then people finally figured out that pure client-side rendered apps are too slow, so they started switching to server-side rendering. Non-interactive websites switched to static sites that load from CDNs.
Modern web apps and modern static websites are in a much better state than the 2014 - 2018 era.
For those curious, it's /, then CSS, JS, then 2 SVGs, 1 ICO (favicon) and a 1x1 GIF. None over 10kB over the wire (/ is ~40kB before compression).
In modern apps, there are often additional assets loaded by JavaScript calling endpoints or requesting more assets.
If you do server-side rendering, there are usually fewer roundtrips.
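Roughly the difference, as request traces (endpoint names hypothetical):

```html
<!-- Client-rendered flow: data arrives only after the JS runs -->
<!-- 1. GET /            → near-empty shell            -->
<!-- 2. GET /bundle.js   → app code                    -->
<!-- 3. GET /api/items   → data, fetched by the JS     -->

<!-- Server-rendered flow: data is already in the HTML -->
<!-- 1. GET /            → fully rendered page         -->
```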