I am saying that allowing for JavaScript to be dynamically downloaded and executed after the page is ready was a mistake.
You can build your Google Docs, your maps, and your Figmas. You don't need JS sent down after the page is ready to do so.
Thinking about how the web is designed today isn't necessarily a good guide to how it could work best tomorrow.
Not quite; I wasn't trying to make a bigger point about is/ought dynamics here. I was more curious specifically about the Google Maps example, and other instances like it, from a technical perspective.
Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.
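For concreteness, here's a minimal sketch of that on-demand pattern using dynamic `import()`. The feature module is inlined as a `data:` URL so the snippet is self-contained; in practice it would stand in for a hypothetical, separately served street-view.js that the browser only fetches on first use:

```javascript
// Sketch of "pay for what you use": the feature's code is fetched and
// evaluated only on first activation. The data: URL below is a stand-in
// for a real street-view.js served from your origin (hypothetical name).
const streetViewModule =
  'data:text/javascript,export const initStreetView = () => "street view ready";';

let loaded = null;
async function activateStreetView() {
  // import() resolves the module on demand; until this runs, none of the
  // feature's bytes are downloaded or parsed.
  loaded ??= await import(streetViewModule);
  return loaded.initStreetView();
}

// Only when the user actually triggers the feature does its code load:
activateStreetView().then(console.log); // prints "street view ready"
```

In a browser this call would typically sit inside a click handler on the Street View control, so users who never open the feature never pay for it.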
It sounds like in the model proposed above where all scripts are loaded on page-load, I as a user face a clearly worse experience either by A.) losing useful features such as Street View, or B.) paying to load the scripts for those features even when I don't use them.
If you can exchange static content, you need very little scripting to be able to pull down new interactive pieces of functionality onto a page, especially given that HTML and CSS are capable of so much more today. You see a lot of frameworks moving in this direction, such as RSCs, where we now transmit components in a serializable format.
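As a rough illustration of that idea (not the actual RSC wire format, which is more involved), a component can travel as plain serializable data and be expanded into markup by a few lines of generic script, rather than shipping the component's own code to the client:

```javascript
// A component described as inert, serializable data (hypothetical shape,
// in the spirit of, but not identical to, the RSC payload format).
const payload = {
  tag: 'section',
  children: [
    { tag: 'h2', children: ['Street View'] },
    { tag: 'p', children: ['Delivered as static, serialized content.'] },
  ],
};

// A tiny generic interpreter turns the data into HTML; no per-component
// JavaScript needs to cross the wire.
function render(node) {
  if (typeof node === 'string') return node;
  const inner = (node.children ?? []).map(render).join('');
  return `<${node.tag}>${inner}</${node.tag}>`;
}

console.log(render(payload));
// prints "<section><h2>Street View</h2><p>Delivered as static, serialized content.</p></section>"
```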
Trade-offs would have to be made during development, and with a complex enough application there would be moments where it may be tough to support everything on a single page. However, I don't think supporting a single page is necessarily the goal, or even the spirit, of the web. HTML imports, for example, would have prevented a lot of unnecessary compilers, build tools, and runtime JS from ever being created.
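For reference, HTML Imports as originally drafted in the early Web Components specs (implemented in Chrome, later removed from the platform) let a page pull in a reusable document declaratively:

```html
<!-- HTML Imports as originally specified: fetch a reusable HTML document
     with a single link element, no bundler or framework runtime needed.
     widget.html is a hypothetical file name for illustration. -->
<link rel="import" href="widget.html">
```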
Today, what you are saying is definitely a concern, but all APIs get abused beyond their intended uses. That isn't to say we shouldn't keep designing good ones that lead users in the intended direction.