Sure, but early web pages were quite rudimentary. A simple tool that allowed adding links and images would have been enough to start with. Later, when scripting and styling were standardized, it could have evolved to support those features as well. WYSIWYG HTML editors did this relatively well, but they never solved the harder task of actually publishing and serving the content. The best they could do was offer file syncing over FTP; actually setting up the web server, DNS, etc., was left to the user.
An entire industry of web builders and hosting providers appeared to fill this void. Services like GeoCities and Tripod were early iterations, and today we have Wix, Squarespace, WordPress, and countless others. All social media platforms are essentially an offshoot of this, enabling non-technical users to publish content on the web. This proves that it can be done in a user-friendly way, but early web tooling simply didn't exist to empower users to do this for themselves.
Imagine if, instead of using a web browser, consuming web content meant running a command-line tool to download files from a server and then opening them separately in other tools to view them. I love cURL and tools like it, but the reality is that this experiment would never have become mainstream if we didn't have a single tool that offered a cohesive and user-friendly experience. This is a large reason why Mosaic was as popular as it was. It really brought the web to the masses.
> Then there are things like hosting. Hosting has always been easy to find, but it is harder to find something stable across time. And discoverability, which has always been an issue but at least centralized services mitigate some of that. And the whole social angle, which instantly makes it more complex to set up and manage.
Sure, but I think those problems would've been solved over time. We see them as difficult today because we have always relied on large companies to solve them for us. Instead of search indexers that crawled the web, why couldn't we have relied on peer-to-peer protocols? DNS was an established protocol at the time that was already highly distributed. The internet itself is distributed. Why couldn't discoverability on the web work in a similar way?
The WWW proposal mentions this point:
> The automatic notification of a reader when new material of interest to him/her has become available. This is essential for news articles, but is very useful for any other material.
This sounds very similar to RSS. So there were early ideas in this direction, but they never solidified. Or if they did, it was too late to gain any traction.
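For reference, the RSS model is exactly this idea in its simplest form: a site publishes a small XML file listing new items, and readers poll it for updates. A minimal sketch using Python's standard library (the site name, URLs, and dates here are made-up placeholders, not from any real feed):

```python
# Build a minimal RSS 2.0 feed with the standard library.
# All channel/item values are hypothetical examples.
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Site"
ET.SubElement(channel, "link").text = "https://example.com/"
ET.SubElement(channel, "description").text = "New material, as it is published."

# Each <item> is one piece of "new material of interest" a reader can be
# notified about, in the sense of the WWW proposal quoted above.
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "A new article"
ET.SubElement(item, "link").text = "https://example.com/a-new-article"
ET.SubElement(item, "pubDate").text = "Mon, 01 Jan 2001 00:00:00 GMT"

print(ET.tostring(rss, encoding="unicode"))
```

The point is how little machinery the format needs: a feed is just a static file any web server can host, and "notification" is the reader's tool polling that file.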
Today the decentralized/federated movement is proof that this _can_ work. Imagine if we had protocols and tools like that from the very beginning. My argument is that the reason we have the highly centralized web of today is that we lacked simple web authoring tools early on. Non-technical users would have learned to use those tools just as they learned to use the web browser. Our collective mindset of what the web is and how to participate in it would have been built on collaborative rather than consumerist ideals.

We would still need some centralized services, of course (e-commerce would still exist), but not for simple things such as publishing content. I even think the grip of the advertising industry would be far weaker, since it wouldn't be able to profit as much from our personal data. Users would have far more control over their data and privacy. Propaganda, and mass psychological manipulation in general, wouldn't be as prevalent as they are today.
But maybe all of this is wishful thinking by a jaded mind. :)