I'm trying to show where it could go with htmx:
Hypermedia (in particular the uniform interface) is what made the web special and there's no reason HTML can't be improved as a hypermedia along the lines of htmx/unpoly/etc.
We don't need something new. We need to take the original concepts of the web (REST/HATEOAS/Hypermedia) seriously and push the folks in charge of HTML to do the same. Sure, using browsers as install-free RPC client hosts works and is occasionally called for, but there is a huge gray area between "applications" and "documents", just waiting for a more expressive hypermedia.
For pete’s sake, just let your server serve html. Replace only the parts of the page that need to update… with html. From your server. The poor thing is just sitting there, idling, bored, borderline depressed because it doesn’t get much of a chance to do what it is meant to do, which is, you know… serve. “But it serves JSON” … is like buying a Bugatti Veyron, just to park it in your garage and use its exhaust fumes to dry your clothes 6 days of the week, with a 2 minute drive around the block on Sundays.
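To make the "replace only the parts of the page that need to update… with html" idea concrete, here's a minimal htmx sketch. The `/search` endpoint and the markup are hypothetical; the point is that the server responds with an HTML fragment, not JSON:

```html
<!-- A search box that asks the server for HTML, not JSON.
     On each (debounced) keystroke, htmx POSTs the input to /search
     and swaps the returned fragment into the #results element. -->
<input type="search" name="q"
       hx-post="/search"
       hx-trigger="keyup changed delay:300ms"
       hx-target="#results"
       hx-swap="innerHTML">
<table>
  <tbody id="results">
    <!-- server-rendered <tr> rows land here -->
  </tbody>
</table>
```

No client-side templating, no JSON parsing; the server renders the rows it was going to render anyway.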
That is a very creative analogy, but not one I agree with.
My servers should do as little as possible. They should not serve files just for the sake of it, because they have all these shiny serving features.
And I like JSON as it is not as verbose as html and I can directly work with it in javascript. And then update my html where I need it locally.
If you prefer it differently, then that is fine. The web supports many ways.
You do realize this means all the user data has to be sent to the cloud? Which is what we want to escape? Hypermedia was created for bots for discoverability; I don't know why anyone would think this is a user-centric concept.
The reason it, and many other cool libraries, can exist is the strength of the web: a solid, accessible foundation of protocols and formats, with extensibility via JavaScript.
This last part is incredibly important, because it gives us enough power to create and explore new things, while the web standards can conservatively adopt new capabilities.
The web is working as intended, and while there are many issues and lacking areas (which htmx illustrates beautifully), we can take a step back and appreciate how amazing and empowering a platform it is.
https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...
and it has proven to be an incredibly flexible aspect of the web
however, I would note that currently that scripting ability is being used mainly to replace the original hypermedia model of the web (with RPC-style JSON-based applications) rather than enhance it as a hypermedia
we'll see if we can change that
the irony is that DELETE, PUT and PATCH are used today almost exclusively in non-hypermedia JSON data APIs
crazy world!
not much; while you need a little (easily reusable) JS code to use and handle responses to other methods, it's such a tiny fraction of the dev effort that goes into web apps that it makes no meaningful difference.
It would be nice if HTML forms supported other methods for completeness, but it wouldn't save all that much real-world development effort.
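For scale, here is roughly all the "little (easily reusable) JS" in question. `buildRequest` is a hypothetical helper name, and the endpoints in the usage comments are made up; it just assembles `fetch()` options for the methods HTML forms don't support natively:

```javascript
// Minimal sketch: assemble fetch() options for methods that
// HTML forms can't express (PUT, PATCH, DELETE).
// buildRequest is a hypothetical helper, not from any library.
function buildRequest(method, data) {
  return {
    method: method, // "PUT", "PATCH", "DELETE", ...
    headers: { "Content-Type": "application/json" },
    body: data === undefined ? undefined : JSON.stringify(data),
  };
}

// Usage against hypothetical endpoints:
//   fetch("/items/42", buildRequest("DELETE"));
//   fetch("/items/42", buildRequest("PUT", { name: "renamed" }));
```

A dozen lines, written once, which is why the missing form methods cost so little real-world effort.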
These days HTML is really only viewed as a payload carrier for your JS application, so I doubt that'll ever change either.
To add to both of your arguments, "JS-routers" like SvelteKit/NuxtJS/NextJS are literally reinventing server side rendering for the client to then call the actual server to get data ... to render HTML.
HTMX, LiveView, Livewire, Hotwire etc are escape hatches back to sanity.
You can perhaps use Alpine.js plus sprinkle htmx here and there, to have the best of both worlds, at least on paper. Didn't try this myself.
I like Gemini, a lot, and I think it could serve as a "document web" very well, except it's missing certain document features like italics, bold and superscript. If you want a document format that serves the average user's needs, you need those features, period. With superscript you don't need inline links for citations.
An "application web" could just be a UI framework that renders an easy to write and read markup of some kind and fetches it over an internet protocol.
The problem with the distinction is it doesn't solve another core source of the problem, specifically, the reason document delivering websites send applications instead is that they want to track users and serve targeted ads. What's to stop CNN from just sending a message in your document web browser saying "please use your application web browser to view this page"? And that's precisely what they'll do, and then you're in the same boat we are in right now, minus the complexity possibly, and nobody will use the document web at all. How is this problem solved?
A dedicated document browser would likely almost always be blazing fast, regardless of the machine it was run on and the network over which the documents were transferred. It could have features that would make no sense in a web browser but greatly enhance the experience of reading and navigation. It could reasonably cache nearly everything the user visits (since it's practically all read-only and small) to reduce server load and increase speed. In a nutshell, it could be far better at specifically working with documents than a web browser ever could, even with a laundry list of extensions installed.
Additionally, it would actually be possible to write brand new competing document browsers due to the vastly simpler specification, which is something the web will probably never have again.
Then you use Adobe Flash. See, we used to have a decent technology for when you want "a document with a slight bit of interactivity". It worked very well for this exact purpose. More than 10 years later, browsers' native capabilities, that are supposed to be a replacement for what Flash offered, have still not quite caught up. Moreover, Flash defined a clear boundary between the document and the application parts.
Did the particular (Adobe's proprietary) implementation of Flash player suck? Yes, sure. Could this have been done better? Yes, sure. Is it possible to reimplement a Flash player from scratch within a reasonable timeframe with a small team of developers? Yes, sure, Ruffle[1] is a thing, and it's being actively developed.
I miss Flash. I hope it makes a comeback eventually.
You aren't allowed to have that, because it's not profitable for the web developer crowd's bosses. So your "slight bit of interactivity" becomes "several megabytes of surveillance and advertising".
And there's the anecdote about adding the tracking code for one thing, only for all the other market-segmentation collection to get turned on afterwards, because it can be.
But yeah, I do want comments on my blog posts, and hn linking to docs.
I don't see why a document couldn't have a link to an application that opened next to it, so that you could view documentation and an application together. We should utilize the organizational ui paradigms built into our operating system, not use the system as a launcher for Chrome.
The effort is being done whether it's all in one application or split into two.
It has to be siloed because if it isn't, you won't be able to find the document you want to read without running application code, as we see with the modern web. It's not a search issue, I find the documents I want to read just fine, it's just that they come with megabytes of executable code I don't want to run.
It would be nice if a client could implement only the rich stuff that it wants to; unfortunately, when the rich stuff is all different everywhere, only clients that support all of it get used, hence the current state of web browser development.
I tend to like today's web (a bit too corporate for my taste, but I like the potential for variety). But there's no reason the modern web can't look like the old web with the right filters.
A document web is dead in the water if it does not at least support the things that are common with physical documents. That means inline images and at least some control over layout.
Gemini throws out way more than what would be justified with the document/application distinction and as a result doesn't have a chance of meaningful adoption. Maybe that's OK for the people behind Gemini, but that still leaves the role of the "document web" for the rest of us.
Imagine a browser that throws out almost all CSS the moment it sees a form tag or a line of javascript.
index.aml with an application object model. No idea what it would look like, but I’d love for html to just be allowed to be html.
There are ANSI escape codes for these features. It's not missing.
Write a document website with a Patreon like gwern.net instead of some bloated ad-sponsored clickbait like BuzzFeed.
You might say there will be solutions, like ideas to augment the canvas with metadata about what's in it, but that IMO misses the point.
As a user I want to be in control of the data that's on my machine. With a standard like HTML, it allows me, as a user, far FAR more control than a native app. I can use userstyle sheets. I can write and/or install extensions that look through or manipulate the content. None of this is possible if all I get is a rectangle of pixels.
Translation and/or Language learning extensions would never have happened on native platforms because it's effectively impossible to peer into the app. Whereas it's super easy to peer into HTML based apps.
So, I like that most webapps work via HTML. I also like that, unlike most native UI frameworks, HTML has tons of relatively easy solutions for handling the differences between mobile and desktop. I've made several sites and web apps, and while there may be a learning curve, it's 2-3 orders of magnitude less work to get something working everywhere (desktop/tablet/mobile) with HTML than any other way of doing it.
Changing to two modes, document mode and app mode, doesn't make sense to me. It'd be a net negative. People who want this, I think, want to take control away from the user. That's clearly Flutter's goal. All that canvas rendering means it's harder to select text, harder to copy and paste, and impossible to augment via extension. It puts all the control in the app dev's hands and none in the user's.
So no, I don't want the web to be reinvented. It provides something amazing that pretty much all attempts to replace it seem to be missing. Its structure is a plus, not a minus.
I mean if modern tech people had their way, the web would have never been anything but a bare data API on a blockchain, and no one without at least a bachelor's degree in CS or engineering would even know about it. And oh yeah, you'd need a license to publish anything.
They're looking for a technical solution to a social problem. They miss the Web as a space for people only like themselves. Having to share the web with normies who don't create out of _love_, or don't spend hours researching a small change like they would, means folks different from them end up inhabiting it. It's thinly veiled gatekeeping: a desire to make the web a space only folks like them would inhabit.
In the heyday of the web, people weren't doing anything for money and it created good content. Where is the good content now? A billion useless "Best $product to buy in $current_year" articles with 20 affiliate links to drown the web in SEO spam. Commercialization is a race to the bottom. Good art and literature was created by people who had an urge to create and not by people who had bills to pay.
Real artists have day jobs.
They probably do, but not at today's complexity of CSS, and not while constantly defending every website against atrocious over-design.
I do not agree. Many users do want it. The problem is that web browsers are not written for advanced users.
(Furthermore, there are other protocols for other things, such as IRC, NNTP, etc.)
> They don't want a different markup language.
The actual problem is that even if you use a different markup language, you cannot easily serve it while allowing end users to customize how it is displayed (possibly using a more efficient implementation than the HTML-based one); you will be forced to serve HTML instead, making it more difficult to write an implementation that does support the other formats.
You could use <link rel="alternate"> to link to the source document, or you could have my idea of the "Interpreter" header, which would also allow polyfilling picture/audio/video formats in addition to document formats.
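The first option needs no new standards at all; standard HTML can already advertise alternative formats of the same document. A minimal sketch (the paths and titles are hypothetical):

```html
<!-- Standard HTML: point clients at alternative formats of this
     document. A client that prefers another format can fetch one
     of these instead of rendering the HTML. -->
<link rel="alternate" type="text/gemini"
      href="/article.gmi" title="Gemtext version">
<link rel="alternate" type="text/markdown"
      href="/article.md" title="Markdown source">
```

Existing browsers simply ignore these, so serving them breaks nothing.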
Not only are you in a tiny minority of users who wish to do things like this, but most web applications out there go out of their way to stop you from doing those things and creators would almost certainly rather have a UI platform that did not allow you to do those things.
I am also one who wants to do things like this. Web browsers must be designed for advanced users who are assumed to know what they are doing better than the web page author.
It is worse. Even as an author, I cannot easily use a format that allows end users better customization via their own software; they insist you do it all yourself, regardless of what the end user wants.
However, it is possible to use mainly the existing HTML standard, to make something that would allow more user control.
Some possibilities include:
- ARIA mode. Use HTML and ARIA to display the document, using user-specified styles and behaviours, instead of the CSS included in the document (with some exceptions, e.g. if it specifies a fixed-pitch font, then it can use a fixed-pitch font).
- Interpreter header. If it were implemented even in common web browsers (without requiring TLS), it would allow serving any file format; if the client software does not understand the format, the implementation can be polyfilled.
- Use of new HTML attributes, e.g. feature="usercss", feature="uploadname", rel="data", <html application>, etc. Existing implementations would ignore them, so it does not break compatibility.
- Use of existing HTML features, e.g. <link rel="alternate"> (to make available alternative file formats), etc. Clients that do not implement them can ignore them.
- Design web browsers for advanced users. Many Web APIs will be implemented differently (or not at all), e.g.: when asking for a file to upload, also allow the user to override the remote file name, and perhaps allow specifying a system command-line pipe (like popen); when requesting microphone or camera access, the user specifies the source (again using popen, which may be used even if the user does not have a microphone/camera); etc.
- Other features in an improved web browser, e.g. key/mouse quoting mode, manual/auto recalculate mode (manual mode also prevents spying from unsubmitted forms and stops autocomplete from working), save/recall form data to local files (as a user command), HTTP basic/digest auth management (also as a user command), script overriding (also mentioned in an article written by FSF), etc.
- Request/response header overriding option for user setting. This makes many other options unnecessary, because you can use this option instead, e.g. language setting (Accept-Language), tracking (DNT), JavaScripts and other features (Content-Security-Policy), cookies (Cookie, Set-Cookie), HTTPS-Everywhere (Strict-Transport-Security), etc.
- Meta-CSS, only available to the end user, which can make CSS selectors that select CSS selectors and properties and modify their behaviour.
- Improved error messages. Display all details of the error; do not hide things.
- Some properties may work differently, e.g. if you try to access the height of the view, to return Infinity instead of the correct number (or define a getter on that property which throws an exception instead of returning a number).
- Structure the web browser the backward way from usual: I think a better way would be for the components (HTML, individual commands within HTML, CSS, HTTP, TLS, JavaScript, etc.) to be separate .so files (extensions) tied together by C code that a user may modify and recompile, or rewrite (changing the components too if wanted), to build the web browser the way they want.
- External services that provide information and APIs for accessing the APIs of other services (including via command-line programs), which users may access and use.
In practice, I suspect if this route were to become more common, then frameworks would provide these sorts of tools directly. The browser still couldn't directly see the component structure of your code, but a framework might provide a browser extension that it itself can hook into, so that you can inspect it as it's running. The problem then is that each framework would have to do this separately, because there would be no single base component structure that the browser recognises.
Essentially, you'd go back to developing as it's done for desktop environments - Qt or GTK might provide their own debugging environments, but the operating system itself is generally running at a much lower level, and so can't help you much if you want to know why "foo" is being rendered instead of "bar".
You have to start from the user's experience, because nothing else matters about a computer other than what you use it for. What do people actually want to do with a computer? It's going to be very hard to throw away all your assumptions. Don't assume that they want or need what we have today. They may not need the internet, or even a visual interface.
After you understand the problems the user wants solved, then you can go about building products that address those needs. You may think technology is clay that we can shape to fit a user's needs; but what if what they were best served by was metal, or glass, or paper? We must look through all technological mediums and components to provide the best solution.
Many of the developers today literally have never lived their lives in a world without certain inherent technological expectations and limitations. We need to open their minds and show them that they really can make literally anything with technology, and that it doesn't have to resemble anything that we have today. If you think "oh but we can't make change too radical, it wouldn't work", that's what they said before we ended up with the technology of today! The only box you're limited to is the one you put yourself in.
There's HTML, an ok, fairly adaptable hypermedia markup, and CSS, and scripts, all fair or better; but the premise turns everything else we do in computing on its head: servers send us resources, and we have an (imo pretty great) engine to render & execute our hypermedia.
I see endless torrents of anti-web attitude, from people who want more content-centric systems, from people who want applications. But almost never can the critics identify & mark out what makes the web better, different, & so flexible as to have risen into all-pervading ubiquity. By all means, consider first principles. I think you all have a lot of very important & enabling concepts you'll need to recreate along the way. The web's idea of urls & resources is one I think we'll have a hard time replacing.
The sad fact of the matter is that people play politics with standards to gain commercial advantage, and the result is that end users suffer the consequences. This is the case with character encoding for computer systems, and it is even more the case with HDTV.
Speaking only for myself, I'm not at all invested in having a website. Yes, I want a publicly accessible site of my own on the internet with which to share documents, and I currently do so over HTTP, but I would be equally happy to share them over Gopher or even anonymous FTP.
However, I'd rather pay NearlyFreeSpeech.net for hosting than run my own VPS or self-host on a machine in my basement because I'm lazy, so I'm stuck with HTTP because they explicitly don't support anonymous FTP (and by implication don't support Gopher or Gemini).
You can serve other file formats (including text/gemini) over HTTP. Although I could make the web browser I use support text/gemini files over HTTP (and local files), this isn't commonly done, and the protocol does not support file-format polyfills (although I have a proposal that would make that work).
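For reference, serving gemtext over HTTP needs nothing special on the server side; it's just another media type. A sketch of what such a response might look like (the document body and link target are made up):

```http
HTTP/1.1 200 OK
Content-Type: text/gemini; charset=utf-8

# An example gemtext document
=> gemini://example.org/more.gmi A gemtext link line
```

The hard part is the client: no mainstream browser will render this, which is the commenter's point about polyfills.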
However, that isn't good enough if you want to serve NNTP or IRC or Telnet or something else like that. If I want to serve discussion forums, I will want NNTP. If I want fast communications, I will want IRC. There may be other possible protocols too. We shouldn't force everything into HTTP(S); it doesn't fit properly. (I do have a NNTP server, as well as HTTP and Gopher.)
No, the technology of today developed extremely slowly for the most part, it evolved in tiny, tiny steps, sometimes taking centuries.
It's not 100% clear to me what the author's thesis is, but I think it's that the ever-increasing complexity of web standards results in a bad user experience.
I don't think that is true; or, as they say, correlation does not equal causation.
The web used to be smaller and the corporate world didn't know what to do with it. It was creative and original. Eventually corporate america figured out what to do with it, and now it gets value extracted in a very impersonal way - as if your fave underground punk band sold out.
Complexity didn't cause this. It might be a symptom of this, but if the complexity went away, the corporization of the internet would still be there - the suits understand the internet now, and there is no stuffing that back in the bottle.
If necessary, it would be entirely possible to recreate Facebook in HTML 3.2. It's a social phenomenon, not a technical one.
The post's hypothesis is that [something is wrong with the Web] because [browser functionality]. But it's not browser functionality that is the issue here, it's the site author. The premise of the problem is wrong, thus the proposed solution makes no sense, and it will fail.
This is easy to demonstrate. Install a second browser. Turn off JavaScript completely. Disable cookies completely. You now have your minimal document viewer. Done. No need for a new protocol, no need to create anything.
Now feel free to view all your static-site destinations in this. It'll be rocking fast. Then use your regular browser for everything else.
I suspect you'll find that the browser is not the problem. The problem is that the sites you want to visit use JavaScript. You'll discover that your alternate browser is basically not used. Oh and that the sites it can use are equally fast in your main browser.
The browser is not slow. Sites _choose_ to be slow as the cost of other things they want (mostly advertising). Any site you think is slow in your browser would not be in your other Web.
Do new developers choose inappropriate tools to build simple stuff? Yes of course. Do they go with what they know best, yes of course. Do most senior developers do the same thing? Of course yes.
Is the solution a new protocol and a new browser? I fail to see how this would change anything.
People miss the old days, without realising that those days only happened because no one saw the monetary potential in a medium, and the few people with a deep interest in the field (and enough resources to work for free) created stuff as a hobby.
There is a single major problem: advertising. It is the default monetization path for most of the web, and it has become an unstoppable force of sites that are hostile to the user, siphon as much user data as possible, and use every marketing tactic available to trick the user into clicking on ads or agreeing to being tracked.
It has spawned a multi-billion dollar market of shady data brokers that sell user data to the highest bidder, and built an industry of adtech giants.
It has required passing laws to protect the user, and even those were late, not strict enough, impossible to enforce, and only available in small regions of the world.
It is pervasive and offensive, and this has to change. Progress is very slow on this front, but we need new monetization options that are as easy to use for both the user and content creator, and respect the user and their privacy rights.
Say what you want about Brave Inc., the BAT[1] is the most compelling of such alternatives. It currently uses user-friendly ads, but those can be bypassed by funding the wallet instead.
I'm curious about other alternatives to advertising, not about another web framework.
Make good work, people will pay to support it. Works for NPR.
The "document web" would be something very much like Google AMP. Those documents will need to be indexed so Google will get to choose what it looks like, what it does and how it tracks you. The "application web"... well, Apple could probably graft native app functionality into a URL, like App Clips. Google would do the same for Android, probably. Microsoft similar. Those three companies control the OSes everyone uses, so they'd get to choose.
And so on and so forth. More and more it feels like the web we have today is a miracle even though it's broken in many ways. We're very lucky to have a platform as open as the web is and we take it for granted at our peril.
This person needs to go look at some Geocities archives. People were trying to create cool-looking pages as soon as we had the IMG tag. People were using tables to organize a bunch of images in neat ways. People were doing weird little hypertext art things.
They were not using the technology in the "right" way. Nobody gave a shit about the right way. It took something like twenty fucking years for CSS to make it easy to vertically center shit the "right" way after persuading everyone that using tables for anything but tabular data was "wrong". If we waved a magic wand and suddenly everyone just had this theoretical Markdown-centric browser? People would immediately start looking for clever ways to abuse edge cases of its implementation to make their pages pretty. And people would start making enhancement requests, some of which the browser makers would inevitably implement, and... eventually we're right back where we are now.
I am not a fan of the modern corporate-dominated web, nor am I a fan of the modern world where more and more apps are horrible kludges of JS and HTML that have absolutely no care about the UI conventions of their host platforms, and are a couple orders of magnitude more resource-hungry than their equivalents that use native widgets and compiled languages. (Concrete example: Slack's 440 megs on disc, Discord's 370; Ripcord, a native app that talks to them both, is 40.) But thinking people will happily go back to a world with little to no control of the presentation layer after decades of struggle for more of that is a pipe dream. We all quit using Gopher the moment we had a web browser on our systems.
I'm not sure we would have gotten where we are now without those limitations for people to conquer.
There's nothing in http that requires a browser. We could have released multiple app platforms by now. But there's a magic in html, css, and js that is taken completely for granted. I remain a fan.
...Edit to add that we have added these platforms. And they can't come close to what we have on the web.
Roku, Apple TV, Nvidia Shield. These are application platforms. And they're ok for what they are. Android and iOS are the most successful connected platforms we have, and they're impressive, but I still spend more time in my browser than anything else on my mobile device.
> thinking people will go back to a world
it's not even going back. the main portal to the web is now apps on phones. the web will continue to evolve, but i can't imagine a scenario where it forks.
the anarcho liberal wild west days of the web are behind it, unfortunately. i lament its passing along with a whole host of others who grew up in the land of bbs, usenet, and the like.
or some early net-art featured later on things like rhizome.org
While reading, I was thinking: either this person is much younger than me or we live in two parallel universes :)
The great thing is: I still seem to agree with the fundamentals… :)
There is a fundamental conflict in human nature between the need for freedom of expression and need for structure - and a balance to be found. The latter won in Facebook vs MySpace, and while I liked the clean UI and structure to content that Facebook brought, today I would very much prefer to again see chaos of people's expression in MySpace pages than chaos of bland, ad-riddled, and structurally overpopulated Facebook profiles.
But in what I said above, the shift is actually between something else entirely: from the web being individual to being corporate. It seems to me that we got to where we are on the web solely because of hyper-capitalism seeping into it, like into all other pores of society.

There is a string of reasons Slack is 440MB. It starts with being pushed to use a combination of technologies deemed to ensure the quickest iteration, time-to-market, interoperability with user-tracking systems, etc. (in this case, Electron/JS for multi-platform coverage); then you have more and more people using those technologies because they are sought after; then companies want to use those technologies even more, everywhere, because it means you can grow your teams faster on the market. Btw, all exactly the same reasons why almost every "website" today is a React app.

All the while, the more information you can collect on people, the more attractive you are, even if you have absolutely no need for it or way of using it in your product for now; so almost every website also comes with 50+ XHRs on load to every imaginable tracking service. All in service of the marked words above: "growth", "market", "faster", "user tracking", since those are the ones that are exclusively rewarded (not even with actual money any more, but with market valuation and other perverse and illusory constructs).
So it's not the technologies or the "web" themselves that are the problem at all - it's that hyper-capitalism hijacks them, requires targeting widest market, being quick to monetize on it in any way etc - which means experimentation and individual expression lose value, and any benefits for the user that don't obviously result in benefits for the corporation (e.g. user's disk space usage for banal Slack example) are cut off from consideration. It's only expected then that, when you look from individual perspective, the web is not individual-friendly anymore.
Once that wall was removed, the force of FOMO that had built up behind it flooded the population.
You can say it was their style or whatever, but I think it was that one move.
I like simple, and I like to be able to see changes I make in real time. React makes a lot of things more simple, so I use it often.
But maybe Markdown is the document language you're looking for? And to get started writing docs and code together, something like ObservableHQ is a pretty good way to go.
Previously discussed in: https://news.ycombinator.com/item?id=24255541 (117 points, 21 months ago, 90 comments)
A Clean Start for the Web - https://news.ycombinator.com/item?id=24255541 - Aug 2020 (90 comments)
A Clean Start for the Web - https://news.ycombinator.com/item?id=24250252 - Aug 2020 (1 comment)
A Clean Start for the Web - https://news.ycombinator.com/item?id=24247362 - Aug 2020 (3 comments)
The author makes a good start at fleshing out this argument, but stops short. He goes on to talk about a gizmo that was added to his own work to track use of a feature, which then was abused by others in the company.
The problem is that the motivations and tools developers use to figure out how to make their products better are the same motivations and tools advertisers use to sell crap and snoop on users. Exactly the same tools, very different outcomes. One improves the experience while the other degrades it.
It's not clear how to separate the two, allowing only the "good" uses of tracking technologies while preventing the "bad." For its part, the essay doesn't really provide an answer so much as start talking about technologies.
But if you really want to improve the Web, it makes a lot of sense to really drill down into what makes the Web bad today. Is it really technological complexity, leading to a rendering-engine monoculture? Or is it something more sociological in nature?
Then CSS needs to be trimmed down to basics too. Why is grid layout implemented in CSS instead of a <grid> tag? It doesn't make sense and is very hard to read.
Then there's the browser, which has remained fairly consistent and provides lots of functionality for navigation that is generally OK and well understood by people. SPAs destroy that by building another virtual machine layer on top of it (my pet peeve is ?force_cold_boot=1)
And then, why is WebRTC so complicated? Do we need to support all those codecs and NAT traversal rules forever? Video communication should be as simple as adding a <videocall> tag.
HTML should be readable by common people, they should be able to easily experiment with it, that's how the fun begins again
Just as described in the article: as soon as the opportunity presents itself, the business suits start mucking with the implementations and strategies, and at that point either the technically mature debating communicators step in and end the off-goal meddling or the entire process and effort is lost.
We do not have it in us anymore. We are not the generation that went to the moon. We're the follow-on generation ruined by consumer marketing, minds filled to the brim with web3/crypto/socio-political-religious end-of-the-world panic.
Case in point: https://www.cnn.com/2018/06/13/health/falling-iq-scores-stud...
> Then, you need a browser.
I can't help but think of the Alan Kay talk "The computer revolution hasn't happened yet" from 25 years ago. Some quotes:
"HTML on the internet has gone back to the dark ages because it presupposes there should be a browser that understands its format."
"...ever more complex HTML formats, ever more intractable."
"You don't need a web browser".
And in fact, this was correct? The model that we have, now has a spec so enormous, that it's common knowledge that it's impossible to build a new browser from scratch (although maybe somebody can prove common knowledge wrong). And having document nodes that create a DOM tree might be useful for... something... but it has proven to be a gigantic obstacle in delivering content on the web. Almost everything we do today tries to ditch this because it's just so complicated, but all the frameworks that work around this give us magic and deal with it in the background. It's therefore still slow and it will always be slow.
So maybe to fix things we should try something other than what we already had that didn't work.
An "URL" could instead be something like an application+datasource pair. The applications are automatically hashed and signed, and you can provably verify they haven't changed if you want to. When you request an application from the network, you'll download it from the closest peer (maybe from multiple peers at the same time!). Since it's hashed and signed, you only have to trust the signer.
Applications should be written in a single language - no more HTML+CSS+JS. Yes, we can and probably still should have separation between layout, style, and logic, but it should all just be in the same language. In fact I've started exploring what that language could look like: https://flame.run/
Let's take the good parts of the current web and build a new foundation.
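To make the hashed-and-signed idea concrete, here is a toy sketch (Python used purely for illustration; the function name and blob are invented, and the signature half is omitted) of how a client could verify an application fetched from any untrusted peer:

```python
import hashlib

def verify_application(blob, expected_sha256):
    """A blob fetched from any untrusted peer is acceptable iff its
    hash matches the one embedded in the application's address."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

# The "URL" carries the hash; the bytes can come from the closest peer,
# or from several peers at once - only the hash has to be trusted.
app = b"(hypothetical compiled application)"
address_hash = hashlib.sha256(app).hexdigest()
assert verify_application(app, address_hash)
assert not verify_application(app + b"tampered", address_hash)
```

A real design would additionally check a signature over the hash, so you trust the signer rather than any particular host.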
I had an idea that is a bit similar, which is an Interpreter header: if the MIME type is not understood by the client software, it can download an interpreter (i.e. a polyfill) (written in HTML or WebAssembly, although your own language might also be a possible alternative); if the client software does understand the format (or the end user already has their own implementation installed, possibly one that they wrote themselves), then it will use that one instead.
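As a rough sketch of that dispatch logic (Python just for illustration; the handler table, header name, and URL here are all invented):

```python
# Hypothetical client-side dispatch: use a native handler when the MIME
# type is understood; otherwise fall back to the interpreter named in a
# (hypothetical) "Interpreter" response header.
NATIVE_HANDLERS = {"text/html": "built-in renderer"}

def choose_handler(mime_type, interpreter_url):
    if mime_type in NATIVE_HANDLERS:
        return ("native", NATIVE_HANDLERS[mime_type])
    if interpreter_url is not None:
        return ("polyfill", interpreter_url)  # download, verify, sandbox it
    return ("unsupported", None)

assert choose_handler("text/html", None) == ("native", "built-in renderer")
assert choose_handler("application/x-custom",
                      "https://example.com/interp.wasm")[0] == "polyfill"
```

The point is that the native implementation always wins, so the polyfill is only a fallback, not a replacement for user control.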
> You can hardcode/bake the content into it (like we do today).
While you can, it may make sense for it to be a separate block, so that you can also use the data separately.
> An "URL" could instead be something like an application+datasource pair. The applications are automatically hashed and signed, and you can provably verify they haven't changed if you want to. When you request an application from the network, you'll download it from the closest peer (maybe from multiple peers at the same time!). Since it's hashed and signed, you only have to trust the signer.
That is an interesting idea, and could work (though it would probably make the URL longer). It may also be useful to add additional arguments, like command-line arguments/switches (so an implementation could be launched with command-line arguments, too).
This would also easily allow an end user to substitute their own application implementation (by changing the "application" part of the URL, perhaps via a rewrite system), which is another advantage. And since it has a hash, you could cache the application file if wanted - and, without a working internet connection, even operate on local files that the user gives permission to access - in addition to overriding it with your own implementation.
> In fact I've started exploring what that language could look like: https://flame.run/
I like this idea, and a few days ago I started making up the design of something called "VM3" (the name might or might not change in future), which has some similar goals and ideas. However, there are many differences too. VM3 is a binary format (designed to be suitable for both interpretation and JIT, and for static analysis; e.g. there is no way to treat a number from a general register as a return address or vice versa, other than using a static lookup table which is specifically designated as containing program addresses), and all I/O (including the equivalent of JavaScript's Date and Math.random) must be done using extensions. Extensions must be statically declared (they cannot be dynamically declared), and an implementation must allow the end user to manage and override them; polyfills (both included and installed separately) are also possible. VM3 can be used with any protocol (HTTP(S), Gemini, IPFS, etc.) and any storage medium (CD, DVD, etc.). (VM3 is also not limited to a specific kind of user interface; you can have command-line, GUI, pipes, etc.)
Someone else also wrote up some ideas about TerseNet. Some ideas of VM3 are similar, and the capabilities of TerseNet could be implemented as a subset of VM3. Multiple implementation types are possible, and the possibility of static text, and of applications that are only launched by the user, exists in both TerseNet and VM3.
However, unlike Flame, which seems intended to effectively combine HTML+CSS+JS together, TerseNet and VM3 do something a bit different: parts of HTML (the documentation format) are one format, and other parts of HTML+CSS+JS are the other format; they can easily be separated and work independently.
This can't be overstated. I'm a web-dev noob doing simple stuff. HTML/CSS/JavaScript is what I've decided to learn, after months of confusion about this or that framework - until I found out they are all just JavaScript abstractions.
I like the web for different things but they are in a bit of a state of conflict. Things like Wikipedia are virtually unaffected by webassembly while things like Twitter and Facebook might make good use of it.
The future has yet to be written but like the article I kind of just know we are in a transition period from what we have to ...something.
That’s the only way monetization would work, other than selling personal data or relying on goodwill. And like it or not, profitability is a necessary part of writing for the web, for many people and almost all organizations.
We got HTTP 402 and nobody used it because it was never standardized (I guess).
edit: added leak link.
edit 2: Turns out to be fake, confirmed by mozilla employee[2]. Stupid 4chan.
[1]https://boards.4channel.org/g/thread/86921913/
[2]https://old.reddit.com/r/firefox/comments/uovcdh/only_a_rumo...
At this point, Mozilla clearly saying they are stopping Gecko or Gecko-based Firefox development might be the best thing that can happen to FF - that is, if it is enough to get others to band together and finally replace Mozilla with an organization that actually cares about the needs and wants of its users.
(I haven’t heard anything about this, and from what I do know of Firefox I can’t see how anything even vaguely like what I think you’re describing would be technically feasible.)
https://arstechnica.com/gadgets/2019/05/25-years-of-hypercar...
It was a delightful foundation for the document web!
> Rule #1 is don’t make a subset. If the replacement for the web is just whatever features were in Firefox 10 years ago, it’s not going to be a compelling vision.
If you don't want a subset, and you like markdown, then gemini seems like the answer. It is definitely gaining popularity but I'm not convinced it's enough to convert the general audience.
I think supporting everything but javascript for the "document browser" would automatically enable a ton of websites and make it easier for creators and consumers to use the tools and languages they already understand.
There are some merits for that, but I think that is both excessive and deficient at the same time.
> https://erock.lists.sh/browser-monopoly
I think that these are valid points, which I have some comments relating to.
> Support HTML 5, CSS 3, HTTP, TLS
You can add other file formats and protocols too, such as Gemini, Gopher, and possibly Markdown too.
> Maybe remove website specific styling altogether and instead design a consistent design that optimizes navigation and readability.
That is what I think too; even HTML with no CSS will be OK. Perhaps let the end user specify colours, fonts, etc.
> Allowing the user to query for information and have that data displayed without full page reloads (AJAX)
For data-oriented stuff, you could also have things like <a rel="data">: you just get the data and use your own software to display it.
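A client could discover such links with a few lines of parsing. A minimal sketch (using Python's html.parser; the class name and example markup are invented here):

```python
from html.parser import HTMLParser

class DataLinkFinder(HTMLParser):
    """Collect hrefs of <a rel="data"> links, so the raw data can be
    fetched and rendered by whatever local tool the user prefers."""
    def __init__(self):
        super().__init__()
        self.data_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("rel") == "data":
            self.data_links.append(a.get("href"))

finder = DataLinkFinder()
finder.feed('<p><a rel="data" href="/stats.csv">stats</a>'
            '<a href="/about">about</a></p>')
assert finder.data_links == ["/stats.csv"]
```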
> I want a minimal, modern set of browsing tools where I don't have to make any sacrifices between usability and compliance with the standards.
Yes, I think so, too. Splitting all of the components out separately is one start, I suppose (although it is not good enough by itself): a C program can then be written to tie them together, and this can be changed as needed, including to add/remove components. This is like turning core and extensions inside out, so that e.g. HTML and HTTP become extensions instead of core. However, for half of the standards I don't quite want compliance; I would deliberately implement them in a better way than what they say.
(+) stuff for me = websites, not web applications.
For Web applications, maybe all this vue / react / whatever stuff is worthwhile, I don't know, I'm not qualified to know (although let's face it, developers possibly might have a slight tendency to maybe over complicate things...just sometimes...)
But I struggle to see what in a normal website experience isn't already provided by these basic tools.
Oh, and great content. None of any of this matters if you can't write...
The reason we got centralized was simply who pays to power the chips!
An app is an app, a website is a website, don't try to make them the same thing always.
WebAssembly is not a stand-alone application development platform. It doesn't have many of the functionalities of Flash, applets, Silverlight, or even NaCl. It has no API of its own to interact with the display or keyboard and mouse, and none of the features you find in a VM, such as multithreading or a memory allocator.
Maybe I'm wrong here, but I struggle to see how one can arrive at that conclusion. I imagine a large percentage of users actively chose Safari over Firefox because of its integration and their familiarity with it.
It would also be simpler for the app developers, who would program against a simpler model than that of web apps.
And we can use the Web to deliver information i.e. documents.
HTTP and SMTP are fine however.
I have stopped coding .html and only use HTTP now from a native OpenGL client written in C.
> I have stopped coding .html and only use HTTP now from a native OpenGL client written in C.
How is that working? Do you have any further details?
IRC is replaceable with HTTP.
It's a 3D MMO: http://talk.binarytask.com/task?id=5959519327505901449
It is untrue. You can write plain HTML (even without CSS) and it will work OK.
> Webpage size growth is outpacing it all.
It is true, unfortunately. They waste too much space by adding too many extra pictures, stylesheets, scripts, videos, advertising, etc. You should not need all of that stuff.
> Not only is it nearly impossible to build a new browser from scratch, once you have one the ongoing cost of keeping up with standards requires a full team of experts.
Yes, this is the real problem. However, some of the standards simply should not be implemented, and some should be implemented differently than what the standards say, to give advanced end users better controls and improve efficiency and many other things.
> We hope that all this innovation is for the user, but often it isn’t.
That is true; often it isn't. To make it for the user, design the software for advanced users who are assumed to know what they are doing, and let the end user customize everything. Make documents without CSS; the end user can specify what colours/fonts they want. Make raw data files available; the end user might have programs to view and query them (possibly using SQLite).
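For instance, a published raw data file could be loaded and queried locally, with no site-side viewer involved. A toy sketch (Python's built-in sqlite3; the table, titles, and years are invented stand-ins for downloaded data):

```python
import sqlite3

# Stand-in for a raw data file a site makes available: the user loads
# it into SQLite and queries it with their own tools.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (title TEXT, year INTEGER)")
db.executemany("INSERT INTO posts VALUES (?, ?)",
               [("A Clean Start for the Web", 2020),
                ("Another Post", 2022)])
rows = db.execute("SELECT title FROM posts WHERE year = 2020").fetchall()
assert rows == [("A Clean Start for the Web",)]
```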
> There is the “document web”, like blogs, news, Wikipedia, Twitter, Facebook.
I do not use Facebook, but I can comment on the others. There is NNTP, too. Wikipedia (and MediaWiki in general, not only Wikipedia) uses a lot of JavaScript and CSS too; you can add your own, but removing the existing ones and replacing them with your own will be difficult. If you want to replace the audio/video player with your own, can you do it? What is needed is actual proper HTML, or perhaps an EncapsulatedHyperEncyclopedia format.
> Basically CSS, which we now think of as a way for designers to add brand identity and tweak pixel-perfect details, was instead mostly a way of making plain documents readable and letting the readers of those documents customize how they looked.
I still often disable CSS, but that sometimes results in big icons and other things still wasting space. Maybe if disabling CSS also engaged ARIA and similar mechanisms, it might actually be an improvement.
One idea that I have: if your document uses only classless CSS, such that a web browser supporting the semantic HTML elements (and not the presentational ones) would display it suitably, specify a feature="usercss" attribute on the <link> and/or <style> element that specifies the stylesheets, so that a web browser can recognize it. This way, it is up to end users to set their own styles as they wish, and the browser can effectively use those in place of the author's without breaking the page.
> Though it’s going to be a rough ride in the current web which has basically thrown away semantic HTML as an idea.
Semantic HTML can be a good idea, and still is sometimes used, even if it is often ignored (and sometimes not implemented in the client side, too) in favour of bad stuff instead.
> Rule #3 is make it better for everyone. There should be a perk for everyone in the ecosystem: people making pages, people reading them, and people making the technology for them to be readable.
Yes, it is true. For people reading pages, it is better not to have any styles specified in the document. Let the end user specify their own colours/fonts/etc., and different implementations can render them as appropriate for the interface in use.
> I think this combination would bring speed back, in a huge way. You could get a page on the screen in a fraction of the time of the web. The memory consumption could be tiny. It would be incredibly accessible, by default. You could make great-looking default stylesheets and share alternative user stylesheets. With dramatically limited scope, you could port it to all kinds of devices.
Yes, these are the greater benefits.
> What could aggregation look like? If web pages were more like documents than applications, we wouldn’t need RSS - websites would have an index that points to documents and a ‘reader’ could aggregate actual webpages by default.
Yes, that will work, although you may want metadata fields to be available. (An implementation might then allow end users to specify SQL to query them, among other possibilities.)
> We could link between the webs by using something like dat’s well-known file, or using the Accept header to create a browser that can accept HTML but prefers lightweight pages.
The documentation for dat's well-known file does not work (it just tries to redirect).
Using the Accept header is possible, but it has its own issues: you might need to list a hundred different file formats to indicate all of them, you might want to download an arbitrary file regardless of Accept headers, etc.
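To illustrate the negotiation half, here is a simplified sketch of picking the best offered format from an Accept header (Python just for illustration; it ignores wildcards like */* and any parameters other than q):

```python
def best_match(accept_header, offered):
    """Pick the highest-q offered MIME type from an Accept header.
    Simplified: no wildcard or extension-parameter handling."""
    prefs = []
    for part in accept_header.split(","):
        fields = [f.strip() for f in part.split(";")]
        q = 1.0  # per HTTP, an entry without a q parameter defaults to 1
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        prefs.append((fields[0], q))
    for mime, q in sorted(prefs, key=lambda p: -p[1]):
        if q > 0 and mime in offered:
            return mime
    return None

# A browser that "prefers lightweight pages" might send:
assert best_match("text/gemini, text/html;q=0.8",
                  ["text/html", "text/gemini"]) == "text/gemini"
assert best_match("text/html", ["text/gemini"]) is None
```

Even this toy version shows the problem mentioned above: the header only works if both sides already agree on the list of formats worth naming.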
> Application web 2.0
There are problems with the containers, web applications, etc., namely that they do not have the powers of UNIX, TRON, etc.
About the separation of application web vs. document web, I think they should not be joined too closely together nor pulled too far apart.
I have my own design, which is VM3 (the name might or might not change in future), and we can then see what we will come up with. (It could be used with static document views only, executable code with command-line or GUI or other interfaces, etc. The design is meant to improve portability and security, as well as user controls and capabilities.)