Of course I hear plenty of people complaining that apps on top of hypertext is a fundamental mistake and so we can't expect it to ever really work, but the way you put it really made it click for me. The problem isn't that we haven't solved the puzzle, it's that the pieces don't actually fit together. Thank you.
Honestly, that's basically happened, but just a generation later.
Most modern frameworks are doing very similar things under the hood. Svelte, SolidJS, and modern Vue (i.e. with Vapor mode) all basically do the same thing: they turn your template into an HTML string with holes in it, and then update those holes with dynamic data using signals to track when that data changes. Preact and Vue without Vapor mode also use signals to track reactivity, but use VDOM to rerender an entire component whenever a signal changes. And Angular is a bit different, but still generally moving over to using signals as the core reactivity mechanism.
The largest differences between these frameworks at this point are how they do templating (i.e. JSX vs HTML vs SFCs, and syntax differences) and the size of their ecosystems. Beyond that, they are increasingly converging on the same ideas.
The black sheep of the family here is React, which is doing basically the same thing it always did, although even there I notice a growing number of libraries and tools that are basically "use signals with React".
I'd argue the signal-based concept fits really well with the DOM and browsers in general. Signals ensure that the internal application data stays consistent, but you can still have DOM elements (or components) with their own state, like text boxes or accordions. And because many of these frameworks' templates directly return DOM nodes rather than VDOM objects, you're generally working directly with browser APIs when necessary.
This is all specific to frontend stuff, i.e. how you build an application that mostly lives on the client. Generally, the answer to that seems to be signals* and your choice of templating system; unless you're using React, in which case the answer is pure render functions plus additional mechanisms for attaching state to them.
Where there's more exploration right now is how you build an application that spans multiple computers. This is generally hard — I don't think anyone has demonstrated an obvious way of doing this yet. In the old days, we had fully client-side applications, but syncing files was always done manually. Then we had applications that lived on external servers, but this meant that you needed a network round-trip to do anything meaningful. Then we moved the rendering back to the client which meant that the client could make some of its own decisions, and reduced the number of round trips needed, but this bloats the client, and makes it useless if the network goes down.
The challenge right now, then, is trying to figure out how to share code between client and server in such a way that
(a) the client doesn't have to do more than it needs to (no need to ship the logic for rendering an entire application just to handle some fancy tabs);
(b) the client can still do lots of things that are useful (optimistic updates, frontend validation, etc.);
(c) both client and server can be modelled as a single application, as opposed to two different ones, potentially with different tooling, languages, and teams;
(d) the volatility of the network is always accounted for and does not break the application.
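To make (b) and (d) concrete, here's a minimal sketch of an optimistic update that survives a flaky network. All names are illustrative, not any real framework's API:

```typescript
// Optimistic update sketch: apply the change locally first, then
// reconcile with the server; roll back if the network call fails.
type Todo = { id: number; done: boolean };

async function toggleTodo(
  todos: Todo[],
  id: number,
  save: (t: Todo) => Promise<void>, // the network call, injected
): Promise<Todo[]> {
  const optimistic = todos.map((t) =>
    t.id === id ? { ...t, done: !t.done } : t,
  );
  try {
    await save(optimistic.find((t) => t.id === id)!);
    return optimistic; // server confirmed: keep the optimistic state
  } catch {
    return todos; // network failed: roll back to the previous state
  }
}
```

The client stays responsive either way; the network boundary is treated as something that can fail, rather than something assumed to be reliable.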
This is where most of the innovation in frontend frameworks is heading. React has its RSCs (React Server Components, components that are written using React but render only on the server), and other frameworks are developing their own approaches. You also see it more widely with the growth of local-first software, which approaches the problem from a different angle (can we get rid of the server altogether?) but is essentially trying to solve the same core issues.

And importantly, I don't think anyone has solved these issues yet, even in older models of software. The reason this development is ongoing has nothing to do with the web platform (which is an incredible resource in many ways: a sandboxed mini-OS with incredibly powerful primitives); it's because this is a very old problem and we still need to figure out what to do about it.
* Or some other reactivity primitive or eventing system, but mostly signals.
We are running code on servers and clients, communicating between the two (crossing the network boundary), while our code often runs on millions of distributed hostile clients that we don't control.
It's inherently complex, and inherently hostile.
From my view, RSCs are the first solution to acknowledge these complexities and redesign the paradigms closer to first principles. That comes with a tougher mental model, because the problem space is inherently complex. Every prior or parallel solution attempts to paper over that complexity with an over-simplified abstraction.
HTMX (and Rails, PHP, etc.) leans too heavily on the server, client-only libraries give you no access to the server, and traditional JS SSR frameworks attempt to treat the server as just another client. Astro works because it drives you towards building largely static sites (leaning aggressively on build-time and server-side routing).
RSCs balance most of these incentives and give you the power to access each environment at build-time and at run-time (at the page level or even the component level!). They make each environment fully powerful (server, client, and both), and they manage to solve streaming (suspense and complex serialization) and diffing (navigating client-side while maintaining state and UI).
But people would rather lean on lazy tropes, as if RSCs only exist to sell server-cycles or to further frontend complexity. No! They're just the first solution to accept that complexity and give developers the power to wield it. Long-term, I think people will come to learn their mental model and understand why they exist. As some React core team members have said, this is kind of the way we should have always built websites: once you return to first principles, you end up with something that looks similar to RSCs[0]. I think others will solve these problems with simpler mental models in the future, but it's a damn good start and doesn't deserve the vitriol it gets.
Meanwhile, sync engines seem to actually solve these problems: the distributed data syncing and the client-side needs like optimistic updates, while also letting you avoid the complexity. And you can keep your server-first rendering.
To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX), and the only reason I think anyone really likes RSCs is that there is so much money behind them, and relatively little behind sync engines. That said, I don't blame people for not even mentioning them, as they are newer. I've been working with one for the last year and it's an absolute delight, and probably the first genuine leap forward in frontend dev in the last decade, since React.
This isn't true, because RSCs let you slide back into classic React with a simple 'use client' (or lazy for pure client-side code). So anywhere in the tree, you have that choice. If you want to do so at the root of a page (or component), you can, without forcing all pages to do so.
> which means its server-first model leads you to slow feeling websites, or lots of glue code to compensate
Again, I don't think this is true. What makes you say it's slow feeling? Personally, I feel it's the opposite: my websites (and apps) are faster than before, with less code. Server-component data fetching solves the waterfall problem, and co-locating data retrieval closer to your APIs or data stores means faster round-trips. For slower fetches, you can use suspense and serialize promises over the wire to prefetch; the client then unwraps those promises, showing loading states while JSX and data stream from the server.
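The waterfall point is easiest to see in plain code. A sketch with fake fetchers (no framework, hypothetical delays); on the client each fetch is a full network round-trip, while a server component can kick them all off at once, close to the data:

```typescript
// Request waterfall vs parallel fetching. The fetchers are fake
// and simulate latency with timers.
const wait = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function fetchUser() { await wait(50); return { name: "Ada" }; }
async function fetchPosts() { await wait(50); return [{ title: "Hello" }]; }

// Waterfall: ~100ms total, because each await blocks the next request.
async function waterfall() {
  const user = await fetchUser();
  const posts = await fetchPosts();
  return { user, posts };
}

// Parallel: ~50ms total, both requests are in flight at once.
async function parallel() {
  const [user, posts] = await Promise.all([fetchUser(), fetchPosts()]);
  return { user, posts };
}
```

The gap compounds: with n dependent fetches, the waterfall costs n round-trips from the client, while a server component pays them next to the data store and ships the result once.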
When you do want to do client-side data fetching, you still can. RSCs are also compatible with "no server"-i.e. running your "server" code at build-time.
> To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX)
You say it's worse UX but that does not ring true to my experience, nor does it really make sense as RSCs are additive, not prescriptive. The DX has some downsides because it requires a more complex model to understand and adds overhead to bundling and development, but it gives you back DX gains as well. It does not lead to worse UX unless you explicitly hold it wrong (true of any web technology).
I like RSCs because they unlock UX and DX (genuinely) not possible before. I have nothing to gain from holding this opinion, I'm busy building my business and various webapps.
It's worth noting that RSCs are an entire architecture, not just server components. They are server components, client components, boundary serialization and typing, server actions, suspense, and more. And these play very nicely with the newer async client features like transitions, useOptimistic, activity, and so on.
> Meanwhile, sync engines seem to actually solve these problems
Sync engines solve a different set of problems and come with their own nits and complexities. To say they avoid complexity is shallow because syncing is inherently complex and anyone who's worked with them has experienced those pains, modern engines or not. The newer react features for async client work help to solve many of the UX problems relating to scheduling rendering and coordinating transitions.
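A toy example of where that inherent complexity comes from (a hand-rolled last-write-wins merge, not any real engine's API):

```typescript
// Two replicas edit the same record while offline; on reconnect, the
// engine must pick a winner. Last-write-wins by timestamp is the
// simplest policy, and it silently drops one user's edit, which is
// exactly the kind of trade-off a sync engine has to make somewhere.
type Versioned = { value: string; updatedAt: number };

function merge(a: Versioned, b: Versioned): Versioned {
  return a.updatedAt >= b.updatedAt ? a : b;
}

const clientA = { value: "draft from A", updatedAt: 100 };
const clientB = { value: "draft from B", updatedAt: 120 };
const merged = merge(clientA, clientB); // B wins; A's edit is lost
```

Real engines use richer strategies (CRDTs, server-authoritative rebasing), but every strategy is a choice about whose edit survives; that choice can be hidden, not avoided.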
I'm familiar with your work and I really respect what you've built. I notice you use zero (sync engine), but I could go ahead and point to this zero example as something that has some poor UX that could be solved with the new client features like transitions: https://ztunes.rocicorp.dev
These are not RSC-exclusive features, but they show that sync engines don't solve all the UX problems you claim they do without coordinating work at the framework level. Happy to connect and walk you through what a better UX for this functionality would look like.
People bemoan the lack of native development, but the consuming public (and the devs trying to serve them) really just want to be able to do things consistently across phones, laptops, and other computing devices. The web is the most consistent thing available, and the most battle-tested.
The difficulty is finding designers who understand web fundamentals.