Either way, I'm receptive to the "LiveView" model, but until such time as a server can deliver both HTML and server-driven native UI for mobile interchangeably, I prefer the RPC approach.
I don't want to develop:
1. Both an API for mobile and LiveView for web.
OR
2. My own server-driven UI paradigm for mobile.
I find maintaining one architectural pattern simpler. I do want to support users with the best technology fit, but I don't see why I have to make this tradeoff. The platform holders are (and always have been) jerking us around.
I want to see a LiveView that can deliver both HTML and equivalent native UI markup. This is needed to sell the vision end to end - the world is not just web.
I feel like this would enable a more sensible choice:
Offline (first)? Use RPC with sync.
Online-only? Use the LiveView paradigm and it'll work with native or web.
I really think this is such a key point and is the main blocker to this sort of architecture for cases where you need to support all the platforms.
That said, there are a lot of web applications that just need some form of UI and don't need full multi-platform support, and LiveView-type systems involve many fewer pieces to get going with. I'm thinking here more about company internal tooling for whatever purpose, rather than web services provided to customers, which are more likely to need mobile apps.
Also, let's say you've already created your backend in Elixir (e.g. using Phoenix for GraphQL or JSON), and have built your mobile app against it. To implement your web frontend, is it easier and better to roll a whole JS app from scratch or just interface with the APIs that already exist locally to produce a LiveView app? Obviously there are some app characteristics that dictate this, but for a lot of things LiveView is still going to be easier. But then, maybe I'm biased because I dislike frontend programming!
Websockets don't really make sense as a broad bidirectional communication tool; they're overly complicated for situations where you can just open a plain TCP socket to communicate over, and making them a requirement for clients that have no need of them is a poor ask. So as soon as you're supporting a client other than a browser, while still supporting a browser, you're already likely going to want to support two APIs.
Good design will allow you to share your model for bidirectional communication regardless of the connection type, and you can then bake in any updating JS into the Websocket connection via LiveView or similar, while exposing a more normalized socket endpoint for other servers, mobile clients, etc. Even if you end up having to support multiple connection assumptions at a later point (i.e., a Websocket connection for your web client that contains JS, a Websocket connection for a web client being developed by a different team that should just return data, and a plain socket connection for non-browser based connections), the lift should be pretty small.
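As a rough illustration (purely hypothetical names, not any particular framework's API), sharing the message model across transports might look like this in JavaScript:

```javascript
// Hypothetical sketch: one message model, multiple transports.
// The envelope and handlers are transport-agnostic; only the
// delivery layer (WebSocket framing vs. plain TCP) differs.

// Shared model: every message is { type, payload }.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

function decodeMessage(raw) {
  const { type, payload } = JSON.parse(raw);
  return { type, payload };
}

// Handlers are registered once and reused by every endpoint.
const handlers = new Map();
handlers.set("item:updated", (payload) => `patch ${payload.id}`);

function dispatch(raw) {
  const { type, payload } = decodeMessage(raw);
  const handler = handlers.get(type);
  if (!handler) throw new Error(`unknown message type: ${type}`);
  return handler(payload);
}

// A WebSocket endpoint and a plain-socket endpoint would both call
// dispatch() on incoming frames; the model itself never changes.
```

The point is that the "lift" stays small because each new connection type only adds a thin framing layer around the same dispatch logic.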
Now, I haven't dug into Apple iOS dev yet. I know they have some artificial limitations on what an app can receive, but that's an Apple limitation, not a technical one. I'm pretty sure Apple allows an iOS app to receive data over the internet and render that data within the app? I just need to figure out what format it can be sent in for the app to render it.
Is that really that hard a problem?
In the other/current model you're building three components in total: the web frontend, the mobile frontend, and the API. In the LiveView model you're again building three components in total: LiveView for the web front/backend, the mobile frontend, and the API for the mobile backend.
If done properly, the LiveView backend can share a lot of code with the API backend. Moreover, an added advantage is that the Web codebase and Mobile codebase can be different depending on the requirements of the two platforms, except where, as noted above, it makes sense to abstract it out and share. So, IMvHO better software engineering in general.
Let's say I have absolutely zero business purpose for an API - like zero. Neither general purpose nor single purpose. I now have to make one to serve a native/mobile app. I have to pick a language, design the API surface, pick an RPC framework, REST or GraphQL, adopt a different testing strategy for it, etc.
When all I really wanted to say was: "Hey server... can you serve a slightly different template connected to all the logic I've already written that a mobile app can understand as long as the app contains a component library?"
As for the opposite model where you are API first, at least you have the language, API contract / technology, testing strategy in place. And it will work regardless of what your clients are! The clients then implement their own UI, ideally on top of some shared, headless client (maybe a native module?) and you have maximized code re-use. Only the UI tech is different (and that's a maybe, since you could use flutter or RN), and is tailored to each device nicely.
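To make the "shared, headless client" idea concrete, here's a minimal JavaScript sketch (the names and the cart domain are entirely made up): all state and API calls live in one module, and each platform's UI layers on top of it.

```javascript
// Hypothetical headless client: holds app state and talks to the API.
// Web and native builds inject their own transport, so the same module
// is reusable everywhere; only the UI on top differs per platform.

function createHeadlessClient(fetchImpl) {
  let cart = [];
  return {
    addItem(item) {
      cart = [...cart, item];
      return cart.length;
    },
    itemCount() {
      return cart.length;
    },
    // The transport is injected, so a web build can pass window.fetch
    // and a native build can pass its own HTTP implementation.
    checkout() {
      return fetchImpl("/api/checkout", {
        method: "POST",
        body: JSON.stringify(cart),
      });
    },
  };
}
```

Each UI (React, SwiftUI, whatever) then renders from `itemCount()` and wires buttons to `addItem`/`checkout`, so the business logic is written exactly once.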
So I do think the second is VERY desirable because it's single paradigm and you can go pretty far to reduce duplication of effort, almost entirely.
There is no such possibility in the server-driven paradigm, and I'd like to see it because it would give me all the re-usability benefits of the second approach, with a huge advantage of making the clients leaner.
It's a personal thing, but I personally don't want to smash my web-app backend into the same service as a general purpose API. It requires a lot of discipline to keep the concepts separate, and it fails more often than I've seen it work. I do appreciate your comment here of "better engineering" because I so very much wish for that. I wish that the average Django/Rails/Phoenix/whatever framework would not turn into a swampy piece of junk when you keep both responsibilities in the same codebase, but they often do.
The options we have are pretty decent - I just think something like LiveView could be better. Its only promise is an (admittedly slicker) take on SSR for the web. That doesn't move the needle far enough to be revolutionary to me, and doesn't solve a problem that most people have. The problem we have is that the server paradigm is way, way behind where it should be when it comes to serving different types of clients.
I have a working solution for something like this that is based on Chrome
https://github.com/mumba-org/mumba
Application developers publish native Swift applications, where Swift has full access to the WebKit/Blink API the same way JavaScript does, and even more with the patterns from the Chromium renderer (for instance, access to lifecycle events: OnDocumentLoaded, OnVisible, OnFirstPaint, etc.).
The offline-first part comes from the fact that every application first runs as a daemon which provides a gRPC API exposing its services to the world; the web-UI application process can also consume this same API from its daemon/process manager, or from other applications.
Note that the daemon process, which is always running, gives access to "push messages", so even if no UI app is running things can still work, like syncing into the DB, or even launching a new UI application given some event. This service process is also pinged back by the web-UI process for every event, so the daemon process can act when there's a need (when everything is loaded on the UI, do this...).
Also, about the article: note that with this solution you code in pure Swift (not WASM), just like the article points out for web applications that can be built without any JavaScript.
Other languages like Rust, C++, and Python can be added, given that the Swift applications talk to the runtime through a C API. (And I could use some help from other developers who also want this in other languages.)
If you ship your applications with this, you get a much better environment than Electron, using far fewer resources, since things are shared across every application. The cool part is that applications can consume one another's services (for instance a Mailer application), forming a network and depending on one another (the magic here is that they are all centrally managed by a core process that is always running).
You can start this way, but if you have a new requirement, you might reach a technological crossroad and ask yourself "why do I have to add a new paradigm? If the thing I'm already using could just do X, I'd save a shit load of time and/or money."
You can always add, yes. But why do I have to?
The world has treated web and native as separate for far too long. We're really just squabbling over UI toolkits, so why can't we come up with something that just says "Fuck it, we're supporting both native and web as first class in every server framework because that's how it should be. And we're also going to let you use any language in the browser because that's how it should be"
(Admittedly, browsers on iOS are gimped, but that's part of the Apple tax).
I'd like to see the technology work with HTML or server driven native UI, and let developers decide which UI toolkit is best for their app market.
Since it would theoretically support both, you could incrementally transition as well.
After building my own framework [1] last year, I realized just how disconnected JavaScript developers—yes, the "big boys" are especially off in CS la-la land—were from web development fundamentals. HTTP distilled down to its absolute essence is brilliant, but for some reason gets absolutely ignored in the JS world. When you couple that with websockets for incremental updates (similar to what Chris McCord is doing in Phoenix), it's insanely powerful. It can be reasonably lightweight, too, if you're only shipping the JS you need to render the page.
That rendering some HTML, CSS, and interactive JS in a browser has been turned into what it has is staggering. Though, not surprising, when you realize a lot of the momentum in JavaScript the last decade or so was perpetuated by venture capital (and the inevitable fast and cheap nature of that world) being pumped into inexperienced teams.
The funny thing is, I don't know that the resulting product is really much better.
MDN is richer, more exact, more complete. But being minimal is better when you're still unfamiliar with the basics.
TL;DR: MDN pairs well with W3Schools; imo both are fine as they are.
You might say virtual DOMs are exactly what you're talking about when you talk about the design choices of popular frameworks.
It fundamentally comes down to the idea of writing functional components. I declare exactly what I want my UI to look like as if render() is called every frame. But I don't care _how_ it gets there.
The complexity comes from taking this declarative UI and making it performant.
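As a toy illustration of that idea (nothing like a production virtual DOM, all names invented): render() describes the whole tree for the current state, and a diff pass computes the minimal set of patches to apply.

```javascript
// Toy virtual DOM: a vnode is { tag, props, children }.
// diff() compares two trees and returns a list of patch operations --
// this diffing step is where the real complexity (and perf work) lives.

function h(tag, props = {}, children = []) {
  return { tag, props, children };
}

function diff(oldNode, newNode, path = "root") {
  if (!oldNode) return [{ op: "create", path, node: newNode }];
  if (!newNode) return [{ op: "remove", path }];
  if (oldNode.tag !== newNode.tag) {
    return [{ op: "replace", path, node: newNode }];
  }
  const patches = [];
  if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
    patches.push({ op: "props", path, props: newNode.props });
  }
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(
      ...diff(oldNode.children[i], newNode.children[i], `${path}.${i}`)
    );
  }
  return patches;
}

// Declarative: describe the entire UI for each state...
const before = h("ul", {}, [h("li", { id: "a" })]);
const after = h("ul", {}, [h("li", { id: "a" }), h("li", { id: "b" })]);
// ...and let diff() work out the single "create" needed to get there.
```

The component author only ever writes the "describe the whole tree" part; everything below that line is the framework's problem.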
I don't doubt that 90% of web pages could get away with not using React or Vue or Angular. But as someone who loathes writing complex applications with pure HTML and JavaScript, they do serve a purpose.
I've written web apps that are basically just one large render() function with pure JS/HTML and it gets the job done. But at some point you do need to optimise and then it all falls apart.
It can still work well enough when developers don’t go crazy with divs, and when they use shouldComponentUpdate. But nobody uses that stuff; and modern pages are bloated like crazy.
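For reference, the heart of a shouldComponentUpdate-style optimization is just a shallow comparison of props; a rough sketch (not React's actual implementation):

```javascript
// Shallow equality: same keys, identically-referenced values.
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => a[k] === b[k]);
}

// A component opts out of re-rendering when nothing it uses changed.
function shouldComponentUpdate(prevProps, nextProps) {
  return !shallowEqual(prevProps, nextProps);
}
```

Note the caveat that motivates a lot of real-world pain: two structurally identical but freshly allocated objects compare unequal, so naive prop construction defeats the optimization.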
For my money, the problem is cultural. There's a community and culture around frontend engineering now which seems to disregard performance entirely in favor of closing tickets as fast as possible. Lots of frontend devs I meet at conferences and meetups have no idea how the browser works, and no real desire to learn. The result is disasters like the new Reddit homepage, which needs more horsepower to render smoothly than AAA video games did a few years ago.
The GP is right. There’s an insane amount of performance being left on the table. I don’t think it’s a technical problem. It’s a cultural problem.
My focus for v1 is on developer ergonomics and solidifying an API (this will be frozen after v1 with the only changes being additions if absolutely necessary—a component you write today will look identical to one you write in 10 years).
All future major versions will be solely focused on performance and security (my way of saying: I haven't stress-tested renders, but for the time being it will work well for the majority of use cases).
Edit: the lack of mention of how I do it is intentional. The last ten years saw brilliant inventions turned into mush because all of the leading project devs got into an unacknowledged dick measuring contest with one another (i.e., passive-aggressive jockeying for dominance that was ultimately a distraction from building software that was usable).
Yep. Frontend frameworks get you a slick looking responsive GUI with not much effort (you are outsourcing most of the design work). This wows the average VC. The VC funds you rather than the team with an architecturally simple frontend, and the framework flywheel gains momentum.
And it will stay working as long as you never need to update any of the packages. And then the rapidly changing, highly co-dependent nature of the FE npm ecosystem will bite you in the ass unless you were really careful with your choices - which you weren't because giant megacorps said it was 'best practice'.
Can you please elaborate on what these areas of disconnect are? I looked at the Joystick documentation hoping to find more of your thoughts on this topic, but could not.
The core problem is the blurring of the lines between a framework and the underlying technologies of the web (HTML, CSS, and JavaScript). To me, it seemed like a lot of unnecessary complexity was being added just to achieve the common goal of rendering some HTML, styling it with CSS, and making it interactive with JavaScript (what I refer to in conversation as a "need to look smart" or "justify having a CS degree").
For example, introducing a new syntax/language like JSX (and the compiler/interpretation code) when plain HTML is simple and obvious. The drive to make everything happen on the client leading to vendor lock-in by third-party providers to fill the gaps of having a database (e.g., Firebase/Supabase). Making routing seem like some complex thing instead of a URL being typed into a browser, a server mapping it to some HTML, and then returning that HTML.
Where this really became clear to me was in the resulting developer experience. I've been teaching devs 1-on-1 since ~2015 and had a bird's eye view of what it was like using these tools across the spectrum of competency (beginner, intermediate, advanced). The one consistent theme was confusion about how X framework mapped back to HTML in the browser.
That confusion was handled in one of two ways: making messes in the codebase to solve a simple problem (e.g., lord help me with the nested routing in React) or giving up entirely. As an educator, the latter was especially bothersome because the transition from basic web dev to these JS frameworks was so jarring for some, they just didn't want to bother. That shouldn't happen.
---
I'd like to elaborate on these things a bit more. Can you send an email to ryan.glover@cheatcode.co so I can remember to follow up once I've organized the post I've hinted at above?
I'm not the OP, but something I have noticed is that the JavaScript wunderkinds seem to want to completely reinvent web browsers, but in JavaScript. They seem to not actually know HTML, CSS fundamentals, or even HTTP. So you end up with frameworks that reinvent everything a browser already gives you (for free), but entirely inside their JavaScript monstrosities.
I might just be grouchy and uncharitable but it's certainly the feeling I get looking at any of the big JavaScript frameworks. It's infuriating to see frameworks reinvent something like a button with a bunch of anchors, divs, and spans when HTML gives them a perfectly functional button with a bunch of built-in events including touch events.
I think you are probably ignoring the fact that we have basically converged back to a web monoculture again.
A lot of the current frameworks were born out of "Chrome is shitty this way. Firefox is shitty that way. Internet Explorer is shitty this other way. This framework elides over all that."
A tiny amount of glue in HTML attributes, a heap of partials that are your PHP files, and you're good to go.
What is old is new again I suppose: we used to do this with PHP and jQuery once upon a time too — though LiveView and similar are far nicer of course.
I’ve been working on some personal tools with nothing more than Deno and htmx. Works quite well for my needs!
I use it with lisp-like languages (Clojure, Janet). Those have html libraries that transform datastructures into html.
So it doesn't feel like I'm writing html, while I'm pretending not to use JavaScript. It's glorious.
In short: if you enter data into a form and htmx then push-navigates away, when you click the browser back button you get the DOM as it was originally delivered from the server, without any data the user might have entered into text boxes. This is a show-stopper problem, and we've been working on workarounds, though I'm not sure any of them is good. Basically we have resorted to plain old full page reloads with a client-side redirect to resolve this.
Checkout has the following URLs, one for each step:
* page.com/checkout (optionally with ?step=0)
* page.com/checkout?step=1
* page.com/checkout?step=2
* page.com/checkout?step=3
Now you need 2 things:

1) Make sure the server knows how to render all those pages independently (e.g. if the user does a hard refresh, or opens that URL directly without navigating to it through a link). Note: I believe this should be the default on any website if you respect the concept of a URL (whether client- or server-side rendered).
2) If there's a form input at each step, the server needs to store that info and be aware that the user has an incomplete checkout.
Now the user is at step=2 and presses the back button, or clicks on the step=1 link. In that case, the server should know the information stored in the point no. 2 above, and return an HTML form with the data pre-filled based on the last state. E.g:
<input type="text" id="name" name="name" value="Foobar">
I think there are 3 main keys:

1. You need different URLs for each step.
2. Each URL should work both with HTMX (maybe using hx-push-url or the HX-Push header) and *without*. That is, any navigation to that page should also render the same HTML.
3. The server needs to be aware of the state. When a user requests page.com/checkout?step=2, the server should know if the HTML form requires pre-filled values.
This increases the complexity on the server a bit, but I believe it reduces the client-side complexity a lot more.
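A sketch of that server-side piece in plain JavaScript (the session store and field names are invented, and a real implementation would HTML-escape the interpolated values): each step renders its form pre-filled with whatever the server has already stored for that checkout.

```javascript
// Hypothetical server-side helper: render a checkout step, pre-filling
// fields from whatever the server has stored for this session so far.
// The same function serves an HTMX fragment and a full page load,
// so back-button navigation and hard refreshes both get correct HTML.

const sessions = new Map(); // sessionId -> accumulated form fields

function saveStep(sessionId, fields) {
  const state = sessions.get(sessionId) || {};
  sessions.set(sessionId, { ...state, ...fields });
}

function renderStep(sessionId, step) {
  const state = sessions.get(sessionId) || {};
  if (step === 1) {
    const name = state.name || "";
    return (
      `<form hx-post="/checkout?step=1">` +
      `<input type="text" id="name" name="name" value="${name}">` +
      `</form>`
    );
  }
  // ...other steps would follow the same pattern...
  return "";
}

// User fills in step 1, navigates forward, then presses Back:
saveStep("session-abc", { name: "Foobar" });
// renderStep("session-abc", 1) now pre-fills value="Foobar"
```

The route handler for `page.com/checkout?step=N` just calls `renderStep` regardless of how the request arrived, which is exactly what makes point 2 above cheap to satisfy.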
I used to do the same with PHP + Prototype.js back in 2006-2007 before jQuery existed, including pretty weird hacks for browsers without AJAX support (using JS to append server-generated <script></script> tags to the DOM).
Surely what is old is new again...
Oh I hope so. I marvel at React virtual DOM magic on daily basis so I'm sure it can be all done a lot more efficiently these days.
> Were you keeping stateful "widgets" on the Java side or hydrating from client state for interactions?
IIRC it was 90% on the Java side. The default polling rate was something along the lines of 200 ms. So if you clicked a checkbox on the front end, it would perform the animation for clicking it on the front end but it wasn't considered checked until the backend was informed of it and signed off on it. In Java code if you had a listener for onClick you would write a normal Java method and it would get invoked normally with access to all your server stuff. As you can imagine, Echo2 server session objects could get pretty heavy. Reminded me a bit of mainframes and terminal clients lol :)
https://www.youtube.com/watch?v=XhNv1ikZNLs&list=PLqj39LCvnO...
The programming model (in LiveView; I don't know about Blazor, Hotwire, or Livewire) really lets you get better performance by doing less. A part of me sarcastically thinks: wow, damn, deleting data is irreversible data transformation and therefore increases entropy; every stateless HTTP request is inching us closer to the heat death of the universe.
https://www.reddit.com/r/lisp/comments/s1itqi/the_common_lis...
https://www.reddit.com/r/lisp/comments/sd9wf1/clog_builder_c...
"I CLOGed my frontend"
"I Reacted my frontend"?
"I AngularJSed my frontend"?
"I VueJSed my frontend"?
I just don't see it.
The JavaScript is minimal (like managing selection in a text field, or handling copy/paste). Everything else is in Elixir. Tailwind is used for CSS.
Having the full client state available on the backend is really incredible. It is so much easier, no more ajax/graphql... You just have all your backend data "under your fingers".
I can only recommend giving it a try.
Also, if your app is offline-first, Phoenix channels are great for sync. It's not LiveView, but it's easier to use than AJAX calls.
Got the exact same feeling writing my latest app with Laravel Livewire. The ability to use backend state seamlessly on the frontend without explicitly passing state back and forth reduces complexity so much it's like writing 1 application instead of 2. I was able to build rich, fully interactive apps with minimal JS -- I'll never go back to React again.
- crash on error
- functions are just functions
The first one is counterintuitive to many programmers. Basically you use pattern matching aggressively, like this:
def full_name(%{first_name: <<a::utf8, _::binary>>, last_name: <<b::utf8, _::binary>>}) do
  "#{<<a::utf8>>} #{<<b::utf8>>}"
end
This code will only work if a map/struct with the correct non-null fields is passed. The pattern matches a UTF-8 string of at least one valid UTF-8 character. Any other argument will crash. Crashing in Elixir is the way to handle unexpected things. For example, you will do:
{:ok, myobject} = DB.get....
And if the DB fails (something unexpected), it will crash, the process will be restarted, the connected LiveView will be remounted, and it will start again from a clean state. Of course, errors that are expected should be handled.
The second thing about Elixir is that functions are decoupled from data. You can move them around easily. For example, the above function will work on any struct with the correct fields, and on a plain map. But you can pattern match on a specific struct too, which makes the code more like OOP, though that is done more rarely.
The key idea is to restrict code paths to something you expect, and always be explicit about what to expect.
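The same fail-fast posture can be approximated in other languages too; here's a rough JavaScript analogue of the Elixir example above (throwing on any unexpected shape rather than limping along):

```javascript
// Fail fast, mirroring the Elixir pattern match: accept only objects
// with non-empty string first/last names, and throw (crash) on
// anything else instead of silently producing garbage.

function initials(person) {
  const { first_name, last_name } = person;
  if (
    typeof first_name !== "string" || first_name.length === 0 ||
    typeof last_name !== "string" || last_name.length === 0
  ) {
    throw new TypeError("expected non-empty first_name and last_name");
  }
  return `${first_name[0]} ${last_name[0]}`;
}
```

The difference is that in Elixir the crash is cheap because a supervisor restarts the process from a clean state, whereas in JS you have to decide where the equivalent recovery boundary lives.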
//
The thing about rendering and processing things server-side and relying on very little, if any, JavaScript is that it makes it possible to send very small amounts of data to the client but still handle demanding tasks.
There's something to be said for making a website usable for even the most anemic of client hardware.
JSF: main idea is to abstract away boundary between server and client. Turns out -- you really want to know where that thing is running -- on the server or on the client. So it turned into a fight against that main idea of JSF.
GWT: main idea is to forget JavaScript/DOM and write pure Java. Turns out -- you really need to know JavaScript and your DOM to write GWT. So it turned into a fight against that main idea of GWT.
JSF was trying to make handy components for JSPs. And made them unusable in the process.
I admit I'm still too scarred by JSF to be keen to try out these new frameworks and find out if they've actually solved these problems, but I'd love to read an evaluation of the new server-side frameworks from someone who experienced the problems with the old ones.
I pray for WASM to eventually replace JS. NodeJS ecosystem with all the experimental features, job ads in update notes, shitload of dependencies, one-function packages, legacy JS is a special kind of pain that I avoid whenever I can.
TypeScript is not native. Is it then also a BE lang?
What is happening here is a big rift in the programmer community between the "I'm productive in it so it is great" camp and the "I prefer to use strong typing and proofs to ensure it does not break at runtime" camp.
And the second group has managed to compile more and more of its languages to JS and WASM.
Taking "I'm productive in it so it is great" to an extreme, we end up seeing precisely the kind of domain crossover TFA hints at. Somehow, since Rust is cool, it should be the future of web / in-browser development[1]. Somehow, since JS is popular and easy to use, it should be on the James Webb Space Telescope[2]. Somehow, since browsers are well accepted, node.js is a reasonable benchmark for embedded systems performance[3].
GTFO with that. That is how we get language bloat and the many overlapping frameworks that creep into the language and make it unrecognizable. I'm looking at you, Tokio and D3.
1. https://leerob.io/blog/rust
2. https://old.reddit.com/r/programming/comments/bglvey/the_9bn...
NodeJS relies on V8, which is perhaps the world's most highly optimised interpreter (/JIT compiler/runner), and a spectacular piece of software. The v8.dev blog alone is fantastic reading for anyone who works on compilers, as I do.
I don't like the 'dev' tendency in the programming community, to just hack stuff together and say "well, it works!", but I'm also not impressed by people trashing any high-level languages just because it's a meme they heard.
V8 is phenomenally performant and is a hard benchmark to beat for anyone and anything, even at the compiled end of the spectrum. (Yes, it's beaten by C/C++/Rust/Go, but it's not to be sneered at, at all.) I'm struggling to equal it with the highly optimised Python JIT-assisted interpreter I'm writing for my work - even after relaxing the ABI and writing sophisticated instruction-set-specific optimisations. (And I'm writing it in Rust, just to score at least one point off the cargo-cult bingo card.)
But why so many frameworks for the exact same output? It usually comes down to code management and making the code writing closer to the mental models of the problem they intend to solve.
I find it very unproductive to try to force JS to fit every domain possible, because there are already very mature languages that were designed with those domains in mind. That said, JavaScript has the advantage of being able to run everywhere, which makes it uniquely easy to actually learn and to run code written in JS.
I don't have a conclusion for this. I know JS, and it feels great to be able to run it everywhere; especially for quick-and-dirty solutions it's almost as nice as working with PHP. However, it easily frustrates me when I try to do something better than quick and dirty and start appreciating the domain-specific toolsets. I guess the moral of the story is that JS is great, but let's not try to solve every problem with it.
I know Rust, but I haven't used async apart from their tutorial, so I'm really curious about your thoughts on it.
So it's not a rift as much as the fact that one camp is just plain wrong.
The side that insists on using strongly typed languages for a weekend throwaway project?
Or the side that insists on using dynamic languages for 2000 file projects?
I always see "rifts" as a lack of perspective. One solution is obviously the only best way because they only see their narrow sets of use cases.
Every single bug that has ever existed in static languages passed the type-checker. There are benefits in types to be sure but cornering the market on "growing and lasting" is definitely not one of them. If anything I'd say types let you refactor quicker - dynamic languages let you launch faster.
I should also note that I don't mean dependently typed languages, with which I haven't actually shipped anything professionally, but I'm extremely curious about them.
We say they help us with solving a complex problem but I'd say they are the ones that make the problem more complicated than it needs to be.
Yes, they have their uses, but those are few and far between. The rest is just devs figuring out how to do what they want while keeping the compiler happy with whatever IBaseAbstract virtual template method thingamabob the compiler needs.
That doesn't mean JavaScript will go away, but it does mean it now has to compete on merit rather than relying on its status as the only thing you can actually use in a browser. Many projects have already shifted to using TypeScript, for example. And from there, moving to other transpiled languages is not that big of a leap. We use Kotlin/JS, for example. It's great. Other people use ClojureScript, Vue, Elm, or other languages that basically aren't JavaScript but happen to transpile to it.
Long term, WASM is a more efficient compiler target. Smaller, faster, easier to deal with for compiler builders, etc. The only remaining reason to target browser JS with your compiler is that it still has better bindings to the browser APIs. This is something that is being worked on, and of course it is more than offset by being able to compile existing libraries to WASM.
WASM has a few other things as well that are still being worked on (threading, memory management/garbage collections, etc.). But long term, anything that javascript can do in a browser would be something you could do from a WASM compiled program as well. Once that works well enough, a lot more languages will start targeting browser applications just because they can. And a lot of developers that don't necessarily care about downgrading to Javascript might have a go at doing some applications that run in a browser. I predict a little renaissance in frameworks and tooling for this.
And of course with WebGL and Canvas, using the DOM and CSS is entirely optional. For example Figma has a nice UI that looks like it would be hard to match with anything DOM based. They have a bit of react on the side for the bits that don't really matter but most of the app is C++. People also run games, virtual machines, etc. in browsers. There might still be valid reasons to use DOM + CSS like we have been for the last two decades. But it won't be the only choice for doing UIs. And being limited by it like we have been is increasingly no longer necessary either.
you're a decade late to this observation. CoffeeScript came out around 2009. The world has largely been transpiling to ES5 for at least eight years now.
The fact of the matter is, JavaScript post-ES6 is a solid language. Many of those people that were using ClojureScript/CoffeeScript/etc. have moved back to JS.
2007 JavaScript has almost nothing in common with 2022 JavaScript. I think this needs shouting from the moon, based on the comments I keep seeing on HN. We used to have to use Firebug. On IE we were totally blind. JavaScript was designed to crash silently while still allowing a web page to function. Today, JS is a critical part of most pages and has sophisticated tooling.
> There might still be valid reasons to use DOM + CSS like we have been for the last two decades.
If you care at all about accessibility and general browser standards, then you're stuck with the DOM. You have to reinvent the world each time you decide to make the canvas your entire UI. It's hard enough getting browser history and URLs working and creating the illusion of "pages" in a SPA. I can't imagine the pain in the ass this would be in a WASM blob.
> But it won't be the only choice for doing UIs.
It was never the only choice. Not even in 1996. Macromedia Flash, Java applets, etc. For niche applications, the canvas is great. But the majority of the world will continue using the DOM.
And more obfuscatable.
I think the opposite, the rift has been healing. It seemed that it was pretty big in the 2010's (Python/Ruby/Perl vs Java/C#/C++) but these days we have TypeScript, Rust, Kotlin; Java and C# are getting better too (type inference), Python and Ruby are getting typechecking.
> these days we have TypeScript, Rust, Kotlin; Java and C# are getting better too (type inference), Python and Ruby are getting typechecking
everyone caves to the types.
i think the strongest holdout is probably the LISP/Clojure corner. they've had optional typing for a long time, but it is not used that much afaik
many have voiced their reasons for liking dynamic typing. i only have one: faster reload cycles while developing. the other pros of dynamic typing are all overshadowed by their inherent downsides.
It's talking about server-side rendering. While the title might be a little provocative, it's certainly talking about rendering UI (the front end) using whatever language you used on the back end, even if it was JS.
The second group is incentivized to do so, because they still have value to bring.
JavaScript 2022 has sucked all the air out of the room among dynamically dispatched, untyped PLs. Compared to Python, Ruby, and Lua, it has better startup time and throughput, and after seven years of pilfering every good idea any of those languages ever had* (and a few bad ones**), it is by now equally or more expressive by every metric. It's also the most popular PL in history, with a variety of maintained and opinionated libraries for every conceivable task.
Given that JavaScript is already the ultimate flavor of Objective-Blub, it makes no sense to write a JS-targeting transpiler for your favorite minor flavor of Blub. And since you're already targeting a browser, Lua's embeddability and Python's C FFI are no advantages. The only remaining language in this family with interesting features that JavaScript hasn't (and may never) co-opt is Racket, with its powerful hygienic macros.
To a lesser extent, this is also becoming true for the statically typed languages, where TypeScript*** offers a better developer experience than, and has made significant inroads against, the faster but similarly expressive SML/OCaml/F#/Bucklescript/Reason/ReScript family (to say nothing of Java!).
On the other hand, the camp of "I prefer to use strong typing and proofs to ensure it does not break at runtime" still has concrete advantages to bring to the world of FE dev, namely:
• Non-GC memory management strategies suitable for latency sensitive applications (C, C++, Rust)
• Static member lookup & method dispatch (C++, Rust)
• SIMD (Rust)
• Higher-kinded types (Haskell, PureScript)
* block scope, destructuring assignment, default parameters, safe navigation, modules, iterators, generators, async/await, template strings, regexp named matches and lookarounds, BigInts
** classes
*** While TypeScript is technically a compile-to-JS language, it is such a thin layer on top of JS that there is essentially no impedance mismatch between TS in an IDE and interactive debugging, which makes it unique within that category.
exactly.
This has not been my experience with C++ (segfaults etc.) or Java (runtime introspection and exceptions). In ye olde languages, compilation seems to be mostly (more than 50%) motivated by performance rather than correctness. Compiled languages have been doing steadily more compile-time work and improving ergonomics lately.
There are some programmers who don't really care what tools they use as long as they feel productive. They shy away from things like types because it feels like it slows them down.
Then there are programmers who care a lot about tools because it feels like the wrong tools hold them back and cause undue stress in the long-term.
Hot take: The first type are less mature programmers :)
Right. The group that came up with the concept of a REPL and live in Emacs doesn't care about tools.
* smart front end, dumb back end. This is the SPA model.
* dumb front end, smart back end. This is the old school model.
* smart front end, smart back end. This is the new school model.
If you can get away with the SPA model, by all means do it. LiveView is not going to challenge that. What it is doing is giving new life to the old school model, so you can avoid the complexity of having both a smart front end and a smart back end for a large percentage of applications. A smart/smart system is an asymmetric distributed system, a beast to handle for gurus and novices and everyone in between.
Wt
It uses C++ to program a back-end application kind of like you would with Qt, and then it does all the required JavaScript/HTML to render your app in the browser. It is kind of like Qt for the web (painting with a very, very broad brush).
I have also tried wasm with various Rust frameworks (Seed and Yew).
However, for my latest project (triviarex.com) I ended up abandoning those in favor of React and JavaScript, albeit with a non-traditional architecture.
The downside of those frameworks for me was tooling, turnaround time, and integration. React has great tooling, and it is easy to do live coding. In addition, there are a lot of pre-built components for JavaScript frameworks and a ton of documentation.
While there can be live coding with the backend, I guess because of my background, I like to use strongly typed languages in the backend to help catch logical errors earlier, and that requires a compilation step.
So this is the architecture pattern that I am using.
My backend is written in Rust using Actix, and each session is represented by an actor. The React front end establishes a WebSocket connection to the actor and sends it commands, which are parsed by the backend using Serde JSON and handled using pattern matching. All the state and state transitions are handled on the backend, which then sends the frontend a JSON serialization of a Rust enum that describes what the state is and what data is needed to render. The front end is basically a series of if-else statements that match against the state and render it. Most logical processing lives on the backend, and most of the buttons simply send a WebSocket message to the backend.
For me, this is the best of both worlds. I get the strong typing and correctness of Rust on the backend to manage the complexities of state management, and I get the flexibility and live coding of JavaScript and React on the front end to quickly and interactively develop the UI. Many times I will be testing, see that the formatting or placement looks off, and just quickly change the HTML/CSS/JavaScript and have it instantly appear.
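To illustrate that pattern, here's a minimal TypeScript sketch of the front-end half: a discriminated union standing in for the serialized Rust enum, and a single render function that branches on its tag. The state names and fields here are hypothetical, not the actual triviarex.com protocol.

```typescript
// Hypothetical message shape mirroring a serde-tagged Rust enum, e.g.
//   #[derive(Serialize)] #[serde(tag = "state")] enum GameState { ... }
type GameState =
  | { state: "Lobby"; players: string[] }
  | { state: "Question"; prompt: string; choices: string[] }
  | { state: "Results"; scores: Record<string, number> };

// The front end just branches on the tag and renders; all state
// transitions happen on the server.
function render(msg: GameState): string {
  if (msg.state === "Lobby") {
    return `Waiting: ${msg.players.join(", ")}`;
  } else if (msg.state === "Question") {
    return `${msg.prompt} [${msg.choices.join(" / ")}]`;
  } else {
    return Object.entries(msg.scores)
      .map(([name, pts]) => `${name}: ${pts}`)
      .join("\n");
  }
}
```

In the real app the branches would return JSX and the messages would arrive over the WebSocket, but the shape is the same: the server owns the state machine, the client owns the pixels.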
It's amazing, really, how people feel comfortable condescending to others who explain how we did things before the current technologies existed as if we could never grasp how the new technologies improve things. Yes, this is nice, but it's evolutionary, not revolutionary.
What makes me most skeptical about Blazor, though, is that it's shifting a huge burden to the client just to make developers happy. Even if the runtime is stripped down and compiled to wasm, it seems kind of wild to send a whole runtime just so you can run C# code, especially when there are alternatives like Rust that require no runtime.
What's funnier is that it's been possible to write F# on the frontend for a while now, compiling down to JavaScript (like ClojureScript). It makes me wonder why this approach was never taken with C# too, even if there are pitfalls in compiling to JavaScript.
Assumption or knowledge?
How do you know this?
I honestly don't see this happening. Beyond the open source arguments, Blazor's server-side mode is uniquely compelling specifically for organizations like Microsoft, where there are thousands (tens of thousands?) of internal business systems that need some way for people to interact with them, but don't necessarily need to serve 4K video traffic to the entire planet.
As a hobbyist Python dev who doesn't want to deal with frontend bs more than absolutely needed, I found my perfect stack - fastapi, svelte and tailwindcss.
Less need for an API-layer and related contracts.
You can still use Tailwind CSS. Works quite nicely.
- 2x developers
- 2x CI/CD stacks
- 2x build tools
- 2x model classes
- 2x instrumentation code
I could keep going but it's a pretty significant improvement especially for small teams or solo developers.
I had started down the path of fastapi, svelte and tailwindcss, but when I figured out that htmx let me use server-rendered templates, getting rid of the api and the packaging toolchain felt more ergonomic.
And if you're doing server-rendered things instead of APIs, django has a few more batteries included. (But I really like fastapi for APIs.)
I am surprised that there are no browsers that support other languages. My ideal architecture is a browser where you can select your front-end language interpreter, as in Chromium + V8 + CPython + whatever front-end processor you might want (Brython[0] achieves this, but by transpiling to JavaScript).
What doesn't make sense to me is that JavaScript has genuinely been the only language for the front end, a monopoly for many years. Of course, there are other great languages like TypeScript, but these end up transpiled to JavaScript anyway, which to me feels like building your skyscraper on dunes. Not to hate on JavaScript, but it has grown too quirky for my tastes, and that's why I've stayed away from front-end development.
There are efforts to fix it with the new ECMA standards, but I don't feel it's going anywhere unless breaking changes are introduced to modernize the language. The fact that you have to "patch" your scripts with 'use strict' at the top of the file says a lot about how defensive the programming has to be.
WASM is a solution to this, except you're not supposed to write WASM yourself. I want web development to be more straightforward, like the old days, where you didn't have to "compile" or "package" anything; you just did your thing, and that worked.
--
In a period where most languages are experiencing a "rebirth" as a well thought out modern language, Dart feels like the "before" waiting for an "after".
I'd be deeply disappointed if we had progressed from Js to Dart, and it's why I'm not a fan of Flutter
(Java => Kotlin, Obj-C => Swift, JS => TS, Erlang => Elixir, etc.)
No matter which HTML, CSS, and JS you write, the fact is it cannot run without a web browser engine.
What would be great to have is choosing which programming language runs in your browser. It'd be great for me to move away from JS engines like V8 or SpiderMonkey, and be able to run CPython (directly, not transpiled to JS) as my interpreted language. The stack could be HTML, CSS and Python, for example. WASM came to solve part of this, fortunately.
We are missing an excellent opportunity for more powerful web technologies if we don't embrace WASM.
In other words, it's about creating a SPA-like experience with little or no custom-written JavaScript.
When webapp logic happens on the client side, that results in slow applications (n.b. for normal users on low-end hardware, not developers on high-end hardware) due to CPU costs.
When webapp logic happens on the server side, that results in slow applications (n.b. for users on unreliable residential, mobile cellular, or rural connections, not those living with fiber) due to network latency.
Doesn't LiveView (and related tech) simply make the slowness a stronger function of internet quality (as opposed to available processing power), instead of actually making the application less dependent on either of those factors for performance?
It's not CPU. It's bad software written by people who are poorly trained and have no leadership.
Do you really need 10 MB of JavaScript and 10 seconds of load time to dynamically put a few lines of text on the screen? Yes, absolutely you do, because people don't know how to do it efficiently. This is a people problem, not a technology problem. Hardware does not solve that problem; the actual technology is insanely fast. As a counterpoint, my personal app loads an OS GUI with state restoration using 2 MB of JS code (unminified) in about 150ms.
Of course we do.
We just choose not to bother, because delivering features efficiently is often more valuable than shaving a few seconds off download time, especially when it's typically a one-off that then gets cached.
Which is most applications on the web, particularly with these techniques.
This tool does play very nicely with Django's templating engine; you can just have HTMX re-render a particular template block on the server, and send down that updated block. The migration path is quite clean; you just wrap your "HTMX-updated" template block in a `hx-post` div.
Having not gone too deep on HTMX, I'm interested in folks' thoughts on where it's lacking vs. LiveView and Hotwire. One area I can see is performance; Elixir is going to be faster than Django if you're trying to handle high session counts over websockets. But the impression I get is that HTMX is a bit more lightweight, so I'm wondering if there are use cases that can't be met with it vs. LiveView.
Other Django libraries that haven't quite seen as much uptake:
We have https://github.com/edelvalle/reactor, and a port of Hotwire: https://github.com/hotwire-django but both of these don't seem to have much adoption (yet!).
Slinky React: https://slinky.dev
Laminar: https://laminar.dev
Diode: https://github.com/suzaku-io/diode
ZIO: https://zio.dev
- hosting a web sockets server is expensive
- bandwidth is cheap
- client cpu is free
So a lightweight, modern SPA wins in this case
This is the irony: Erlang/Elixir, despite being a functional language with limited and highly restricted access to stateful side effects, is really FANTASTIC at safely holding onto state and persisting it for the user, and at making that model digestible for the developer.
An optimum defined both in terms of what it enables users to do with software but also how easy it is for developers to deliver it. Since simple is better than complex it feels that any architecture that does not require two distinct ecosystems might have an advantage (all else being equal).
What's this concept called? Is it simply a matter of setting a priority level on a certain task, that way the scheduler can make sure it doesn't block?
Basically: the scheduler can interrupt an Erlang process at any time, instead of depending on threads to cooperate with the scheduler to see if they should stop. In Go, for example, goroutines will only check in with the scheduler at function calls, selects, and a few other points like that.
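To make the distinction concrete, here's a toy TypeScript model of the idea behind the BEAM's reduction counting: every process gets a fixed budget of work units per turn and is descheduled when the budget is spent, no matter what it's doing. This is an analogy, not BEAM's actual implementation (real slices are on the order of thousands of reductions), and all names here are made up.

```typescript
// Toy round-robin scheduler with a per-turn "reduction" budget,
// loosely analogous to how the BEAM deschedules a process after a
// fixed amount of work.
type Proc = Generator<void, void, void>;

// A "process" doing n units of work, one per yield (on the BEAM,
// roughly every function call counts as a reduction).
function* worker(n: number): Proc {
  for (let i = 0; i < n; i++) yield;
}

function runRoundRobin(named: { name: string; proc: Proc }[], budget: number): string[] {
  const trace: string[] = [];
  const queue = [...named];
  while (queue.length > 0) {
    const { name, proc } = queue.shift()!;
    let used = 0;
    let done = false;
    // Charge the process for each step; kick it off the scheduler
    // once the budget is exhausted, even mid-computation.
    while (used < budget) {
      used++;
      if (proc.next().done) { done = true; break; }
    }
    trace.push(`${name}:${used}`);
    if (!done) queue.push({ name, proc });
  }
  return trace;
}
```

Running `runRoundRobin([{ name: "A", proc: worker(4) }, { name: "B", proc: worker(2) }], 2)` interleaves A and B rather than letting A run to completion first, which is why a saturated BEAM node stays responsive.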
There's a really neat demo of a machine at 100% CPU that's still responsive because of Erlang's preemptive scheduling. There are a ton of other goodies in that video if you're interested.
It really shows off the Erlang BEAM VM.
My point is: all the achievements in web dev modularity and features have been present in Visual Basic, Delphi, Turbo C++, Java, and Qt since the early 2000s. Fully hardware-accelerated UIs to boot. Well, we decided to move it all into the browser sandbox and start from scratch. Tools like Qt Creator or the Visual Basic form editor got replaced by an expensive toolchain ranging from Adobe products to 1000 npm packages.
Also, as a bit of an aside, this is a bit of a moan.
Coming from someone who can write everything from scratch and doesn't need a framework: all of these things are horrible to work with (I've had to work with a bit of Blazor), and they make it incredibly difficult to actually find out what is going on (especially when stuff doesn't work as advertised) because they obfuscate everything.
Every job requires you to know some horrendous framework these days that has about 10 layers of Rube Goldberg madness in there for one reason or another. People would be surprised what can be achieved with `document.createElement()`, a few classes, and a pub/sub class.
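For anyone curious, that pub/sub class really can be this small; a rough TypeScript sketch (names are mine, not from any library):

```typescript
// Minimal pub/sub: enough to wire hand-rolled DOM components
// together without a framework.
type Handler = (payload: unknown) => void;

class PubSub {
  private topics = new Map<string, Set<Handler>>();

  // Register a handler; returns an unsubscribe function.
  subscribe(topic: string, fn: Handler): () => void {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set());
    this.topics.get(topic)!.add(fn);
    return () => this.topics.get(topic)?.delete(fn);
  }

  // Call every handler registered for the topic.
  publish(topic: string, payload?: unknown): void {
    this.topics.get(topic)?.forEach((fn) => fn(payload));
  }
}
```

In the browser, each component would `document.createElement()` its nodes, subscribe to the topics it cares about, and update itself when something publishes.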
Designed to support multiple back-ends and front-ends. The first back-end will be Nim and the first front-end will be Flutter.
Have used it in production and it works great. Author is also very responsive to bugfixes / feature requests.
The race between web C#/Python and local JavaScript could at least make becoming a "full stack" dev easier.
(Pun semi-intended)
Modern X applications render bitmaps which get shipped through the compositor and the X server to the graphics card driver. That is not a good model for the web, because it would mean the web server has to produce rendered bitmaps, specific to the fonts, window size, and display panel configuration, over the web. The browser cannot even select text or provide a search dialog. That is almost VNC.
But originally, X applications didn't render bitmaps but submitted drawing calls. Sun used to use Postscript for that. So using that approach, it would be possible for the browser to select text, to scroll, to copy text, to search text etc. A clear improvement. But resizing the browser window would still need a complete re-transmission from the server.
But if the web server sends static html, css, and a fixed JS library, which is used to replace parts of the dom, the browser can do a lot more locally. And still the whole application logic resides on the server.
So not much different than "modern" <canvas> based apps.
The benefit here is that we can get almost to a SPA without needing to write a whole lot of JavaScript for the frontend. It also helps that Elixir is built on top of Erlang, which lets it handle a lot of users on one server.
My feeling now is that in the greater scheme of things none of the technology choices really matter all that much, the main thing is if the software is written in a clear, documented and maintainable fashion. Unfortunately people will still throw that away and pointlessly rewrite it in a few years, so making the choice to contribute in that fashion is more a matter of professional pride than practical utility. Arguably it's better for your career to let go of that pride and just embrace the constant unnecessary rewrite process.
I do wonder if there are still some corners of software development which aren't like this. Perhaps in public universities or government?
Since it was "optimized for developer happiness" (https://news.learnenough.com/ruby-optimized-for-programmer-h...) it's more likely to be easier to understand, document and maintain.
A few decades of programming has taught me that cool technology inevitably turns out to be janky and annoying and half-finished.
Try aerospace, defense, critical infrastructure. They tend to move much slower and methodically about their stacks.
The conclusion I've come to is that we're making expensive sandcastles. I don't know when I start a project how big the sandcastle will need to get, or exactly what it will end up looking like. I also don't know when the tide is coming in. Some of them were pretty good, others were disasters, but they've all washed away now.
I do quite like building sandcastles though, so I don't worry too much about it.
Of course this is the case, so does that mean new things shouldn’t be made because it’s been pre-decided it would be a waste? I can’t say I agree with that, even though I agree with the premise that all PLs suck.
Maybe this stuff is just hard, and also maybe part of that is because as humans we’re pretty limited. So I think we just have to deal with it and try. I don’t love front end stuff, for example, and it kind of annoys me at times that everything gets reinvented over and over, to great fanfare no less. However at the end of the day, I realize it’s because we’re limited and one of those limitations is not being smart enough to do everything right the first time. That makes sense and isn’t like a knock on people, just the truth.
And these new frameworks and paradigms that keep getting made indicate to me that things are reaching some stability, but aren't there yet. Like, a lot of native stuff has been literally miles ahead for years, which is why you don't really see a ton of innovation in that space, aside from new frameworks to support new sensors. But you still see stuff getting made to support more privacy-aware features, which is a shame because it could have been that way from the beginning.
Anyway, I think it's fine for both to be true: PLs suck because tech is hard and people are limited, and we keep trying new stuff because it's just not there yet.
(Note: wrote this on my phone so maybe it rambles and is incoherent or has mistakes)
If you want to really fall down a rabbit hole, Alan Kay has been espousing similar themes for years. He of course was the driving force behind Smalltalk, which did things in the 70s that still seem hard to do. Look at the examples built by middle schoolers in [1] - super impressive stuff! I personally believe OOP got a really bad rap by the bastardizations of C++ and Java, and a principled reframing of it may be what takes us to the next level of programming.
[0] https://www.edge.org/conversation/jaron_lanier-why-gordian-s...
[1] https://www.dgsiegel.net/files/refs/Kay,%20Goldberg%20-%20Pe...
https://remix.run is a prime example. It's new, and better, and simpler.
1. software is a VERY new discipline. it's also one of the most malleable. I doubt we'll ever stop seeing churn in this space.
2. we don't reinvent how we build houses each year. you can safely ignore all these newfangled web things if you want.
3. churn is also domain specific. we've more or less stopped inventing drastically new APIs at the OS/system level.
4. software for UIs has never been great. we are still learning how to build them. most of the major players in UI software still exist: Windows, Java, macOS (Cocoa), Qt, GTK, Enlightenment, and HTML/CSS/JavaScript.
finally, software will continue to evolve as base layers add the ability for higher layers to do things differently. for example, as CPUs get more and more vector operations, that can dramatically change how we write code.
this churn is a positive not a negative on the industry.
It's a nice language all around on the backend (and I suppose mobile, but I wouldn't know) that's a pleasure to code in.