I don't want to speak to this example too deeply because I don't know it (I see they're doing all sorts of stuff with audio so maybe they do need WebAssembly) but modern JavaScript VMs are very, very fast. 99% of webapps are absolutely fine using JavaScript.
The far more important part of making snappy UIs is the Web Worker aspect. To my mind it's one of the key reasons native apps feel so much better than web ones: it's trivial to move an operation off the main thread, run an expensive calculation, and then trivially bring the result back to the main thread. Unfortunately the API for doing so on the web is extremely clunky, passing messages back and forth between a page context and a worker context. I'd recommend anyone thinking about this stuff take a look at Comlink:
https://www.npmjs.com/package/comlink
it lets you wrap a lot of this complication up in simple promises, though you still have to think hard about what code lives inside a worker and what does not. In my ideal world all the code would live in the same place and we'd be freely exchanging objects like you can in Swift. "Don't do it on the main thread" feels like a mantra taught to every new native developer as early as possible; the same discipline simply doesn't exist on the web. But neither do the required higher-level language features/APIs.
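The promise-wrapping idea is worth seeing concretely. This is not Comlink's actual implementation, just a toy sketch of the pattern it automates (all names here are made up, and the fake in-process "worker" stands in for a real Worker so the example is self-contained):

```javascript
// Wrap any postMessage/onmessage endpoint so calls become awaitable promises.
function wrapEndpoint(endpoint) {
  let nextId = 0;
  const pending = new Map();
  endpoint.onmessage = (e) => {
    const { id, result } = e.data;
    pending.get(id)(result); // resolve the matching call
    pending.delete(id);
  };
  return (method, ...args) =>
    new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      endpoint.postMessage({ id, method, args });
    });
}

// Fake "worker" so this runs anywhere; in a browser you'd pass the Worker itself.
const fakeWorker = {
  postMessage({ id, method, args }) {
    const handlers = { square: (n) => n * n };
    // reply asynchronously, as a real worker would
    setTimeout(() => this.onmessage({ data: { id, result: handlers[method](...args) } }), 0);
  },
};

const call = wrapEndpoint(fakeWorker);
call("square", 7).then((n) => console.log(n)); // 49
```

Comlink does essentially this, plus proxies, transfer handlers and error propagation, which is why I'd reach for it rather than hand-rolling.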
I’d also caution that using Web Workers isn’t always such an obvious win either (and the same applies to server-side threading, e.g. on Node). There’s significant runtime overhead in spinning up a worker, and in every communication between threads—enough to negate their benefit for many use cases. Both WASM and workers have roughly the same perf downsides, and both should generally be justified by real measurement of their impact.
AssemblyScript and Rust/WASM implementations of these relatively simple but computationally heavy algorithms didn’t yield any meaningful improvement over their JS counterparts. In the end, moving computations that took 10ms or more (some up to 700ms) off the main thread and using tooling to simplify the Web Worker API was definitely more gainful, simpler, and better for code hygiene, and in most cases as fast or faster than the WASM implementations.
The Figma team would say differently. For 99% of CRUD app use cases you are absolutely correct. But WASM has enabled functionality on the web we could have only dreamed of 5 years ago.
> WASM has enabled functionality on the web we could have only dreamed of 5 years ago
Like what? I ask the question genuinely. WebAssembly makes it dramatically easier to do a lot of performance-sensitive things but to my mind it doesn't actually enable a whole lot of new functionality. I don't mean to write off making things easier, it's a huge deal. But if anyone asserts that we absolutely must use WebAssembly for Project X I'd really want to dive into exactly why. "It's fast" is not a good enough answer.
It's a bit like the Slack of graphic software.
The key differentiation, as the parent says, is using Web Workers to make sure you're not doing work on the main thread.
Like what?
This is one of the advantages to using WebAssembly: you can use a language like Rust that makes multithreaded code easier to write and faster to run.
Out of the box with JavaScript if you want to share an object with a worker you need to send a message (which internally serializes / deserializes the object) or manually manage serializing / deserializing bytes to a SharedArrayBuffer.
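The SharedArrayBuffer route avoids the clone entirely because both sides get views onto the same memory. A minimal sketch (in real code the second view would be created inside the worker after posting the SharedArrayBuffer to it once; posting a SAB shares it rather than copying):

```javascript
// One shared allocation; two typed-array views onto the same bytes.
const shared = new SharedArrayBuffer(8);
const mainView = new Int32Array(shared);   // view on the main thread
const workerView = new Int32Array(shared); // imagine this one lives in the worker

Atomics.store(mainView, 0, 42);           // write on one side...
console.log(Atomics.load(workerView, 0)); // 42, visible on the other side, no copy
```

The catch is that you're now doing manual byte-level layout and synchronization with Atomics, which is exactly the kind of thing Rust's type system handles for you.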
With Rust/Wasm + Web Workers you can use Rust's synchronization primitives and just pass a pointer to the worker. You pay no cost for serialization / deserialization.
> if anyone tells you they need to use WebAssembly to make the UI snappy I'd advise you interrogate that assertion thoroughly.
As you pointed out the JavaScript VM is incredibly optimized but where Wasm shines is consistency. JavaScript requires some care to not produce garbage every frame, otherwise you may unpredictably trigger a lengthy garbage collection. With most Wasm languages you can easily avoid allocating any JavaScript garbage.
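The "don't produce garbage every frame" discipline mostly comes down to reusing preallocated objects in hot paths instead of creating fresh ones. A hypothetical sketch of the two styles:

```javascript
// Allocated once, outside the hot loop.
const scratch = { x: 0, y: 0 };

// Allocates a new object every frame: each call is future GC work.
function updateAllocating(t) {
  return { x: Math.cos(t), y: Math.sin(t) };
}

// Mutates the preallocated object: zero allocations per frame.
function updateReusing(t) {
  scratch.x = Math.cos(t);
  scratch.y = Math.sin(t);
  return scratch;
}
```

It's entirely doable in JS, but nothing in the language stops a teammate (or a dependency) from reintroducing per-frame allocations; in most Wasm source languages the no-GC property falls out of the language itself.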
The lower-level Wasm languages also tend to be easier to reason about performance. When assessing Rust code performance you typically want to look for Big-O and excessive memory allocations. If you do find a performance bottleneck typically you know where to try to improve. With JavaScript you'll sometimes hit situations where you fall off the VM's golden path, or where one JavaScript VM performs fine and another does not.
as a user, much less than 99% of apps are "absolutely fine" to use.
I just finished a UI that requires a few HTTP requests to the API and a bit of dynamic behavior (but not a ton), and it's done inline with ES6, with no transpilers or minifiers, using ArrowJS. It does the job, and it rips.
The frontend does all the rendering for the editor, which we want to stay within the frame budget. That's why we offload all data synchronization work (applying CRDT deltas, encrypting/decrypting data to/from websockets, IndexedDB caching, search, parsing JSON and so on) to the "backend" thread.
I think for apps like this, splitting the UI and data part up (kind of like a frontend and a backend in a classic client/server web app) is very useful to prevent blocking the main thread. In our case this "backend" is actually a SharedWorker, which has the added benefit that it's very easy to keep all state in sync with multiple open tabs/windows, and we just multiplex all incoming websocket events to all connected "frontends".
I agree passing messages is a bit clunky, but we’ve taken a similar approach to Comlink, so we can simply "await" a function call on the backend from the frontend. Beyond setting that up once (or just using something like Comlink), it's not much work. One exception: I found debugging on Chrome a bit clunky, since you have to open a separate inspector for shared workers, whereas Firefox (I believe) shows all SharedWorker console output in the main thread's console. Plus modern browsers give us all kinds of other neat features these days, like the SharedWorker itself and zero-copy communication using Transferable objects.
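The Transferable part is easy to demo. Transferring an ArrayBuffer moves ownership to the other side instead of structured-cloning it, and you can tell no copy happened because the sender's view is detached afterwards. A sketch (using a MessageChannel so it's self-contained; the same transfer-list argument works with `worker.postMessage`):

```javascript
const { port1, port2 } = new MessageChannel();
const frame = new Uint8Array(1024 * 1024); // pretend: a decoded audio frame

// The second argument is the transfer list: move the buffer, don't clone it.
port1.postMessage(frame, [frame.buffer]);

console.log(frame.byteLength); // 0: the sender no longer owns the memory

port1.close();
port2.close();
```

For large payloads (audio buffers, image data, CRDT snapshots) this is the difference between a pointer handoff and a megabytes-sized copy on every message.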
Get prepared to be blown away by Makepad [0]. I have no affiliation with them, but just watched their most recent conference presentation [1]. The slides were made with Makepad itself and included, embedded, a full-blown IDE, a synthesizer app, a Mandelbrot set to zoom into endlessly, and more. All running at 120fps. The presentation is for the most part live-coding with this setup.
What they want to do is bring coders and designers closer together, and while some code is in Rust they developed a DSL for the GUI parts that is close to how Figma works. These GUI's can run anywhere.
And I couldn't help thinking "Why would people have complicated stacks to create Web 2.0 apps for the Google Web, when they have this?", in other words an opportunity to break out of the browser straitjacket.
Btw. WebAssembly/WebGL isn't the only way in which Makepad is available. It ran well in the browser for a time, but there were issues to solve there (addressed in the presentation). And tbh this isn't a real answer to your assertion. Greg Johnston, creator of Leptos, has made a video with performance comparisons [2].
Edit: Adding a link to the synthesizer app I just found [3].
[0] https://github.com/makepad/makepad
[1] https://www.youtube.com/watch?v=rC4FCS-oMpg
[2] https://www.youtube.com/watch?v=4KtotxNAwME
[3] https://makepad.nl/makepad/examples/ironfish/src/index.html
Perhaps because they still believe in the promise of Web 1.0, where their app is a graceful enhancement over an initial server-rendered document, that can be easily worked with at a DOM level by any HTML scraper, easily baked to a PDF and printed, easily re-laid-out for better readability just by changing the browser font size, easily text-to-speech'ed (including ARIA roles, alt text, etc), easily re-styled with a user-agent stylesheet, easily intermediated by browser extensions, and so forth.
I've yet to see a WASM-driven web application that's any less opaque to these technologies than a Flash or ActiveX applet would be.
It's nearly ten years old now but I remember being absolutely blown away by React Canvas, a UI toolkit that leveraged React but instead of rendering DOM nodes it rendered to a <canvas/> tag. Beautiful, 60fps stuff. All written in JS. Unfortunately all the demos seem to have disappeared since but here's a blog post about it:
https://engineering.flipboard.com/2015/02/mobile-web
Point is, both Makepad and React Canvas have something in common: they ditched the DOM. There are both advantages and disadvantages to doing so but the relevant point is that you don't need to use WebAssembly to do it.
https://www.youtube.com/watch?v=4KtotxNAwME
The benchmarks being displayed have leptos beating react in nearly every category.
There are three reasons (for the vast majority of apps) that a UI feels sluggish:
1. The network! Requesting data from a server is slow, by far the slowest aspect of any app. As a start, prefetch and cache, use a CDN, and try edge platforms that move data and compute closer to the user. Beyond that, explore Local First (http://localfirstweb.dev); for web apps it's the way we should be looking to build in future.
2. They are doing work on the UI thread that takes longer than 16ms. This is where Web Workers are perfect. The DX around them isn't great; as another comment suggested, Comlink helps, but there is a lot of opportunity here to build abstractions that help devs.
3. Excessive animations that delay user interaction - there is so much bad UX where animations have been added that only make things slower. Good animations have a purpose, showing where something came from or is going, and never get in the way.
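For point 2, even without a worker you can keep the main thread responsive by slicing long computations so each chunk stays under the frame budget. A hypothetical sketch (the function name and 8ms budget are made up; in a browser you might yield via requestAnimationFrame rather than setTimeout):

```javascript
// Map over items, yielding to the event loop whenever a slice runs long,
// so input handling and painting can interleave with the work.
async function mapInChunks(items, fn, budgetMs = 8) {
  const out = [];
  let sliceStart = performance.now();
  for (const item of items) {
    out.push(fn(item));
    if (performance.now() - sliceStart > budgetMs) {
      // give the main thread a chance to paint and handle input
      await new Promise((resolve) => setTimeout(resolve, 0));
      sliceStart = performance.now();
    }
  }
  return out;
}

mapInChunks([1, 2, 3], (x) => x * 2).then((out) => console.log(out)); // [2, 4, 6]
```

This doesn't make the work faster, it just stops it from blocking; for truly heavy work a worker is still the right tool.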
Finally, we are well into diminishing returns with front-end frameworks optimising the way they do templating and update the DOM. The key thing there now is DX; that is how you pick a framework. Benchmarks are almost always useless.
Good animations also help reduce perceived delays
Many websites will slowly slide a newsletter sign-up form up from the bottom. We already know it comes from the underworld, so the animation is absolutely pointless there.
But the thing I haven't found a solution for is the case where you want to use web workers inside a library that other people will be importing into their own project, and you don't know what bundler they'll use (or transpiler, minifier, etc). I can think of hairy ways to do it that involve pre-building the worker JS and storing that text in your library, piping it into a file blob at runtime, or the like. But does anyone know a clean way of handling this?
I just did that today, to be able to bundle the worker script with the client that controls it in a single file. It's convenient but feels hacky, and I wonder about its impact on page performance.
There's a similar trick for bundling a base64-encoded WASM binary with the host JS that controls it in a single file. That saves effort for the consumer of the script, so they don't need to bundle or copy the binary into their static assets folder, for the script to load.
I think the common (best?) practice is to let the consumer handle the static file (like worker script, WASM binary), and then for the client script to provide an option to set the URL path where the static file is served.
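For completeness, the Blob-URL trick mentioned above looks roughly like this (a hedged sketch: the worker source and the doubling handler are made up for illustration):

```javascript
// Ship the worker code as a string inside the library, then spawn it from a
// Blob URL at runtime so the consumer never has to bundle a separate file.
const workerSource = `
  self.onmessage = (e) => self.postMessage(e.data * 2);
`;
const blob = new Blob([workerSource], { type: "text/javascript" });
const url = URL.createObjectURL(blob);

// In a browser you would now do: const worker = new Worker(url);
console.log(url.startsWith("blob:")); // true
```

It works everywhere without bundler cooperation, but it defeats caching and CSP rules that forbid blob: workers, which is part of why letting the consumer host the file (and passing a URL option) tends to be the safer default.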
Layer 0: Strategically separate core logic while assuming as little about the environment as possible. Function Y generates something, function X handles the result somehow. Maybe there’s a postMessage somewhere between, or maybe not—you don’t care. Maybe Y is slow, but that doesn’t mean it must assume it runs in a worker. Maybe X serializes output in some way, but it doesn’t need to assume that DOM exists yet. However Y and X are wired up later is none of their concern.
Layer 0.5: Document intended or just practical ways to invoke those APIs. Y is slow, call it from a worker. X formats stuff, so if you’re in a browser you’ll want to hook it up to DOM somehow.
Layer 1: Provide glue functions to wire your core logic up in different environments. Worker message handlers? React components? These things could require more specific environments to be called in, and they would use Layer 0 APIs—but, crucially, your layer 0 won’t fail at its core task if there’s no DOM or postMessage. Maybe your user doesn’t want Y to run in a worker, or manages own web worker pool, etc.
Layer 2: Provide last-mile facilities and helpers. This outer layer is technically outside of your actual library implementation. Bundler configuration templates for esbuild? Webpack? Example projects? Template repositories? Single-file bundle that spawns a worker for simplest use cases or demos? Anything’s great here—though note that if you support too many options there’s a good chance some of them will become stale, which can hurt adoption, and you don’t want to spend too much time on this layer as it’s probably the least important and the most flaky as specs, environments, build tools and trends evolve. (That’s also the reason why commingling this stuff, with all of its runtime/environment concerns, and your actual library is probably a very bad idea. If your library always spawns a worker at runtime, someone may certainly curse.)
Such a design should maximise your library’s utility. Somebody doesn’t want Y to run in a worker for some crazy reason? They are always free to wire up core functions in whatever way they want. Another user has a complex project that manages own worker pool? They’ll probably eject after layer 1. Ensuring as much as possible is at lower layers, strategically separated, means you will have easier time iterating on higher layers to support different environment scenarios or bundlers, and you (or your users!) can add support for any new runtime configurations that appear in future without touching the core parts.
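A tiny sketch of what the lower layers might look like in practice (the `tokenize` core and `attachToEndpoint` glue are hypothetical names, just illustrating the separation):

```javascript
// Layer 0: core logic. Pure function, assumes nothing about workers or the DOM.
function tokenize(text) {
  return text.split(/\s+/).filter(Boolean);
}

// Layer 1: optional glue adapting the core to a postMessage-style environment.
// Consumers who manage their own worker pool can ignore this and call
// tokenize() directly from wherever they like.
function attachToEndpoint(endpoint) {
  endpoint.onmessage = (e) => {
    endpoint.postMessage(tokenize(e.data));
  };
}

console.log(tokenize("don't  assume the   environment"));
```

Because Layer 0 never touches `self`, `postMessage`, or `document`, the same function runs on the main thread, in a worker, or in Node tests unchanged; only the thin glue varies per environment.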
JS can be fast enough for the large majority of use cases: if I were to add WebAssembly to a project I'd need a good reason to do so.
JS can be faster in enough cases that it's worthwhile to test.
WASM is mostly being used for code that has already been written and is now being integrated into a web site. I wouldn't suggest jumping right to WASM simply for performance.
How much faster?
I have to ask because people frequently make claims about performance based upon unmeasured assumptions that are wrong more often than not. Worse, many of these imagined performance claims tend to be off by one or more orders of magnitude.
Of course for some tasks you'll still need more than that, which very well may have been true for the OP, but benchmarking is good etc
I wrote a blog about it recently.
https://logankeenan.com/posts/client-side-server-with-rust-a...
Is that true? Last I heard wasm was significantly slower than vanilla JavaScript in synthetic benchmarks.
Has that changed recently?