Better late than never I guess.
[1] https://github.com/WebAssembly/interface-types/commit/f8ba0d...
[2] https://wingolog.org/archives/2023/10/19/requiem-for-a-strin...
1. Support non-Web APIs
2. Support limited cross language interop
WebIDL is the union of JS and Web APIs, and while expressive, it has many concepts that conflict with those goals. Component interfaces take more of an intersection approach that isn't as expressive, but is much more portable.

I personally have always cared about DOM access, but the Wasm CG has been really busy with higher-priority things. Writing this post was sort of a way to say that at least some people haven't forgotten about this, and still plan on working on it.
I mean, surely it does not come as a surprise to anyone that either of these is a huge deal, let alone both. It seems clear that non-Web runtimes have had a huge influence on the development priorities of WebAssembly, which is not inherently a bad thing, but in this case it came at the expense of the actual Web.
> WebIDL is the union of JS and Web API's, and while expressive, has many concepts that conflict with those goals.
Yes, another part of the problem, unrelated to the WIT story, seems to have been the abandonment of the idea that <script> could be something other than JavaScript and that the APIs should try to accommodate that, which had endured for a good while based on pure idealism. That sure would have come in useful here when other languages became relevant again.
(Now with the amputation of XSLT as the final straw, it is truly difficult to feel any sort of idealism from the browser side, even if in reality some of the developers likely retain it. Thank you for caring and persisting in this instance.)
Apple perceives web-based applications as chipping away at their app store (which makes them money), and so they cripple their Safari browser and then force all mobile browsers on iOS to use their browser engine, no exceptions, so that developers are forced to make a native app where Apple can then charge the developers (and thus the users) for a cut of any sales made through the app.
It's one reason the DOJ started suing Apple, but I fear that may have been sidelined due to politics.
https://www.justice.gov/archives/opa/media/1344546/dl?inline
I'm building a new Wasm GC-based language and I'm trying to make binaries as small as possible to target use cases like a module-per-UI-component, and strings are the biggest hindrance to that. Both for the code size and the slow JS interop.
The difference in perf without glue is crazy. But not surprising at all. This is one of the things I almost always warn people about, because it's such a glaring foot gun when trying to do cool stuff with WASM.
The thing with components that might be addressed (maybe I missed it) is how we'd avoid introducing new complexity with them. Looking through the various examples of implementing them with different languages, I get a little spooked by how messy I can see this becoming. Given that these are early days and there's no clearly defined standard, I guess it's fair that things aren't tightened up yet.
The go example (https://component-model.bytecodealliance.org/language-suppor...) is kind of insane once you generate the files. For the consumer the experience should be better, but as a component developer, I'd hope the tooling and outputs were eventually far easier to reason about. And this is a happy path, without any kind of DOM glue or interaction with Web APIs. How complex will that get?
I suppose I could sum up the concern as shifting complexity rather than eliminating it.
Then there is the single array of memory that makes modern memory allocators not really work, resulting in every WASM compiler scrounging up something that mashes the assumptions of the source language into a single array.
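To make that "mashing" concrete, here is a hedged sketch of what a compiler effectively emits for a hypothetical source-language struct `{ i32 id; f64 score; }`: manual byte offsets, explicit padding and endianness, all flattened into the one array, because linear memory offers nothing but bytes.

```javascript
// Simulate the module's heap with a raw wasm Memory (one 64 KiB page).
const memory = new WebAssembly.Memory({ initial: 1 });
const view = new DataView(memory.buffer);

// Hypothetical layout: id at offset 0, 4 bytes of padding, score at offset 8
// (C-like alignment). The compiler, not the host, decides all of this.
function writeRecord(ptr, id, score) {
  view.setInt32(ptr + 0, id, true);      // little-endian, as wasm mandates
  view.setFloat64(ptr + 8, score, true); // ptr+4..ptr+7 is padding
}

function readRecord(ptr) {
  return {
    id: view.getInt32(ptr + 0, true),
    score: view.getFloat64(ptr + 8, true),
  };
}

writeRecord(16, 7, 0.5);
const rec = readRecord(16);
```

Note that if the memory grows, `memory.buffer` is replaced and the `DataView` goes stale, which is one more assumption real compilers have to paper over.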
Example subsets:
- (mainly textual) information sharing
- media sharing
- application sharing, with a small standard interface like WASI 2 or, better yet, one including some graphics
- complex application sharing with networking
Smaller subsets of the giant web API would make for a better security situation and most importantly make it feasible for small groups to build out "browser" alternatives for information sharing, media or application sharing.
This is likely to not be pursued though because the extreme size of the web API (and CSS etc.) is one of the main things that protects browser monopolies.
Even further, create a standard webassembly registry and maybe allow people to easily combine components without necessarily implementing full subsets.
Do webassembly components track all of their dependencies? Will they assume some giant monolithic API like the DOM will be available?
What you're doing is essentially creating a distributed operating system definition (which is what the web essentially is). It can be designed in such a way that people can create clients for it without implementing massive APIs themselves.
The chance of that ever materializing is most likely zero though.
What would really change perception is not just better benchmarks, but making the boring path easy: compile with the normal toolchain, import a Web API naturally, and not have to become a part-time binding engineer to build an ordinary web app.
https://component-model.bytecodealliance.org/
It includes high level concepts, practical code samples and more that introduce the really powerful parts of WebAssembly.
With regards to the JS ecosystem specifically there are 3 projects to know:
https://github.com/bytecodealliance/StarlingMonkey
https://github.com/bytecodealliance/ComponentizeJS
https://github.com/bytecodealliance/jco
The most mature toolchain right now is Rust, but there is good support for most things with LLVM underneath (C/C++ via clang). Support for Go, Python and other languages keeps getting better and better (TinyGo and big Go), and there's even more to come.
One of the goals of WebAssembly is to melt right into your local $TOOLCHAIN as a compilation target, and we are getting closer every week.
I won't bother trying to go through differences/how-this-is-not-that, but I'll say this: this time, it's slightly better, just like every time before.
I'd even go so far as to say this iteration is much better than what came before, and the speed of adoption by multiple language toolchains, platforms, operating systems, browsers proves that.
(to elaborate: WASM works just fine without the component model, it's not "the future of WebAssembly", just an option built on top of it, and of questionable value tbh)
It's perfectly good content, sir!
> (to elaborate: WASM works just fine without the component model, it's not "the future of WebAssembly", just an option built on top of it, and of questionable value tbh)
WebAssembly absolutely works fine without the component model, and I'd argue it's much better with the component model.
Here's my simple pitch.
world before component model:
> be me
> build a webassembly core module
> give it to someone
> they ask what imports it needs
> they ask how to run it
> they ask how to provide high level types to it
world after component model:
> be me
> write an IDL (WIT[0]) interface which specifies what the component should do
> write the webassembly component
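That "what imports does it need" dance from the before-world is visible even in the raw JS API. A hedged sketch with a tiny hand-encoded core module (the `env.log` import is made up for illustration): the consumer can introspect the import list, but nothing tells them what the import should actually do.

```javascript
// Hand-encoded bytes for:
//   (module
//     (import "env" "log" (func $log (param i32)))
//     (func (export "run") i32.const 7 call $log))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                   // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00,       // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03, 0x6c, 0x6f, 0x67,
  0x00, 0x00,                                                       // import "env" "log"
  0x03, 0x02, 0x01, 0x01,                                           // one func of type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,             // export "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x07, 0x10, 0x00, 0x0b,       // body: i32.const 7; call 0
]);

const mod = new WebAssembly.Module(bytes);

// The consumer can at least list what the module wants...
const needs = WebAssembly.Module.imports(mod);
// ...but only names and kinds, e.g. { module: "env", name: "log", kind: "function" }.

// Instantiating without satisfying the imports throws a LinkError;
// with them, it works:
let got;
const instance = new WebAssembly.Instance(mod, { env: { log: (x) => { got = x; } } });
instance.exports.run(); // calls env.log with 7
```

A WIT interface is, roughly, the missing piece of paper that says what `env.log` means, typed in terms richer than i32.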
WebAssembly gives us an incredible tool -- a new compilation target that is secure, performant and extensible. We could crudely liken this to RISC-V. In $CURRENT_YEAR it doesn't make sense to stop at the RISC-V layer and then let everyone create their own standards and chaos in 50 directions on what the abstractions one, two, or three steps up should be.

Emscripten carried the torch (and still does great work of course) in building this layer that people could build on top of, but it didn't go far enough. Tools like wasm-bindgen in Rust work great but lack cross-platform usage.
The Component Model is absolutely the future of WebAssembly. Maybe not the future of WebAssembly core, but if you want to be productive and do increasingly interesting things with WebAssembly, the Component Model is the standards-backed, community-driven, cross-platform, ambitious way to do things with WebAssembly.
To be incredibly blunt: other attempts to centralize community and effort, to build a shared thing just low-cost enough that everyone can build on top of it, have failed. Nothing against those efforts, but I just can't find any similar effort that others have standardized on in any meaningful way.
We're talking about a (partially already here) world where every popular programming language just outputs to WebAssembly natively and computers on every popular architecture/platform have an easy time running those binaries and libraries? If that's not the future, paint me a different one/show me movement in that direction -- genuinely would love to see what I'm missing!
And in all of this, the Component Model is optional -- if you don't like it, don't use it. If WebAssembly core works for you, you are absolutely free to build! wasm32-unknown-unknown is right there, waiting for you to target it (in Rust at least).
The 45% overhead reduction in the Dodrio experiment by skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.
If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?
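For context on how this works today without the component model: the JS glue typically keeps a handle table on the JS side, so linear-memory wasm never holds a DOM reference directly. The JS GC remains the sole owner of the DOM object; wasm only ever sees an integer index. A minimal sketch of that pattern (this is the status quo glue pattern, e.g. what wasm-bindgen-style tooling does, not the component model's design):

```javascript
// JS-side handle table: DOM objects parked here stay alive for the GC,
// while the wasm side holds only small integers.
const table = [];
const freeList = [];

function allocHandle(obj) {
  const idx = freeList.length ? freeList.pop() : table.length;
  table[idx] = obj;
  return idx; // this integer is what crosses into linear-memory wasm
}

function getHandle(idx) {
  return table[idx];
}

function dropHandle(idx) {
  table[idx] = undefined; // now the JS GC is free to collect the object
  freeList.push(idx);
}

// Stand-in for a DOM node (no real DOM in this sketch):
const h = allocHandle({ tagName: "DIV" });
const node = getHandle(h); // glue resolves the handle on each DOM call
dropHandle(h);             // wasm must explicitly release what it holds
```

The question of whether Wasm GC's externrefs can replace this explicit alloc/drop discipline for DOM references is exactly the kind of thing the integration has to pin down.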
Really excited to see this standardize so we can finally treat Wasm as a true first-class citizen.
I'm not exactly sure how this works when binding it to GC languages.
[1] https://component-model.bytecodealliance.org/design/wit.html...
Maybe they should have spent some time wondering how previous component models work, e.g. COM, CORBA, RMI, .NET Remoting,....
SendMessage itself is frustratingly dumb. Your options are excessively bit-fiddly or obnoxiously slow. I think for data you absolutely know you're sending over a port, there should be an arena allocator so you can do single-copy sends, versus whatever we have now (3 copies? Four?). It's enough to frustrate the use of worker threads for offloading things from the event loop. It's an IPC wall, not a WASM wall.
Instead of sending bytes you should transfer a page of memory, or several.
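For what it's worth, the platform does already have a zero-copy move in the structured-clone machinery, the same choice `postMessage`'s transfer list gives you. A small sketch of copy vs. transfer semantics:

```javascript
const buf = new ArrayBuffer(64 * 1024); // one wasm "page" worth of bytes
new Uint8Array(buf)[0] = 42;

// Copy: the source stays usable, but you paid for a 64 KiB memcpy.
const copy = structuredClone(buf);

// Transfer: a zero-copy move. The source buffer is detached afterwards
// (byteLength drops to 0), so sender and receiver can never race on it.
const moved = structuredClone(buf, { transfer: [buf] });

// With workers it's the same move: worker.postMessage(buf, [buf]).
// SharedArrayBuffer goes further still: shared, no move, no copy.
```

So "transfer a page of memory" exists for plain ArrayBuffers; the gap is that a `WebAssembly.Memory`'s buffer can't simply be detached out from under a running module.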
(there was also some more recent discussion in here: https://news.ycombinator.com/item?id=47295837)
E.g. it feels like a lot of over-engineering just to get 2x faster string marshalling, and this is only important for exactly one use case: for creating a 1:1 mapping of the DOM API to WASM. Most other web APIs are by far not as 'granular' and string heavy as the DOM.
E.g. if I mainly work with web APIs like WebGL2, WebGPU or WebAudio I seriously doubt that the component model approach will cause a 2x speedup, the time spent in the JS shim is already negligible compared to the time spent inside the API implementations, and I don't see how the component model can help with the actually serious problems (like WebGPU mapping GPU buffers into separate ArrayBuffer objects which need to be copied in and out of the WASM heap).
It would be nice to see some benchmarks for WebGL2 and WebGPU with tens-of-thousands of draw calls, I seriously doubt there will be any significant speedup.
And besides performance, I think there are developer experience improvements we could get with native wasm component support (problems 1-3). TBH, I think developer experience is one of the most important things to improve for wasm right now. It's just so hard to get started or integrate with existing code. Once you've learned the tricks, you're fine. But we really shouldn't be requiring everyone to become an expert to benefit from wasm.
What are examples of such applications? Honest question - I'm curious to learn more about issues such applications have in production.
> But we really shouldn't be requiring everyone to become an expert to benefit from wasm.
If the toolchain does it for them, they don't need to be experts, no more than people need to be DWARF experts to debug native applications.
I agree tools could be a lot better here! But as I think you know, my position is that we can move faster and get better results on the tools side.
That is a useful benefit, not the only benefit. I think the biggest benefit is not needing glue, which means languages don't need to agree on any common set of JS glue, they can just directly talk DOM.
What you don't get much is people doing standard SPA DOM manipulation apps in WASM (e.g. the TodoMVC that they benchmarked) because the slowdown is large. By fixing that performance issue you enable new usecases.
Integrating the component model into browsers just for faster string marshalling is 'using cannons to shoot sparrows' as the German saying goes.
If there were a general fast path for creating short-lived string objects from ArrayBuffer slices, the entire web ecosystem would benefit, not just WASM code.
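For reference, this is the copy-heavy path the glue takes today. A minimal sketch against a raw `WebAssembly.Memory` (the fixed `ptr` and the missing call into the module's allocator are simplifications):

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });
const encoder = new TextEncoder();
const decoder = new TextDecoder();

// JS -> wasm: encode to UTF-8, then write into the module's heap.
// Real glue would first call the module's exported allocator for ptr.
function passString(s, ptr) {
  const bytes = encoder.encode(s);                              // copy #1
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);  // copy #2
  return bytes.length;
}

// wasm -> JS: every returned string allocates a brand-new JS string
// from a heap slice. There is no way to hand back a "view".
function readString(ptr, len) {
  return decoder.decode(new Uint8Array(memory.buffer, ptr, len)); // copy #3
}

const n = passString("héllo", 0); // 6 UTF-8 bytes for 5 code points
const back = readString(0, n);
```

Every DOM call that takes or returns a string pays some subset of these copies, which is where the proposed fast path would bite.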
Being able to compete on efficiency with native apps is an incredible example of purposeful vision driving a significant standard, exactly the kind of thing I want for the future of the web and an example of why we need more stewards like Mozilla.
Performance is already as good as it gets for "raw" WASM, the proposed component model integration will only help when trying to use the DOM API from WASM. But I think there must be less complex solutions to accelerate this specific use case.
especially now with coding agents, the DX they think they are bringing to the table is completely irrelevant.
"Java the language is almost irrelevant. It's the design of the Java Virtual Machine. And I've seen compilers for ML, compilers for Scheme, compilers for Ada, and they all work. Not many people use them, but it doesn't matter: they all work." --James Gosling
Then Microsoft happened. MS realized that "Write Once, Run Anywhere" kills their OS monopoly, so they polluted Java with their brilliant Embrace, Extend, Extinguish strategy (Sun v. Microsoft revealed the emails where the stated goal was to "Kill cross-platform Java" by growing the "polluted" Java market):
Embrace: Microsoft licensed Java from Sun Microsystems and built the MSJVM. It was the fastest JVM for some time.
Extend: They created a programmer tool for Java with proprietary Windows-specific "extensions" and also removed standard features like RMI and JNI.
Extinguish: Developers using MS tools (90% of developers at the time) produced "Write Once, Run Only on Windows" software. That killed cross-platform Java on Windows, and Microsoft pivoted to C# and .NET.
The solution was to sandbox the whole VM but this breaks all the existing code designed for partial sandboxing (e.g. most of the standard library). WASM uses this approach from the start.
Don't be surprised if Google adds something similar to Chrome for WASM.
Before Java 6 in 2006 JVM wasn't a good target for dynamically typed languages. In Java 6 they added some support but it wasn't very efficient. In 2008 they started serious work on fixing this, and that work went into Java 7 in 2011.
The .NET CLR on the other hand was designed from the start to be a good target for all types of language and was superior to JVM at this from the start through at least Java 7.
Dynamic typing wasn't even an accepted thing for general-purpose programming. It was just a curiosity. It's strange how mainstream programmers are always stuck in fads and superstition.
Most "apps" on phones could just be something you install through the browser and cache in a sandbox, if we developed good standards.
https://github.com/WebAssembly/component-model/blob/main/des...
For end users, they should just see their language's native concurrency primitives (if any). So if you're running Go, it'll be goroutines; JS would use Promises; Rust would have Futures.
As I see it, WASM is used to augment the JS/WebAPI ecosystem. For example, when you need to do heavy bit manipulation, complex numerical processing. The round-trip JS->WASM->JS is an overhead. So the WASM modules should perform a substantial amount of processing to offset that inefficiency.
I frequently find that V8 optimisations yield sufficient performance without needing to delve into WASM.
IMHO if you want to write WebApps in Rust, you're holding it wrong.
> There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web
It would be nice if WebAssembly would really succeed, but I have to be honest: I gave up thinking that it ever will. Too many things are unsolved here. HTML, CSS and JavaScript were a success story. WebAssembly is not; it is a niche thing and getting out of that niche is now super-hard.
Possibly disabled now as they announced VBScript would be disabled in 2019.
Quite often it comes with a mandatory service worker which has to be communicated with in a specific fashion, then some specific headers need to be available server side, etc. I'm not saying it's not required, but... I imagine most Web developers are used to requiring a library, calling its function, getting the result. Until it reaches that stage, JavaScript fallbacks will be preferred until there is absolutely no alternative but the WASM binary.
PS: this might sound like such a low bar... but the alternative is giving up entirely on either, starting a container with a REST API then calling it with a client. That's very easy and convenient when you've done it once and you decouple. So maybe I'm finicky but when very popular alternatives exist I believe the tipping point won't happen unless it becomes radically easier than what exists.
The DOM is not a static interface; it changes both across browsers based on implementation status and also based on features enabled on a per-page-load basis.
The multi browser ecosystem also mainly works because of polyfills.
It's not clear how to polyfill random methods on a WIT interface or how to detect at runtime which methods exist.
OTOH the JS bridge layer we use today means you can load JS side polyfills and get wasm that's portable across browsers with no modifications. There's more to the ecosystem than just performance.
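Concretely, the polyfill story works today because the import object is plain JS that the glue controls: a polyfill is just a property check and an assignment before the module ever runs. A hedged sketch with a made-up import shape (`replaceChildren`/`appendChild` here are stand-ins, not a real WIT binding):

```javascript
// What the glue would pass to WebAssembly.instantiate as the DOM imports.
const calls = [];
const domImports = {
  appendChild: (parent, child) => calls.push(["appendChild", parent, child]),
  // Imagine a browser that shipped without this method:
  replaceChildren: undefined,
};

// Polyfill at the boundary. The wasm module sees a uniform interface
// regardless of which browser it landed in.
if (typeof domImports.replaceChildren !== "function") {
  domImports.replaceChildren = (parent, child) => {
    calls.push(["clearChildren", parent]); // stand-in for removing old nodes
    domImports.appendChild(parent, child);
  };
}

// The module can now import and call it unconditionally:
domImports.replaceChildren("app", "div#new");
```

With a browser-native WIT binding there is no JS object in the middle to patch, which is exactly the gap the parent comment is pointing at.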
WRT WebAssembly Components though, I do wish they'd gone with a different name, as its definition becomes cloudy when Web Components exist, which have a very different purpose. Group naming for open source is, unfortunately, very hard. Everyone has different usages of words and understandings of the wider terms being used, so this kind of overlap happens often.
I'd be curious if this will get better with LLM overseers of specs, who have wider view of the overall ecosystem.
Not that I necessarily think it's unwarranted. While I appreciate the simplicity of the current approach to interop because it gives you free rein and is easy to grasp, I think anyone who has spent some time rawdogging JS-WebAssembly integration has considered inventing their own WASM IDL analog. If that can be specified as part of the standard, it can also be made quicker.
What can you even modify there, when all the structure is flattened into a single layer?
From the code sample, it looks like this proposal also lets you load WASM code synchronously. If so, that would address one issue I've run into when trying to replace JS code with WASM: the ability to load and run code synchronously, during page load. Currently WASM code can only be loaded async.
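Worth noting that a synchronous path does exist today for small modules: the `WebAssembly.Module` and `WebAssembly.Instance` constructors compile and instantiate without awaiting anything (engines cap how large a module the main thread may compile this way, which is why `instantiateStreaming` is the usual advice). A sketch with a hand-encoded module:

```javascript
// Bytes hand-encode: (module (func (export "answer") (result i32) i32.const 42))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,                   // type: () -> i32
  0x03, 0x02, 0x01, 0x00,                                     // one func of that type
  0x07, 0x0a, 0x01, 0x06, 0x61, 0x6e, 0x73, 0x77, 0x65, 0x72,
  0x00, 0x00,                                                 // export "answer"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x41, 0x2a, 0x0b,             // body: i32.const 42
]);

const mod = new WebAssembly.Module(bytes);      // synchronous compile
const instance = new WebAssembly.Instance(mod); // synchronous instantiate
const answer = instance.exports.answer();       // callable during page load
```

The catch is fetching the bytes: there's no synchronous network fetch during page load, so "sync" today really means "bytes you already have inline or cached".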
I believe that the need for JavaScript glue code has significantly hindered the appeal and interest in WASM, as well as the resulting ecosystem growth.
I would love something like this for native applications; I'm so tired having to wear C's skin every time I want to do bind together code written in different languages.
[1] https://exercism.org/profiles/mikestaas/solutions
[2] https://github.com/mikestaas/wasmfizzbuzz/blob/main/fizzbuzz...
(though i do like the open code nature of the internet even if a lot of the javascript source code is unreadable and/or obfuscated)
Which is to say, dancing around the core issue, which is direct DOM access as well as anything else JS has privileged access to.
Let's face it. People benefit from complexity and poorly performing apps. Not just for the web, look at video game engines too.
When hardware gets faster and cheaper, people tend to say "meh" to quality and performance concerns. That part gets easier, but that's the slippery slope that introduces poor quality and complexity - especially when pressured top down by companies to go faster.
Basically the need for, forget about reward in, quality is completely removed.
So congratulations Internet. We have most web apps powered by a language spec dreamt up in a weekend. With patches on top of patches and abstraction on top of abstraction since.
It's been great job security of course... gatekeeping and all...but I don't know. I kinda hope AI does come and just replaces it all or something.
Man, I wanted to use WebAssembly more. For Dart I made a Photoshop PSD to JPG converter that was super fast too. Much faster than any JavaScript image conversion and resizing. Bummer.
If Wasm modules can be loaded like this:
<wasm src="/module.wasm" start="main" data="..." />
...and if Wasm could access the DOM like this:

import dom, json, urllib.parse

params = urllib.parse.urlencode({"category": "news", "limit": 10})
data = json.loads(await (await fetch(f"https://api.example.com/data?{params}")).string())
container = dom.document.getElementById("app")
container.innerHTML = "".join(f"..." for i in data["items"])

...then developers would have jumped in right away.

> "We've developed Foo 2.0. It does not have feature parity with Foo 1.0, but it's more architecturally elegant in ways that are meaningless to the public. It took ten thousand man-hours and was done instead of much-needed upgrades to Foo 1.0."
*one year later*
> "Why is no one adopting Foo 2.0? Yes, it doesn't have feature parity with Foo 1.0, but it's frankly irresponsible of the public not to understand that 2.0 is a higher number than 1.0."
*ten years later*
> "Remember Foo? What a mess. Thank god Bar came along."
*eleven years later*
> "Announcing Bar 2.0... "
There is a massive, pathetic drop in quality from the system designs of the past compared to the garbage we are seeing nowadays
You can tell they are fully embracing slop and stupid design
They are trading efficiency and elegance for bloat, and it is absolute trash
They have stopped engineering; they are abstraction junkies
OGs don't want to be associated with any of that trash, and it shows
Hear me out: Web APIs need to devolve into... APIs. The DOM needs to devolve into a UI API. We have PWAs, File System APIs, USB APIs, peripheral device APIs, all the things a native webui client would be doing.
Yes, it is "back to square one" , we already have websites masquerading as native apps via electron and tauri.
On the other end you have a handful of tech companies dictating how our computing experience should be because they control browsers.
WASM should be the bytecode format for executing untrusted code from the network and running a UI-capable application with controlled access to the system, that runs in a secure sandbox. This is java applets but better.
You have the same issue on mobile where regular websites create apps, so they can be persistent and have access to things they shouldn't, and be all naughty. This happens because somehow we treat "native" differently than "web". it should all be restricted like web apps are, sandboxed tightly, but given access to resources like any electron app would (but not your entire file system, or entire anything, ever!)
There shouldn't be any "installing" of anything; perhaps bookmarking an app instead. I'm not saying let's do away with proper native apps. Non-GUI apps still have a place, as do system services and extensions, which are a whole other class of system applications. But your banking app isn't one of those, and neither is a social media app, a photo editor, a game, Uber, etc. All of these can run in WASM, and WASM in turn gets native access to APIs similar to, but not exactly, Web APIs. I learned in that other comment thread that replicating the web DOM APIs for WASM is a foolish effort, since they are all built with JS in mind. A better fit would be an HTML5-compatible DOM layer that is distinct from the DOM-manipulation layers, with a low-level styling layer underneath (not a beast like CSS, but something CSS can be compiled into, or that WASM styling code would natively target).
You will still need something to run the WASM like browsers do today, but here is the biggest value of my proposal: unlike browsers, this would be heavily standardized, and how your WASM renders and manipulates the DOM would be extremely consistent across WASM browsers, mainly because it would be so low-level that there wouldn't be any opinionated, subjective interpretations between host apps. Unlike JS, there is no script runtime; unlike CSS, there is no styling engine.

The responsibility of a beast like V8 gets divided: the DOM, styling, security and API interactions are strongly defined in bytecode/ABI by the standard for the WASM host/browser; the actual UI of the host (tabs, themes, extensions, bookmarks, history, etc.) would not differ between WASM hosts/browsers; and the app's logic would be written in whatever language, compiled into WASM bytecode compatible with the aforementioned standard. This should result in consistent UI and fully networked apps with controlled resource access that run ephemerally (caching as desired) and store persistent data as needed (no per-app software updates).
It is a lot of effort, but consider the state of computing: between mobile apps, things like Flatpak, Electron, Tauri, "vendoring", PWA apps, bloated chat apps like Slack, Teams, Element, Discord, the web framework mess, etc., is this the chaos we want to leave the next generation?
It might take a long time, but isn't it good to "build trees under whose shade you'll never sit"?
I've created a proposal to add a fine-grained JIT interface: https://github.com/webassembly/jit-interface
It allows generating new code one function at a time and a robust way to control what the new code can access within the generating module.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
And now that we're getting close to having the right design principles and mitigations in place, and 0-days in JS engines are getting expensive and rare... we're set on ripping it all out and replacing it with a new and even riskier execution paradigm.
I'm not mad, it's kind of beautiful.
I think you may be confusing Javascript the language, with browser APIs. Javascript itself is not insecure and hasn't been for a very long time, it's typically the things it interfaces with that cause the security holes. Quite a lot of people still seem to confuse Javascript with the rest of the stuff around it, like DOM, browser APIs, etc.
Browsers are millions of lines of code, the amount of UAFs, overflows, etc so far is not the bottleneck.
Isn't this what an OS is supposed to do? Mobile operating systems have done a pretty good job of this compared to desktop OSes.
I think web apps are dead anyway and the browser is heading towards being a legacy software. The future is small ephemeral UIs generated on the fly by LLMs that have access to datasources. WASM is too late.
JavaScript is the right abstraction for running untrusted apps in a browser.
WebAssembly is the wrong abstraction for running untrusted apps in a browser.
Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented.
WebAssembly is statically typed and its most fundamental abstraction is linear memory. It's a poor fit for the web.
Sure, modern WebAssembly has GC'd objects, but that breaks WebAssembly's main feature: the ability to have native compilers target it.
I think WebAssembly is doomed to be a second-class citizen on the web indefinitely.
> WebAssembly is the wrong abstraction for running untrusted apps in a browser
WebAssembly is a better fit for a platform running untrusted apps than JS. WebAssembly has a sandbox and was designed for untrusted code. It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.
> Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
There are dynamic languages, like JS/Python, that can compile to wasm. Also, I don't see how dynamic typing is required for API evolution and compatibility. Plenty of platforms have statically typed languages and evolve their APIs in backwards-compatible ways.
> Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented
The first major language for WebAssembly was C++, which is object oriented.
To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
Where I think the argument goes wrong is in treating "most websites don't use WASM" as evidence that WASM is a bad fit for the web. Most websites also don't use WebGL, WebAudio, or SharedArrayBuffer. The web isn't one thing. There's a huge population of sites that are essentially documents with some interactivity, and JS is obviously correct for those. Then there's a smaller but economically significant set of applications (Figma, Google Earth, Photoshop, game engines) where WASM is already the only viable path because JS can't get close on compute performance.
The component model proposal isn't trying to replace JS for the document-web. It's trying to lower the cost of the glue layer for that second category of application, where today you end up maintaining a parallel JS shim that does nothing but shuttle data across the boundary. Whether the component model is the right design for that is a fair question. But "JS is the right abstraction" and "WASM is the wrong abstraction" aren't really in tension, because they're serving different parts of the same platform.
The analogy I'd reach for is GPU compute. Nobody argues that shaders should replace CPU code for most application logic, but that doesn't make the GPU a "dud" or a second-class citizen. It means the platform has two execution models optimized for different workloads, and the interesting engineering problem is making the boundary between them less painful.
So does JavaScript.
> It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.
They have that infrastructure because JS has access to the browser's API.
If you tried to redesign all of the web APIs in a way that exposes them to WebAssembly, you'd have an even harder time than exposing those APIs to JS, because:
- You'd still have all of the security troubles. The security troubles come from having to expose API that can be called adversarially and can pass you adversarial data.
- You'd also have the impedance mismatch that the browser is reasoning in terms of objects in a DOM, and WebAssembly is a bunch of integers.
> There are dynamic languages, like JS/Python that can compile to wasm.
If you compile them to linear memory wasm instead of just running directly in JS then you lose the ability to do coordinated garbage collection with the DOM.
If you compile them to GC wasm instead of running directly in JS then you're just adding unnecessary overheads for no upside.
> Also I don't see how dynamic typing is required to have API evolution and compt.
Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.
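A tiny illustration of that load-time vs. time-of-use distinction, using plain JS stand-ins (the API names are hypothetical):

```javascript
// A browser's API surface as a dynamic object. Suppose oldThing was
// removed in this release while newThing remains.
const browserApi = { oldThing: undefined, newThing: () => "ok" };

// A JS app that never calls oldThing keeps working after the removal;
// the breakage, if any, is deferred to the actual call site.
function app() {
  return browserApi.newThing();
}
const result = app();

// And feature detection is just a property check at runtime:
const hasOld = typeof browserApi.oldThing === "function";

// By contrast, a wasm module that statically imports oldThing would
// fail with a LinkError at instantiation, before any code runs at all,
// even if the code path that used it was never going to execute.
```

This is why WIT-style interfaces need some story for optional or versioned members if they're to match the web's evolution model.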
> Plenty of platforms have static typed languages and evolve their API's in backwards compatible ways.
We're talking about the browser, which is a particular platform. Not all platforms are the same.
The largest comparable platform is OSes based on the C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically: function names in a global namespace, plus argument-passing ABIs that let you mismatch a function signature and get away with it).
> The first major language for WebAssembly was C++, which is object oriented.
But the object orientation is lost once you compile to wasm. Wasm's object model when you compile C++ to it is an array of bytes.
> To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
If it gets stuck as a second-class citizen like you're predicting, it sounds a lot more like it's due to inflexibility to consider alternatives than anything objectively better about JavaScript.
(I'm not a fan of the WASM component model either, but your generalized points are mostly just wrong)
My points are validated by the reality that most of the web is JavaScript, to the point that you'd have a hard time observing degradation of experience if you disabled the wasm engine.