That could mean we lose hackability and the ability to write extensions or even scrape the web without a BigCo webcrawler's level of infra investment. Is everything going to turn into an opaque single page app? Technically, webassembly is really cool, but I worry about where the browser is headed.
HTML and CSS standards were always at least 10 years too late. If you wanted something that looked modern you've always had to use browser-specific extensions, plugins or, at best, "beta" features.
Seriously, I know that nowadays many devs have started straight with web technologies and don't have much experience beyond that, but if that's your case, do yourself a favor and take a quick look at a proper UI toolkit like Qt. Would it blow your mind if I told you that you can create a complex treeview without having to use third-party libraries or reinvent half the wheel yourself? Crazy, I know.
If anything, this "historic separation" and the people who still hold onto it might be one of the reasons the modern web is such a messy patchwork of technologies. If web engineers had accepted that an open-source Flash was actually the endgame and not a problem, we might have saved ourselves some time and some trouble.
To be clear, I'm not saying that it's a bad idea per se; I personally believe that the web is way too complicated as it is. I just feel like it's similar to arguing that "literally" shouldn't be used to mean "extremely" in modern English. Sure, I get your point, but I don't think it's a battle you can win.
For example I’m a huge fan of the declarative UI trend going on in the web community. Now, granted, Microsoft came out with MVVM and XAML way before React or Angular existed, but native UI libraries are so historically imperative that declarative UI wasn’t making much of a dent in the native sphere.
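The core of the declarative idea is just that the view is a pure function of state. A toy sketch (my own names, not any framework's actual API): instead of imperatively mutating widgets, you describe the whole view and let a diffing layer apply the changes.

```javascript
// Declarative UI in one line: view = f(state).
// A framework like React would diff successive outputs and patch the DOM;
// here we just produce the markup to show the shape of the idea.
const view = state =>
  `<button>${state.loggedIn ? "Log out" : "Log in"}</button>`;

console.log(view({ loggedIn: false })); // <button>Log in</button>
console.log(view({ loggedIn: true }));  // <button>Log out</button>
```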
But even though these older native frameworks are imperative, they are much more powerful than anything like React out of the box. They’re much more comparable to component frameworks like Bootstrap, SemanticUI, AntD, etc. But even still, those libraries don’t come anywhere close to the power of Qt or UIKit in iOS. If you’ve used both, I can’t imagine you saying that it’s even close.
But it’s precisely because of the tension between the web as a document platform vs. the web as an application platform. The web has never doubled down on being an application platform; it always keeps its document roots in the background in some way. Personally, I’m torn. I love the transparency and inspectability of the web. Literally yesterday I opened up the dev tools on Twitter and learned about its timeline architecture just from inspecting the requests. This is the biggest thing I miss when I’m working on a native app.
But WebAssembly may be the web finally doubling down on being an application platform. Whether that’s good or bad is irrelevant at this point. It’s the direction it’s headed in.
No, not any more than C#'s giant standard library blows my mind.
Keeping the web small is a strategic decision -- it may look like chaos, but it's really just that we realized that embracing 3rd-party libraries is a better architectural decision than polluting the core spec with features that will be outdated in a few years.
Qt doesn't have these problems because Qt doesn't have to be infinitely backwards-compatible. If Qt makes a mistake, it can fix it in the next version. The web can't do that, so we have to be more careful. If anything, my biggest criticism of the web is that we move too quickly and stick too many "modern" features in. I would have been happy to drop Classes and Arrow Functions from the JS spec, and I would have been happy to drop `sticky` from the CSS spec. As an engineer, I absolutely don't want a hamburger menu as a core HTML component.
I've had people argue to me that HTML needs more 2-way data binding and element types -- they want the ability to tie a list to a JSON object or something, instead of needing to render out separate `<li>` elements. These people are missing the point.
HTML is your user-presented state. We have a few elements that break this convention, but for the most part we want your final HTML to be static and human-readable. It's not for you, it's for your users. And 2-way data bindings would get in the way of that.
For laying things out and creating complicated lists, we have third-party libraries/frameworks -- and honestly, they work fine. If you think the web is overcomplicated or has too many frameworks right now, just wait until we start stuffing a new first-party component into it every time any design trend becomes popular.
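To illustrate why a built-in list binding feels unnecessary: here's a sketch in plain JS (function names are mine, not from any spec or library) that renders a JSON array out to static, human-readable `<li>` elements. When the data changes, you just re-render.

```javascript
// Escape text so data can't inject markup into the page.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Render a JSON array to plain <li> markup -- the "user-presented state".
function renderList(items) {
  const lis = items.map(item => `  <li>${escapeHtml(item)}</li>`);
  return `<ul>\n${lis.join("\n")}\n</ul>`;
}

console.log(renderList(["apples", "pears"]));
// <ul>
//   <li>apples</li>
//   <li>pears</li>
// </ul>
```

That's the whole "framework" for the simple case; anything fancier (diffing, events) is exactly what the third-party libraries already do well.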
And even if we did add all of that nonsense, people would just ignore those components anyway. No manager I've ever worked with has ever said to me, "you know, pure HTML select inputs look great and I don't mind them rendering differently in different browsers." I would be reimplementing them in JS anyway just to get the styling consistent between IE and Firefox. The good HTML components I can actually use are the low-level ones like simple input fields -- because they're small and simple enough that they can be styled and incorporated into larger custom-built solutions.
Heck, I've had managers complain to me about scroll-bar styling before. But sure, they'd totally be happy with a giant, pre-built, monolithic tree-view component that's using mid-2000s styling.
For better or worse the web never did that work. HTML+CSS+JS is meant to be a way to make static websites, but also interactive e-shops, mail clients, video games, and ultra-heavy single-page apps like Discord, etc...
And that's the greatest power of the web, but I can't see how this state of affairs won't lead, one way or another, towards a more monolithic architecture. Keeping HTML and CSS cleanly separated and JS only for the fancy stuff (but always optional!) is noble, but it's a pipe dream when the use cases are so diverse. WebAssembly, or something like it, seems to be clearly the path forward.
Overall I think we should be happy too; at least it's not a closed-source blob. Remember how long it took us to get rid of bloody Flash? Actually, if Steve Jobs hadn't decided to drop it from iPhones, I wouldn't be surprised if it had survived in common use to this day.
This is one place where HTML does not give enough primitives for a quality library-based solution.
Maybe not.
If data binding, dynamic markup, reactivity, etc., were baked into the browser, we'd save a ton of KBs and CPU cycles.
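For what it's worth, the primitives are already close at hand. Here's a hedged sketch (all names are mine, not any framework's API) of one-way reactivity in vanilla JS using `Proxy`, which is roughly the mechanism modern frameworks build on:

```javascript
// Minimal one-way reactivity: re-run a subscriber whenever state changes.
// A sketch of the idea only, not any library's actual implementation.
function reactive(initial, onChange) {
  return new Proxy({ ...initial }, {
    set(target, key, value) {
      target[key] = value;
      onChange(target); // in a browser: re-render the affected DOM
      return true;
    },
  });
}

let rendered = "";
const state = reactive({ count: 0 }, s => {
  rendered = `<span>${s.count}</span>`;
});

state.count = 1;
console.log(rendered); // <span>1</span>
```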
Qt is a third-party library that wraps the underlying OS primitives.
A really fancy tree view in JS is 10KB, maybe 15KB, given desktop+mobile support, proper accessibility, and good support for theming.
And that is if you are getting really fancy. A simple one is just some DIVs and a CSS animation for opening and closing.
That is assuming you even bother with JS, since you can do a treeview in the browser with CSS alone[1].
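To make that concrete, here's a sketch (function name is mine) that builds a collapsible tree out of the browser's native `<details>`/`<summary>` elements. The expand/collapse behavior comes from the browser for free; no runtime JS is needed at all.

```javascript
// Render nested data as a collapsible tree using native HTML elements.
// Leaf nodes become plain divs; branches become <details>/<summary>.
function renderTree(node) {
  if (!node.children || node.children.length === 0) {
    return `<div>${node.label}</div>`;
  }
  const kids = node.children.map(renderTree).join("");
  return `<details><summary>${node.label}</summary>${kids}</details>`;
}

const html = renderTree({
  label: "src",
  children: [
    { label: "index.js" },
    { label: "lib", children: [{ label: "util.js" }] },
  ],
});
console.log(html);
```

Styling the indentation and toggle markers is a few lines of CSS on top of that.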
UI toolkits exist for the web as well, and they tend to be rather small, browsers ship with a lot of UI primitives after all!
The UI portion of code for my B2B web-app with all custom UI elements and with mobile+desktop support is around 100KB unminified.
Now the libraries I pull in to do the real time database streaming and authentication to my back end? Those dwarf the UI code (they are hundreds of KB each), but the native versions of those libraries are far larger (10x) than the JavaScript equivalents [2].
> Would it blow your mind if I told you that you can create a complex treeview without having to use third-party libraries or reinventing half the wheel yourself?
The browser comes with a lot of wheels included; very few are getting re-invented unless someone wants to indulge themselves.
> To be clear I'm not saying that it's a bad idea per-se, I personally believe that the web is way too complicated as it is
Get rid of the weird things that happen to make advertising possible and most websites are pretty simple. Even complex web apps are pretty easy to understand nowadays, though you'll have to learn whatever data-binding technique the dev team decided to use, but that is the same case for modern desktop applications.
[1]https://codepen.io/kobusvanwykk/pen/NqXVNQ
[2]https://stackoverflow.com/questions/41750553/what-is-the-est...
I thought HTML was developed with HATEOAS in mind, so I don't see the separation as a technical limitation but as a powerful abstraction. Flash was what people wanted, but HTML was what a good web needed (imagine the internet with only Flash/WebAsm from the beginning).
We have globally networked local Networks of locally networkable Networks and most of us naively think that this Global Network of Local Networks networking Local Networks actually functions as a universally internetworking Internetwork of universally internetworkable Internetworks internetworking internetworkable Internetworks internetworking internetworkable Networks[([¹]1)], because we trust the people who've labeled both IPv4 and IPv6 as the "Internet Protocol" despite it not in any way, shape or form having the ability to perform what the people who came up with the concept of internetworking (no, NOT the ARPA people.) actually meant by the term.[2]
As the simplest proof of this fact: We've almost reached 2020 and all of our 'solutions' for multi homing don't ACTUALLY work in a not-pants-on-head-dysfunctional way, despite the fact that we've faced that issue since, to quote Wikipedia, "In 1972, Tinker Air Force Base wanted connections to two different IMPs for redundancy. ARPANET designers realized that they couldn't support this feature because host addresses were the addresses of the IMP port number the host was connected to (borrowing from telephony)."
Our 'Inter'net at the moment still resembles just a global network, aka just plain old phone calls, with extra steps. Which explains both the mess we call the WWW, and the global dominance of large telcos. Can't give an AS number and BGP access to just anyone[([²]4)], as we know ever since L0pht Heavy Industries testified before the US Senate, over 2 decades ago, back in 1998.[3] NOTHING has fundamentally changed about this since. We've just written bigger and bigger macros, ignorant of why we keep having to fight windmills and reinvent the wheel over and over again.
[1 (¹ yes, I did intentionally phrase that in & as a reference to a certain Cisco exam question.)]
[2 see https://news.ycombinator.com/item?id=21108796, but also see http://ict-arcfire.eu/index.php/rina/ & http://ict-arcfire.eu/wp-content/uploads/2018/06/OCARINA-ind... ]
[3 https://youtu.be/VVJldn_MmMY]
[4 (² Of course, I don't advocate that we should. No. I do however argue that each of these situations shouldn't exist in the first place.)]
The kind of jerks who liked adding Javascript to block right-clicking, blocking "Paste" into password fields, etc, are going to absolutely love using the browser-in-a-browser product to deliver their "website".
For the inevitable replies: Yes-- you can already do this with minified Javascript. WASM, being targeted for performance, is just going to make this kind of asshattery faster.
I can easily imagine a "platform" WASM module which acts as a runtime for other WASM modules built by "app" developers. This Platform module could be easily cached by FAANG or other big commercial interests, similar to AMP by Google (maybe even pre-bundled into the browser?). The only way to discover, download and run these other apps is through this curated Platform module. All this could be rendered through something like the Canvas API instead of the DOM, which again is managed on a low level by the Platform, which in turn exposes higher-level APIs for the Apps. The Platform also has built-in support for ad networks, tracking, etc., which cannot be disabled without disabling the whole ecosystem of apps. And of course, like any good play/app store, it is completely incompatible with anything else, leading to new levels of Balkanization of the web.
I hope that this isn't the case, and I'm completely wrong about this. But I just can't shake the feeling that as a community, we are championing WebAssembly as purely a performance win, without considering how big commercial interests might seek to exploit this new technology.
Edit: typo with AMP
And from there it is just the next step to run a base platform like Android user space or a simple runtime like Blazor.
In the end wasm is a runtime. There were physical Java processors, and I am pretty sure there will be webassembly processors.
Thanks for dooming us all, hombre. I'm holding you personally responsible when it happens.
In fact I think it may already have. I took a look at a web-based learning portal for my niece the other day, and after poking around in devtools I found it was pretty similar to what you described.
Every webpage pays for its traffic in some way, and everybody tries to save web traffic as much as possible (optimizing images and videos, minifying JS, CSS, ...). It makes no sense to expect that websites would turn the opposite way just for fun.
I would be very glad if you could emulate a computer in a browser, so you could e.g. use VirtualBox or VMware comfortably in your browser. Still, it will be sandboxed (it cannot turn off your computer, or clear your hard drive, etc.).
I think the web is the most open, independent, secure and versatile platform today. And I hope it will become even better and more powerful in the future.
I'm not talking about the sandboxed browser being able to "clear your hard drive", etc. I'm talking about users having no real agency when it comes to controlling the presentation of websites. (Sure, it's code running on your computer. You're free to attach a debugger to step through native code that runs a VM that runs a nested VM that's ultimately running the code you'd actually like to influence. Let me know how that goes for you.)
Have you seen JSLinux[1]?
Have you not worked in tech long? One of my 'favourite' things about a tech stack I once worked on was that someone decided to implement a file system inside a database that is stored on the file system. Even better, the database itself effectively just reimplements the features of a database.
So we have a file-system storing a database that exposes a completely different database that holds a file system.
Browsers are large and complicated software packages for a reason. Best of luck to whoever tries to compete with it in a browser in a browser.
To be honest, it's a unique property of the web for you to be able to easily read the source; however, users of other platforms such as Qt are not expected to read the source. The web is simply moving in the direction of all other UI platforms, so I don't quite understand the outrage. Yes, it sucks, but it'll make apps faster than the Electron mess we have now.
Also, arguments of the form of "X is unique, all others are doing the bad thing" are suspicious to me. This would mean we should cherish and preserve X, bringing the good aspect to other platforms, not devolving it into mediocrity or worse.
What won't be great are sites that do this to enforce ad viewing. But if it becomes common enough, you can expect ad-blockers to act on the frame-buffer of that virtual browser.
Anyway, the most annoying will be the stupid people that do that just because "Even Google does it!" "It's webscale!", or whatever people will be saying 5 years from now.
Sounds like a slippery slope to me. What makes you think users will be satisfied with load times for something like that, which also probably will have a frustrating-to-use interface?
A third party that offers a way to run that on their iPad at the cost of a one-time 100MB download will find customers.
archive.org also might find something like it useful. How else are you going to show old sites in 20 years' time?
All it will take is for ADK to have a button that produces a wasm version of your app.
This is what people should be worried about, not replacing JS. It became a kind of popular hot-take for a while to say that separation of concerns was a mistake, and that's not how apps get built in the real world, and what we really need is a way to encapsulate all of our DOM and CSS in JS. We need to start pushing back against that idea and keep emphasizing that separation of concerns is really important for end users.
HTML is the interface you write to. It's not a document layout language for authoring, it is a render target that is understandable and manipulable by the end-user. It's a fantastic idea that enables a lot of user-land innovation, and it's one of the biggest reasons why the web is still a relatively good platform to interact with as an end-user.
A lot of architecture decisions on the web haven't aged well, but separating content, styling, and logic was a fantastic architecture decision that is still as relevant today as it ever was. And the rise of the web as an application platform has only made it more important, not less.
React.js has been the most significant JavaScript library of the past decade. Over 50% of JS developers on the web are writing HTML inside JavaScript. In recent years, CSS-in-JS libraries like styled-components have become standard.
We’re killing HTML templates and writing JSX. We’re killing CSS classes and building styled-components. It’s all compiling to HTML, CSS and JS in the end. We just get to do our jobs faster. Why people have a problem with that is beyond me.
Writing a web application is different than writing a web document (or set of documents). I think what people are anxious about here is that many, if not most, sites are documents or sets of documents, but FE trends are "everything is an app", which leads to difficult situations like "in order to talk to my mom on the Internet, I essentially have to buy into the surveillance state".
We haven't done a good job at letting people do only what they want on the web, because so much of the web has been built by companies with ulterior motives. What people are worried about here--and this is true whether we're talking about WASM, Flash, Java Applets, Silverlight, etc.--is this is a giant leap down the road to a co-opted web, devoid of user choice and controlled entirely by moneyed or state interests.
Personally I feel like this happened a long time ago, so WASM doesn't worry me any more than I already am. But I think it's important to keep in mind more is at stake here than CSS-in-JS, or any one FE's career.
Perhaps webassembly doesn't change this either?
They have a problem with it because, unless you are rendering server-side, your pages won't work for someone who does not want to enable JS.
In the same way it goes against lots of ways the web has worked for 2 decades in regards to user customization, the ability to scrape content etc. so these things are irritating to anyone familiar with how "things used to be" in the same way that new freeway development destroying an ecosystem can be irritating to people familiar with the beauties of that ecosystem.
For what it's worth I also make my living doing the same thing, but I can certainly see the benefits to the other way.
I don't generally have a problem with this. I use JSX at work too. My take is it's just another templating system, I don't think it's special. I don't have anything against templating. To a certain extent, my point is that template systems are good -- put your two-way data bindings and special components in them.
When I write JSX though, I make sure it renders out as semantic HTML. If your approach to JSX is, "I'm going to write my render code in JS and output HTML", I don't have a problem with you, you're doing great. If your approach to JSX is, "my interface is just Javascript, so it's fine to spit out poorly organized divs that are absolute-positioned everywhere", then I think you've missed the point of HTML.
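As a hedged illustration of the difference (output only -- the data and names here are mine, not from any real app): the same navigation data rendered two ways.

```javascript
// Two renders of the same data. Both "work" visually; only one is an interface.
const items = [
  { href: "/docs", text: "Docs" },
  { href: "/blog", text: "Blog" },
];

// Semantic: readable, stylable, navigable by screen readers and scrapers.
const semantic =
  `<nav><ul>` +
  items.map(i => `<li><a href="${i.href}">${i.text}</a></li>`).join("") +
  `</ul></nav>`;

// Div soup: looks the same on screen, means nothing to anyone else.
const soup =
  items.map(i => `<div onclick="go('${i.href}')">${i.text}</div>`).join("");

console.log(semantic);
```

Whether the semantic version came out of JSX, a template language, or string concatenation is beside the point; what matters is what lands in the DOM.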
React isn't an antipattern. But React doesn't remove your need to think about the final HTML that your app is going to spit out. Apps like YouTube and Twitter are particularly bad at this -- YouTube's DOM is a horrifying mess, completely unreadable and way too difficult to work with or style. It's not because they're using a component system to build it, it's because they're fine treating the HTML as a non-readable render target. They don't care about HTML as a user interface.
The broad mistake people make on the web is to look at HTML and say, "because I can't build a scalable app on JQuery and hand-coded HTML, therefore HTML is bad." Google's hot-take with HTML custom components is not that you could put your template logic inside your other component logic -- it's that the HTML layer shouldn't matter at all and you should stop thinking about it.
Separation of concerns is for users, not for you. It's not about where you organize your logic, or even if you mix up your languages into the same file. It's about what gets spit out onto the final page.
> We’re killing CSS classes and building styled-components.
I would caution a bit against doing styles in JS -- not because adding a template layer or JS layer that generates CSS is bad, but because many codebases that I've worked with use CSS-in-JS as an excuse to rewrite basic CSS controls like hover effects in Javascript and inline the remaining styles. That doesn't mean you shouldn't do CSS in JS, it just means you should take the time to use a library that spits out actual CSS and that allows you to take advantage of native selectors and pseudo-elements. React does itself a disservice by using tutorials that teach people to apply styles directly to elements instead of pointing people towards any of the more decent companion libraries that will let you do styling correctly.
Unless your engineering team is the size of Facebook's and you can't communicate about your components at all, you probably should still be using classes. Of course that doesn't mean you can't use JS to restrict class scope or template that CSS. I personally tend to be more of a proponent of BEM than I am of JS styles, but I'm not going to fight someone over that.
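A sketch of the distinction I mean (all names are mine, not styled-components' actual API): a CSS-in-JS layer done right still generates a real class name and a real stylesheet rule, so native selectors like `:hover` keep working, instead of inlining styles onto elements.

```javascript
// CSS-in-JS that emits actual CSS: a generated class plus rule text that
// would be appended to a <style> tag in the browser.
let counter = 0;
function css(rules) {
  const className = `c${counter++}`;
  const text = Object.entries(rules)
    .map(([selector, body]) => {
      const decls = Object.entries(body)
        .map(([prop, val]) => `${prop}:${val}`)
        .join(";");
      // "&" stands in for the generated class, as in most CSS-in-JS libraries
      return `.${className}${selector.replace("&", "")}{${decls}}`;
    })
    .join("");
  return { className, text };
}

const button = css({
  "&": { color: "white", background: "blue" },
  "&:hover": { background: "navy" }, // impossible with inline styles
});
console.log(button.text);
// .c0{color:white;background:blue}.c0:hover{background:navy}
```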
Separation of concerns happens because the render target is an overcomplicated mess. So separate the good parts out of the mess as much as possible, but the mess is still there.
Of course HTML for interfaces is more complicated than something like canvas, because it forces you to think about your UX on a deeper level than "this component should be on the left". The point of HTML is that it forces you to build a universal interface that works for everyone -- it forces you to sit down and build a human-digestible, pure-text tree that communicates your current application state to users.
HTML on its own is a render target. CSS is a completely optional secondary display modifier that you put on top of that render target.
This is why adblockers work, it's why stuff like Reader View works in Firefox. Because it turns out that forcing interfaces to be pure-text has substantial benefits for end-users. Try building something like a UI-level adblocker for a native app. You can't do it -- it's impossible.
I'm with you on separation of concerns, but HTML is not for rendering; it's for structuring and adding semantics to your documents. (CSS is for presentation control and JavaScript is for behavior.)
It is a matter of time till we see desktop applications hosted in a wasm-enabled browser.
Rust on the other hand is doing some genuinely exciting, powerful stuff with allowing WASM to talk to the DOM and allowing native developers to target HTML directly within their apps. Rust's approach is to treat the language like a minimal, drop-in replacement for Javascript that doesn't require you to ship an entire rendering engine alongside it.
It is yet to be seen which approach to web portability is going to win. Obviously I'm rooting for Rust, and I personally think apps that are written using Rust's strategy will nearly always be higher quality than apps written using Qt's strategy. But that doesn't necessarily mean that Rust will win; there are a lot of factors at play here. It'll be interesting to see.
But agreed, native apps are definitely coming to the web in some form or another. Funnily, the opposite is also true, since there's been a lot of buzz about using WASM for native sandboxing. I like to think that Gary Bernhardt[0] is pleased about that.
[0]: https://www.destroyallsoftware.com/talks/the-birth-and-death...
The core dynamic of the web, more than document sharing, has been decentralization and removing lock-in and gatekeepers around content distributed through the Internet. As long as we continue to feel that is true (which becomes less believable as browser diversity declines) I feel we should be excited every time the web expands the scope of what can be done with it. It seems likely that 100 years from now, WASM will still exist, and it will allow people to create and distribute software without gatekeepers. If it never existed, or nothing like it did, it seems like the future would have been markedly worse for creators.
Each of these enabling browser technologies that get into the standards should be heralded as having potentially massive positive counterfactual effects on our descendants and future selves (WebXR being another notable example, landing soon.)
I'm not trying to be overly cynical here; all of these topics remind me of high school, when I'd download mildly-sketchy executables, decompile them, and try to figure out whether or not they were safe to run. I would love to poke around with reverse engineering wasm, but I'd rather not have to do that every time I visit a new website.
(Trackers using these kinds of tactics was only a matter of time between trackers and blockers...)
(Example of the day, Parsec; example from yesterday, NVidia's driver-update software :( )
There's some incentive to use HTML as it's indexable, but that doesn't seem to matter on mobile, although maybe mobile is in large part successful because of the web? Like the fact that you can post a tweet or a Facebook post or a YouTube video everywhere. I'm not sure that HTML is required for that, and if it's not, then it seems at least possible HTML will die.
Fortunately advertisers are renowned for their self-restraint, and can be trusted not to abuse this capability to simply shove in as many ads as they possibly can, wherever they can, with masses of unblockable trackers underneath it all.
What, exactly, is "content"? Are binary formats content? (Audio, Video, etc?) Are textual comments and notes on other text content, or is only the work being commented on content? Are languages that rely on unicode encodings content? Must content be static or can it change frequently, even second by second? (Stock quotes, for example). Must things be easily scrapeable to be content?
Furthermore, must we lock down the structure of content from now until the end of time as HTML or ASCII, or should future generations be allowed to define what they mean by content?
There may be merit in a separation of content from browser, but we should not artificially limit ourselves to content that only fits into our legacy notion of what a browser is. If the browser must become a general purpose OS that can host whatever content people can dream up, I'm all for it.
I think JavaScript and WebAssembly are not such bad programming languages, but I do think that scripts in web pages are overused (regardless of what programming language is used, which isn't the issue).
Opaque single page apps are also bad for URLs. Also, I often use curl when I want to download a file (or, in one case, to stream audio), so I do not want to have to deal with the web browser to do that.
Also, executing JavaScript when scraping isn't that difficult, depending on what you're using to scrape. Node.js has Puppeteer: https://github.com/puppeteer/puppeteer
I’d love to see a XAML implementation for wasm, because it’s cleaner than html/css.
Looking forward to .NET Core running in wasm.
I could see hackability actually improving since data and API access could be well-defined.
I could see publishers adopting wasm so their websites are more under their control.
But most of all, software development getting out of the scripting business, which has such a wide range of misuse that it becomes expensive and unwieldy to maintain.
If your "web"site can execute arbitrary code, and/or violates HTML guidelines (required for web crawlers or accessibility software to work, or for browsers to tweak the layout) and/or requires JavaScript / Flash / Java to run properly, then it might be part of the Internet, but it's certainly not part of the World Wide Web!
The WWW is a network built on a set of protocols. HTML is a markup language that allows for (indeed, that is designed to facilitate) embedding and linking binary and executable content, including javascript, flash, java, audio, video, as well as marking up text. HTML is one, but not the only, content-type which can be distributed across the WWW. This is fundamental to the design of HTTP and the intent of HTML and the web itself.
By your rationale, no site created after the addition of the <SCRIPT>, <APPLET> or <OBJECT> tags in HTML could be considered a part of the WWW, which would exclude the entirety of the web after HTML 3.2. This is "not even wrong" levels of ridiculousness.
Yes, one could say that the Web has "jumped the shark" sometime after 1997 (it took some years); remember the mess when Flash and Java applets were everywhere? (WebAssembly might be more sleek, but the issue is likely to be the same...)
I guess that multimedia could be part of the Web, but support for basic features like audio/video search is (still!) sorely lacking... (Though I'm pretty sure that the capability has been here for more than a decade.)
With the ability to run arbitrary code comes the loss of a common standard, and therefore the inability to communicate, especially for machines - you might have noticed that you hardly see search engines pulling up results for supposedly "Web"Sites like Facebook or Discord or Twitter?
Fast, safe, and portable semantics:
* Fast: executes with near native code performance, taking advantage of capabilities common to all contemporary hardware.
* Safe: code is validated and executes in a memory-safe [2], sandboxed environment preventing data corruption or security breaches.
* Well-defined: fully and precisely defines valid programs and their behavior in a way that is easy to reason about informally and formally.
* Hardware-independent: can be compiled on all modern architectures, desktop or mobile devices and embedded systems alike.
* Language-independent: does not privilege any particular language, programming model, or object model.
* Platform-independent: can be embedded in browsers, run as a stand-alone VM, or integrated in other environments.
* Open: programs can interoperate with their environment in a simple and universal manner.
Efficient and portable representation:
* Compact: has a binary format that is fast to transmit by being smaller than typical text or native code formats.
* Modular: programs can be split up in smaller parts that can be transmitted, cached, and consumed separately.
* Efficient: can be decoded, validated, and compiled in a fast single pass, equally with either just-in-time (JIT) or ahead-of-time (AOT) compilation.
* Streamable: allows decoding, validation, and compilation to begin as soon as possible, before all data has been seen.
* Parallelizable: allows decoding, validation, and compilation to be split into many independent parallel tasks.
* Portable: makes no architectural assumptions that are not broadly supported across modern hardware.
If webassembly is truly able to meet these goals, together with good debugging and tools, it will become the universal way to represent computation across devices and platforms.
Really cool.
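Several of those properties are easy to see in miniature. Here's the canonical minimal module -- the bytes hand-assembled per the spec's binary format -- instantiated with the same `WebAssembly` API that browsers and Node both ship (this is the standard API, though the byte-by-byte comments are my reading of the format):

```javascript
// A complete WASM module that exports add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00, // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, // code section: one 7-byte body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0, local.get 1, i32.add, end
]);

// Validation + compilation + instantiation in a couple of calls.
const mod = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(mod);

console.log(instance.exports.add(2, 3)); // 5
```

Note the whole compiled, sandboxed, language-independent program is 49 bytes -- smaller than the JS that loads it.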
About 5 years ago I tried to use an optimized JavaScript file to encode some audio in the browser before uploading; it was painfully slow and very hardware-intensive. But vmsg [1] uses wasm to ship the LAME encoder, allowing for in-browser MP3 encoding.
And secondly cloudflare allows for their workers to be written in WASM, allowing for more processor intensive apps (like resizing an image) to be completely on the edge.
I see it as eventually fulfilling the portability goals that java applets in the browsers use to have, but instead of one you have many companies agreeing on the implementation.
[1] https://github.com/Kagami/vmsg
[2] https://blog.cloudflare.com/webassembly-on-cloudflare-worker...
For image uploading, resizing is best done in the browser so you are not sending large files over a limited mobile connection. The code is ugly (it also needs to fix image rotation by reading EXIF information, because browsers are broken, and if you support IE11 there is some other voodoo).
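For anyone curious what that looks like, here's a rough sketch. The function names are made up for illustration, and the EXIF-orientation behavior of `createImageBitmap` varies by browser (the `imageOrientation` option can force it), which is part of the "voodoo" mentioned above.

```javascript
// Downscale an image client-side before upload, so large photos
// aren't pushed over a slow mobile connection.

// Pure helper: preserve aspect ratio, never upscale.
function fitWithin(width, height, maxSide) {
  const scale = Math.min(1, maxSide / Math.max(width, height));
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}

// Browser-only part (hypothetical usage; assumes `file` comes from an
// <input type="file"> change event):
async function resizeForUpload(file, maxSide = 1600) {
  // createImageBitmap honors EXIF orientation in most modern browsers.
  const bitmap = await createImageBitmap(file);
  const { width, height } = fitWithin(bitmap.width, bitmap.height, maxSide);
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  canvas.getContext("2d").drawImage(bitmap, 0, 0, width, height);
  return new Promise((resolve) => canvas.toBlob(resolve, "image/jpeg", 0.85));
}
```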
I've read articles here and there, seen some in-person demos, and honestly struggle to understand what it is and how it works relative to the current state of JavaScript frameworks.
Often I'm approaching it from a JavaScript framework (React/Vue/Angular) angle, as I'm a bit of a noob to the industry and that's generally my day job working on web applications. While I read about WebAssembly I wonder about state management, and someone tells me "oh, you still need something to do that", and I'm a bit lost on how that would work and why I would or wouldn't use one of those frameworks anyway. So many of the examples I've seen are one-off simplified widgets (and for good reason, I like those for demos), but I'm not sure I've seen them as an application or understand how that would work.
Obviously I'm missing a lot here, and on HN I feel like I'm often talking to folks who aren't front-end web devs: they're excited about the efficiencies and such, but I'm not sure how this plays out in a practical sense relative to the state of web applications as they are now.
https://hacks.mozilla.org/2017/02/a-cartoon-intro-to-webasse...
>> Wait, so what is WebAssembly?
>> WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser. So when people say that WebAssembly is fast, what they are comparing it to is JavaScript.
A better definition:
> WASM is a binary instruction format for a stack-based virtual machine[1]
This goes into some details that could answer the question raised in this thread:
> WebAssembly modules will be able to call into and out of the JavaScript context and access browser functionality through the same Web APIs accessible from JavaScript.
More useful details:
>> Engineers from the four major browser vendors have risen to the challenge and collaboratively designed a portable low-level bytecode called WebAssembly. It offers compact representation, efficient validation and compilation, and safe low to no-overhead execution. Rather than committing to a specific programming model, WebAssembly is an abstraction over modern hardware, making it language-, hardware-, and platform-independent, with use cases beyond just the Web. WebAssembly has been designed with a formal semantics from the start. [2]
More details from Wikipedia:
>> Wasm does not replace JavaScript; in order to use Wasm in browsers, users may use Emscripten SDK to compile C++ (or any other LLVM-supported language such as D or Rust) source code into a binary file which runs in the same sandbox as regular JavaScript code. ... There is no direct Document Object Model (DOM) access; however, it is possible to create proxy functions for this. [3]
I hope this helps.
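To make the "call into and out of the JavaScript context" point concrete, here is a minimal module instantiated from JS. The bytes are hand-assembled for illustration; normally a compiler toolchain (Emscripten, rustc, etc.) emits them.

```javascript
// A hand-assembled module exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // header
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                      // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                                // code section, one 7-byte body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                          // local.get 0; local.get 1; i32.add; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const sum = instance.exports.add(2, 3); // JS calling into wasm
```

The other direction works through the second argument of `WebAssembly.Instance` (or `instantiate`): an import object of JS functions the wasm code can call, which is how it reaches Web APIs today.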
It's the closing keynote talk at this year's PyCon India. From the description:
> In this talk, I live-code a simple stack machine and turn it into an interpreter capable of running WebAssembly. I then use that to play a game written in Rust.
https://codelabs.developers.google.com/codelabs/web-assembly...
So far, most of the good demos of it seem to be for gaming. But I think once b2b type companies figure out what it's capable of, a lot will change.
wasm-bindgen docs have some examples to get you started.
WASM requires a lot more knowhow to reverse engineer; I've had to do it a few times for CTFs and some blockchain-related tools that use them, and it's a lot trickier compared to JS.
You have to be willing to edit it, start renaming variables to describe what they hold, rename them again if they get something unexpected assigned, rename functions once you have a good idea of what they're trying to do, and repeat until everything has a name.
Sounds very "open" to me.
But in practice, WASM files will be exported from one tool automatically, and imported into another tool automatically. It is similar to SVG or PDF. You know what to do with it, but you don't care what is inside (what each character inside means).
From my (biased) perspective, one of the greatest potentials is for new languages with improved features, syntax, and semantics. These languages can be just as effective as javascript performance-wise to accomplish the same goals, without having to transpile to javascript, which comes with its own real costs. The interest is there, given the 100+ languages that came out as a 'replacement' for JavaScript since its inception. We'll see if language theory can usher in a better language for the web built on WebAssembly.
The other great boon is simply having more performant services or utilities that can be written in any language. If you like C syntax for doing everything non-ui then you can go ahead and write C no problem, and run it on the client's machine. Maybe we'll even get to run fully concurrent client side apps, at which point Java would be a wonderful language to jump into those capabilities, given they have a robust handle on it from a language standpoint.
WebAssembly In Action (2017) https://www.youtube.com/watch?v=DKHuEkmsx3M
> not sure how this plays out in a practical sense / relative to the state of web applications as they are now.
The way I see it, either there will be one big framework (or language) that has all the things and compiles to WebAssembly, or there will be JavaScript libraries that ship compiled WebAssembly which you can interface with.
We are still a long way from having a React like framework which compiles to WebAssembly let alone a thriving community for it.
Basically you're supposed to use it for some performance-critical parts in your application. You're not supposed to use it instead of React.
Wasm proper isn't really a technology for noobs. This is like asking "Does anyone have an introduction to x86_64 machine code, the ELF linker spec and the SysV ABI for noobs?". It's sort of the wrong part of the problem. What you want in that case is "C programming on linux for noobs".
So try googling for "emscripten tutorial" or (if you swing closer to rust) "wasm-bindgen tutorial". There are lots of other languages with wasm targets too, but quality tends to vary a lot.
But really IMHO the reason to use wasm isn't performance. Javascript interpreters are REALLY good these days for routine code. You use wasm when you need to target a big codebase in some other language to a browser, either because it's already written or because the problem area for the code isn't well suited to JS.
It's less an entire overhaul of web applications than a powerful tool to be deployed for things we don't do in the browser now, because there is no way to, or because it would be a bear to do in JavaScript due to... JavaScript being JavaScript ;)
Of course the amount of WASM to traditional JS or anything else could vary from application to application.
Granted we're predicting the future here so obviously it could be off.
This is all still traditional web development. It's just a way to target other runtimes to a sandbox in the same environment.
Going to need some clarification on how that's the case.
That said, I don't know of any webassembly decompilers, although I guess they must exist by this point. But also historically decompilers have been imperfect as some of the structure of the code is lost in the compilation process and has to be inferred, sometimes incorrectly, by the decompiler. Compare to a minifier where all you lose is the variable names, comments, and possibly helpful whitespace. All of the structure of the code is still there and there are no heuristics necessary to recreate something that resembles the original source.
https://medium.com/@pnfsoftware/reverse-engineering-webassem...
Cross-compilation has been a thing for a while now -- WASM is the followup to ASM.js, which was already being used as a compile target for languages like C.
Now, reverse engineering ASM.js is easier than reverse engineering WASM (although ASM.js is still a giant pain). And reverse engineering minified Javascript is even easier -- most competent JS engineers could debug a React project without source maps, even if it took them longer.
But it's not clear to me that WASM makes the process meaningfully harder. As in, you're still going to want to use source maps like you use today, and it'll still be totally possible to figure out what a program is doing without the original source. It'll just be a pain.
And the benefits to the web as an open, language-agnostic platform that can be used for memory-intensive tasks outweigh the downsides of needing to work harder to reverse engineer software.
No, we can't; it's a valid criticism and it's not going to go away. Minified JS is bad, webassembly is worse.
We can debate about minified JS if you so desire, but it's a different debate.
Your real sentence should be "the RISC-V instruction set is open, therefore I can see whatever a binary is doing via the instructions it executes." That doesn't mean it's free, and it doesn't mean it's easy to reverse engineer whatever the binary is doing, but you have everything you need to do it.
This to me makes the browser vastly preferable to native apps. I didn't realize that the desktop app I use to easily translate languages[0] sends every keystroke to Google Analytics until I went to the trouble of installing a proxy. Meanwhile this analysis is just an Opt-Cmd-I away in the browser.
[0]: https://apps.apple.com/us/app/translate-tab/id458887729
The good parts of the web in terms of debugging is the separation of concerns -- having separate interfaces for CSS, HTML, network requests, and the DOM, and having each of those interfaces be relatively inspectable.
I am a little worried about frameworks that target WASM spitting everything onto a Canvas, bypassing HTML and CSS (*cough* Qt *cough*). That would be a substantial loss for the open web. But I don't lose any sleep over the idea of replacing Javascript.
(Disclaimer: I've never looked at Google's analytics .js files and that may not be possible for some technical reason unknown to me)
You don’t need javascript for this. People reverse engineer native binaries all the time. Reversing wasm isn’t much more difficult than minified javascript as my sibling commenter states.
The main challenge is that variable and function names are not available, but minified js is no better in that regard.
That's not true. There's nothing "free and open" about the tracking code embedded in every modern site, or the javascript blobs you get when you visit Google or Facebook. Minified/obfuscated Javascript is no different from a binary blob, except that it's much less efficient. Your chances of reverse-engineering one of those is about the same as reverse-engineering a wasm blob. Just because one is technically "human-readable" plaintext and the other binary doesn't make a difference, since you can't actually read either of them.
I don't know much about web assembly, but x86, which is much more complicated with thousands of instructions, has been successfully reverse engineered basically since forever. There are decompilers that can automatically reconstruct source code in C or C++ from a binary blob.
Compared to javascript, the best you can hope for is to just format the code so it's in a more readable structure, but that isn't going to untangle purposefully obfuscated logic. Add to that the fact that even a regular javascript program is an untyped mess, and it becomes clear that anyone specifically trying to confuse readers will have a very easy time of doing so. There are a lot of messy things you can do in javascript, almost COBOL levels of messy.
Also, I'm curious about this
> but as someone who ran a small business in high school that usually involved reverse engineering obfuscated Javascript,
What type of clients paid you to reverse engineer obfuscated javascript? Malware research? Something else?
Just wait 5 more years until 80% of the web switches to React / Vue / TheNewHypeSPAFramework and, with or without WASM, you will be unable to browse "js off".
The blame here is not on WASM but on the abuse of client-side rendering and "everything as an App" when most pages are just barely interactive documents.
The Web succeeded where Flash / ActiveX / JavaApplet / Silverlight failed because:
- it was open
- it was document oriented.
We tend to forget that a bit too easily.
Minified Facebook or Google trackers were never libre or meant to be easily reversed. Web apps like Google Drive aren't free either just because you can run it in a browser on Linux. You aren't supposed to (legally?) be able to modify it and nor would you be able to in many cases where they try. It's just as proprietary as Microsoft Office. There are proprietary tools to do even more advanced obfuscation on top of minification (adds red herring code paths that do nothing), which some JavaScript malware vendors use to protect their implementations.
What we really want is libre JavaScript/WASM where vendors include permissive licenses and source maps or links to download the high-level source. That's free software. The "free and open" web never really existed de jure; publishers' laziness to obfuscate created a de facto free and open web. Libreness depends on access to high-level source, not reversibility, or else Photoshop is free too because you can attach a debugger to it.
WASM just exposes the truth that the web was an app store all along.
You can convert the binary files to/from the text (Lisp-like) format with readily available tools.
Also, the binary format is easily parsed; I made a parser with Kaitai Struct in like an afternoon.
If all websites made their source available as well as distributing the binary, there wouldn't be a problem.
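To back up the "easily parsed" claim: after the 8-byte header, a module is just a flat sequence of (section id, LEB128 size, payload) records, so a section lister fits in a dozen lines. A sketch (the module bytes here are a hand-assembled example that exports an `add` function):

```javascript
// List the section ids of a wasm binary without understanding any payloads.
function listSections(bytes) {
  const ids = [];
  let i = 8; // skip "\0asm" magic + version
  while (i < bytes.length) {
    const id = bytes[i++];
    // Read the LEB128-encoded section size.
    let size = 0, shift = 0, b;
    do { b = bytes[i++]; size |= (b & 0x7f) << shift; shift += 7; } while (b & 0x80);
    ids.push(id);
    i += size; // skip the payload entirely
  }
  return ids;
}

// A tiny module with type (1), function (3), export (7), and code (10) sections.
const moduleBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  0x03, 0x02, 0x01, 0x00,
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

const sections = listSections(moduleBytes); // [1, 3, 7, 10]
```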
That's not the problem source maps are meant to solve. They exist to debug transpiled code.
> The source for GNU and Linux is viewable by everyone, which negates the inability to view what is happening inside a binary.
That's not true. It is non-trivial to verify that the binary you received was built with the source code that's openly available. The point of FOSS is that you always have the option to build your own binaries so that you can be 100% certain of what is running on your machine. Most people aren't going to do that, so they need to place their trust on a third party (like whoever built their kernel). FOSS just makes that trust optional instead of mandatory (like it is with something like Windows)
It stopped working as enforcement ages ago, though.
More importantly, however, anything GPL must make source available and reasonably accessible. There is no such guarantee or even expectation for random programs on the web.
As for debugging, this is not a particularly hard or fundamental problem. It’s basically solved already.
https://developers.google.com/web/updates/2019/12/webassembl...
Just because you _can_ use the compilation step to (go some way to) hide your source doesn't mean you _have_ to. And relying on your secret sauce being private while you publish it in obfuscated form for all the world to decipher feels like a losing strategy.
I don't think there's any particular reason that WASM has to be more obfuscated than JS. You can already throw a WASM file into a bytecode-to-text translator which is about as useful as deobfuscating a minified JS file, and I assume decompiling/debugging tools will only get better in the future.
For a long time now, I've been thinking of a future where your OS properly isolates all the programs that run on it and even gives us the ability to have direct control over how programs interact with the rest of the system. OS's seem too mired in backwards-compatibility requirements to make big changes like that any time soon, but that's basically the way our browsers already work. Download some code, and execute it (relatively) safely because it's sandboxed from the rest of the system. Our browsers are basically the new OS, and this time around we can do it right using what we learned from OS's (and hopefully backport these browser features into the next generation of OS's).
For example, an app asks for a filesystem handle. You can hand it one that refers to a real location on your OS fs, or you can hand it a completely virtual fs that won't affect anything else on the system.
Whenever an app asks for a resource, being able to hand it a virtual or sandboxed one instead is a huge gain for user-control.
If you really think this is true, look into Google's recaptcha blob.
"By the end of 1990, the first web page was served on the open internet, and in 1991, people outside of CERN were invited to join this new web community.
As the web began to grow, Tim realised that its true potential would only be unleashed if anyone, anywhere could use it without paying a fee or having to ask for permission.
He explains: “Had the technology been proprietary, and in my total control, it would probably not have taken off. You can’t propose that something be a universal space and at the same time keep control of it.”
So, Tim and others advocated to ensure that CERN would agree to make the underlying code available on a royalty-free basis, forever."
Can you see how rude it is not to do the same?
2. The underlying code of the web's infrastructure is available on a royalty-free basis, and shall remain as such!! There's immense benefit in maintaining this equal-opportunity status-quo.
React for example shifting towards functional programming makes FE apps simpler and predictable.
If you dislike the HTML/CSS UI layer then WA is indeed an alternative. However you need to reimplement everything, like text selection, right click, focus, accessibility, dropdowns, etc, because all you have is a <canvas> to draw on.
But! WA will eventually have DOM access, that will definitely open up the landscape to create new frontend frameworks.
The three largest are Yew (Rust), Vugu (Vue-esque but with Go instead of JS), and Blazor (C#):
https://github.com/yewstack/yew
https://github.com/vugu/vugu
https://github.com/aspnet/Blazor
All are perfectly viable for production apps as of today and not much more difficult than writing React, given you have some familiarity with their implementation language.
Technical question: 1.) What is the speed of WebAssembly on iOS WebKit and Android WebView? 2.) Is it feasible to write an entire app UI in something like Qt and target WebAssembly? 3.) Android, iOS, Windows versions of the app are Qt apps natively or possibly through the device's WebKit.
Is this possible today? Is there a better UI library than Qt for this?
I can get lit-element with no build process going to prototype something in a single .html file in probably 30 seconds. I don't think Xcode/iOS developers can compete with that simplicity. Define initial state, alter it through events, pull data through fetch(), write HTML. I know for a fact iOS development isn't that simple.
EDIT: Here's an article from yesterday about this: https://developers.google.com/web/updates/2019/12/webassembl...
The "arrival" of WebAssembly at the W3C only means that the W3C has finally woken up from its eternal slumber and realized that everyone has already implemented the listed features.
I guess in a sense you already have the JS engine installed with, say, Chrome, which is essentially a runtime.
Why can't other runtimes come prepackaged? What am I missing here?
Getting the JS engine to be fast was a tremendous amount of trouble, and you wouldn't want to sink as much engineering power into another runtime that will have a smaller reach than the JS engine.
Better to use those skills to make WASM faster.
As if that's not already happening today with obfuscated and minified javascript
Also, the use cases for javascript and webassembly don't overlap enough for one to replace the other in most cases. Javascript is a text-based scripting language you can write in any editor (the sprawling morass that is the current js development ecosystem notwithstanding,) but WASM requires knowing another language and having a compile step, which adds friction and complexity. You can't really replace a language with a bytecode.
It may mean that other languages can be deployed in the browser as easily as javascript, which may or may not be good depending on your point of view.
I personally look forward to the day when Hacker News embeds the Arc runtime as WASM and replaces all of their javascript with Arc code.
That's exactly what I hope happens. There are so many good languages out there that having to be stuck with javascript for the modern web is a crime.
> WASM requires knowing another language and having a compile step, which adds friction and complexity. You can't really replace a language with a bytecode.
I'm not sure that requiring a build step is a real problem, for web 'applications' at least. Plenty of JavaScript frameworks use a build step anyway.
For example, if Adobe delivers its new mobile Photoshop directly via iPad Safari will Apple have a way of 'nerfing' WebAssembly to stop them?
Companies will ask their developers to deliver WASM resources in the name of performance. As a notable side-effect, it will become harder to review how websites work.
Yes, I'm sure there will be reverse-compilation tools for WASM, but still.
It is possible to disable half of the paywalls on the web just by looking at the JS code. I can see why certain parties would push really hard to introduce Webassembly. The performance argument is just a pretext because today's jit compilers for Javascript are really good.
Shhh! Don't tell them!
In practice, there's nothing that WebAssembly offers that could hinder analysis even further. If websites want to be transparent they could provide the sources (akin to providing unminified/unobfuscated JavaScript).
And minified JS isn't particularly more reviewable I think
webassembly is encouraging websites to dump megabytes of binary code in browsers.
Obfuscated, analysis-resistant code to ensure that people cannot disable ads and tracking.
It's appalling that we are accepting this.
For example, it doesn't support arbitrary computed gotos. Instead it supports block-based control flow, where the semantics can either create blocks or jump out of them to enclosing blocks. This makes it possible to construct a CFG statically, which wouldn't be possible if it supported arbitrary computed gotos.
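You can see the consequence of that design in a few hand-assembled bytes (an illustrative sketch, not spec text): branches carry a label depth into enclosing blocks, and a depth with no corresponding enclosing block fails validation, so there is no way to encode a jump to an arbitrary computed address.

```javascript
// A module with one function whose body is: block ; br 0 ; end
// i.e. a branch that targets the end of its own enclosing block.
const structured = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // header
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type: () -> ()
  0x03, 0x02, 0x01, 0x00,                          // one function of type 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                    // code section, 7-byte body, no locals
  0x02, 0x40,                                      // block (empty result type)
  0x0c, 0x00,                                      // br 0 (to end of that block)
  0x0b,                                            // end of block
  0x0b,                                            // end of function
]);
const valid = WebAssembly.validate(structured); // branch targets an enclosing block: ok

// Changing the label depth to one with no enclosing block (br 5) is
// rejected during validation, before anything runs.
const broken = Uint8Array.from(structured);
broken[26] = 0x05; // the label-depth byte of the br
const invalid = WebAssembly.validate(broken);
```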
Please
Pretty much all relevant browsers support wasm (and I suspect data for the few non-compliant Chinese browsers may be outdated, they’re definitely not last released in 2016 or 2017). Do you have some other idea of wasm-capable?
There are more features coming to WebAssembly in the browser, including hopefully a way to feature detect in the WebAssembly binary instead of the way it's done now, which is to compile as many binaries as you need for combinations of features, feature test in JS, and then pick which wasm module to load.
https://github.com/WebAssembly/binaryen/blob/master/src/tool...
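The JS side of that pattern looks roughly like this. The probe below only tests baseline wasm support; real per-feature probes (as in the wasm-feature-detect library) are tiny modules containing one instruction from the feature in question, and the file names here are hypothetical.

```javascript
// Probe table: name -> bytes of a tiny module that only validates
// if the engine supports the corresponding feature.
const probes = {
  // Baseline probe: the 8-byte empty module. A real SIMD or threads
  // probe would contain an actual instruction from that proposal.
  mvp: new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]),
};

// Return the first binary the engine can validate, else a JS fallback.
function pickModuleURL(probeTable) {
  for (const [name, bytes] of Object.entries(probeTable)) {
    if (typeof WebAssembly === "object" && WebAssembly.validate(bytes)) {
      return `app.${name}.wasm`; // hypothetical naming scheme
    }
  }
  return "app.fallback.js";
}

const url = pickModuleURL(probes);
```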
* Sandbox. WASM is a sandbox. Think of WASM as a language agnostic replacement for Flash. It is an island in a webpage isolated from that page. This is great for security so that the opaque binary running in that WASM island cannot modify the page in such ways as to violate the same origin policy, such as turning a button into a hyperlink with a user's personal details attached as query parameters on a malicious third party URL. It also means code executing in WASM cannot interact with the surrounding page, which means it isn't a JavaScript replacement. The developers of the WASM standard have been very clear that WASM will not ever be a JavaScript replacement.
To be fair there is work being done, several years in the making, to provide web-like technologies to WASM instances, such as a DOM. This enhancement eases some concerns of overhead, see the next bullet point, but they won't break security or escape the sandbox. This enhancement might go so far as allowing the containing page interaction with the WASM instance, but I suspect this would be limited, if at all, and only cross the sandbox barrier in one direction. There is huge interest in WASM and page interaction, but this is all coming from developers who want alternatives to JavaScript. There is no prevailing business interest in advanced WASM interaction.
* Overhead. Since WASM is a self-contained sandbox the incoming binary needs everything an application binary would otherwise need to execute as an application. For example if a WASM instance wants to pretend to be web page instance then it has to include its own DOM library, interaction code, presentation, and absolutely everything else. If accessibility is a concern you would likely have to reinvent how that works in your WASM instance. While this is of great interest for developers who hate JavaScript there isn't a lot of business value in that.
* Performance. So far WASM has not been able to significantly outperform JavaScript. In many cases WASM code performs slower than JavaScript. Without a large performance differentiation there is little business justification to invest in WASM. Flash's claim to fame is that for most of Flash's life it did significantly outperform JavaScript by an order of magnitude. JavaScript never got faster than Flash, but it did almost completely close the performance gap. Once JavaScript got fast Flash started dying.
---
There is potential business value in WASM, but you have to be willing to abandon any consideration that you are executing in a web page. Selling the idea to business owners that you are executing in a web page, but you need to imagine that you aren't is a tough sell. Here are some ideas that might work better in WASM than the typical web environment:
* Document interaction with digital signatures from physical tokens.
* Streaming media players with embedded DRM.
* Certificate negotiation for an end-to-end encrypted messaging tunnel transmitted via web page.
It can do so via JavaScript glue for now, but there are spec proposals to allow direct access to the DOM.
I bet in a few years we will run wasm natively on the processor.
The processor would have to translate the wasm into some form of register-based operations anyway, and a JIT compiler might be able to do this more efficiently, since it can see a bigger picture than the processor.
Wasm is in this interesting spot where it's relatively low-level, but high-level enough to allow proper sandboxing. It strikes me that the arrangements to accommodate that, like call indirection via tables, could be optimized on CPU level. Not necessarily in a sense of a CPU that directly runs wasm, but rather a CPU architecture which is optimized to be a target for JIT or AOT compilers from wasm.
It looks like Gary Bernhardt was pretty spot on in his talk "The Birth and Death of JavaScript": (https://www.destroyallsoftware.com/talks/the-birth-and-death...)
Instruction caches (and JITs to a degree) solve the same problem in much more general ways. That's why Azul went out of their way to create an appliance to run Java code with custom CPUs, and ended up with a pretty standard RISC for the most part.
All of that applies to WASM machines too.
In contrast, WebAssembly would be a terrible ISA to implement directly in logic; there are no relative offsets, so simply incrementing an instruction pointer register (something hardware is very good at) doesn't work. Function calls are referred to by an index into a table at the bytecode level, so you're inherently dealing with a level of indirection that you'd have to work around by "flattening" or inlining the table values before execution; alternatively, you'd have to bite the bullet, put limits on the table size, and eat the cost of the indirection. Similarly, CFG blocks in WASM are represented as literal scoped blocks with nested instructions at the syntax level, not simple jumps/calls. You need to extract the back/forward edges from the CFG to recover that information and translate it into direct jump operations.
At this point, you are just implementing a compiler, and if you choose to do it in hardware, you are willingly trying to shoot yourself in the foot (or the face), and it will end badly. You're far better off calling a spade a spade and compiling, in software, to a representation that actually can be implemented efficiently in hardware. But you could of course hide this compiler in the firmware to make it "seem" like WebAssembly is the native ISA, and the small surface area of the specification can help ensure you do it safely and correctly. This is probably for the best anyway, because it's dramatically harder to design correct hardware vs correct software.