I always feel like the most obvious use for it is to start writing truly hateful and abusive code.
I'm sure this is because I'm getting old.
You already can't just read someone's JavaScript code if they're using a transpiler or uglify or something like that; it looks like line noise, and you basically have to go through a lot of work to reverse engineer it. WebAssembly is no worse.
The JavaScript environment on the web reminds me of the "walled gardens" that were Lisp machines back in the day, that force you to write code in Lisp, or if you wanted to write in C you would end up with a bit of a nightmare on your hands. The Lisp machines were beautiful and integrated and the source code was everywhere, but they weren't for everybody, and in the end it was the diversity of Windows, Unix, and Mac OS that replaced them.
WebAssembly is Unix, JavaScript is a Lisp machine.
It's pretty straightforward. Both tend towards the Big Binary Blackbox Blob.
It's true that WebAssembly has advantages Flash, Java, and Silverlight didn't really have in terms of being freely reimplementable and (potentially) native to the browser. And it's probably a good thing that a browser can be a VM via a target-intended subset of JS. BBBB may be the right thing for some applications.
But to the extent that the browser becomes something developers see primarily as The VM That Lived (and there are clearly a lot of developers in this boat) yes, we're forgetting lessons we should have already learned from the difference between Flash/Java and the open web.
> You already can't just read someone's JavaScript code if they're using a transpiler or uglify or something like that; it looks like line noise, and you basically have to go through a lot of work to reverse engineer it. WebAssembly is no worse.
I'm clearly in the minority, but I had qualms about uglify and other source manglers from the beginning for the same reasons people have qualms about Web Assembly: they break the benefits of view source.
I get that they can also be tools that help squeeze out performance, and I use them selectively for that reason, but as far as I can tell, most of the web development world uses this as an excuse to stop actually thinking about the issue -- and for that matter, to stop thinking about how they're putting together their web app ("we're using minification and gzip and doing an SPA using Ember/Angular/LatestDesktopLikeFramework because that's what professionals do now, why do we have performance problems?").
Similarly, I've seen a lot of people use compiling to JS as an end run around what are essentially aesthetic/subjective issues with JS as a language, when they'd probably do just as well spending more time learning to use it (as far as I can tell in years of working with all of them, JS is in exactly the same league as Python, Ruby, Perl, and other similar dynamic languages). That doesn't mean there are no beneficial cases for JS as a target (personally, I'm intrigued by Elm), but I think I'm justified in being afraid that people will use it as insulation from superficial problems.
> that force you to write code in Lisp
Lisp doesn't force you to write code in Lisp. That's one of the reasons why it's awesome -- and potentially horrible. Transpiling/compiling can be similarly awesome and potentially horrible for a lot of the same reasons.
The open web has nothing to do with "view source". It never did. "View source" just makes debugging easier, it's a technical solution to a technical problem. It's why we use JSON or XML instead of ASN.1 for our daily work.
The open web is a web where no company is the gatekeeper. It's a web where we have multiple browsers, competing JS engines, et cetera. No one company can hold the web hostage. This is exactly why Flash, Java, ActiveX, and Silverlight failed. Every one of those technologies was owned by a single company. Every one of those technologies failed because it's impossible for one company to be the gatekeeper for a technology that is supposed to run on billions of heterogeneous devices.
So the lesson of Flash is this: don't put all of your eggs in the Adobe basket.
Meanwhile, you're fighting against WebAssembly because you are ideologically against black box software. Getting rid of WebAssembly does not actually achieve that goal. It's like you're fighting against condoms because they encourage promiscuity. The promiscuity is already there, and condoms just make it a better experience for everyone involved.
----
Footnote: JavaScript is on par with Python/Ruby/Lisp in a lot of ways, but there are some important deficiencies with respect to typing and static analysis, deficiencies which cause actual bugs in real world problems and cost developers time and money to deal with. It's why we've been inventing TypeScript, Flow, Dart, CoffeeScript, et cetera. You say that other people's problems with JS are "superficial", but that's exactly how I see your problems with WebAssembly.
People are going to write code in C++ because they can hit performance targets on platforms where they ship native code, and WebAssembly means a lot to the folks working with Unity, Unreal, or thousands of other existing projects that can now target the open web. The new open web, with WebAssembly, is an open web with more diversity than ever before, rather than a JavaScript monoculture.
There's a big difference compared to strongly typed dynamic languages that go out of their way to catch programming mistakes at runtime. With JS you get some "wtf" result when things go wrong, and it propagates a long way before manifesting (if it's caught at all). Combine that with JS's many weird semantics and special cases that are hard to keep in mind all at once...
Recent favourite (a minimized case, after discovering unexpected piles of NaNs in some faraway place):
> parseInt("0")
0
> parseInt("1")
1
> parseInt("2")
2
> ["0", "1", "2"].map(parseInt)
[ 0, NaN, NaN ]

I don't know about Perl, but Python and Ruby both give you a type error for [] + {} or {} + [] or 1 + "1". And both Python and Ruby give you an error when you do something like "".notavalue, whereas JavaScript just gives you undefined. They might look similar enough visually, but under the hood they are different in terms of strictness and types.
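The NaNs in the minimized case above have a tidy explanation: Array.prototype.map calls its callback with (element, index, array), and parseInt treats a second argument as the radix. A sketch of the failure and the usual fixes:

```javascript
// map passes (element, index, array); parseInt reads the index as a radix:
//   parseInt("0", 0) -> 0    (radix 0 falls back to base 10)
//   parseInt("1", 1) -> NaN  (1 is not a valid radix)
//   parseInt("2", 2) -> NaN  ("2" is not a binary digit)
var broken = ["0", "1", "2"].map(parseInt); // [0, NaN, NaN]

// Fix by pinning the radix, or use Number, which takes a single argument:
var fixed = ["0", "1", "2"].map(function (s) { return parseInt(s, 10); });
var alsoFixed = ["0", "1", "2"].map(Number); // both give [0, 1, 2]
```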
Not really: we have source maps for this.
The benefits are not that great these days. At least not for people who don't support a web-ad monetization model or have no interest in being on the VC boat-to-float model.
A huge part of the project is to ensure decompilation to asm.js
Lisp machines come with Fortran and C compilers.
The irony of this is that almost no UNIX offers a compelling, complete developer integration outside of things which produce slavishly detailed C-compat layers that introduce an ever growing number of undefined behaviors.
It also ignores that many Lisp machines actually shipped with compilers for competing languages. LispMs just had a lot of work (for the time) done on optimization for Lisp environments. We take for granted the trivial execution overhead of interactive environments, but this was no small feat back in the day.
Isn't this an argument against WebAssembly? We've already made the mistake once.
Most of the program logic can remain standard Javascript, but the little kernels of hot numerical code can be much more effectively optimized.
This gives Javascript/browsers the ability to handle problems which were previously only possible to tackle with native C programs.
I expect SIMD support can be added to wasm at some future date.
JavaScript will continue to be dominant, but we desperately need to be able to write things in the language of our choice.
There's a reason that C became the dominant language during the 80s and early 90s: it's because Win32, UNIX, and MacOS >=7 were all written in it. That's a large part of what Worse is Better was about. Richard Gabriel founded a company to write software for Lisp Machines, pivoted it to run Lisp on commodity hardware, found that all of his customers would rather just write in C, pivoted it again to do a C++ dev environment, and eventually went out of business.
The renaissance for other languages was really during the web era, when everything just spit out HTML and it didn't matter what the server was written in. Once customers started demanding rich interactivity on the client, there was a strong incentive to write everything in Javascript, and then a strong incentive to write the servers in Javascript too, and then a strong incentive to use Javascript for other things like native apps and IoT devices too.
You are better off re-inventing Java or Python, but in a container and streamed over the Internet from a URL.
I've used a lot of different languages including Java, C#, C++, AVR Assembly, Python and others but JavaScript is my favourite and I would not want to go back.
I think it's a shame that some people just didn't seriously try JavaScript. It's a very powerful, expressive language.
Also, testing with JS is amazing - Especially unit testing on Node.js. It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing).
Also, JS is great for writing asynchronous logic.
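For readers who haven't seen it, the runtime stubbing praised above is just property assignment. A minimal sketch with a hypothetical `db` module and no test framework:

```javascript
// Hypothetical module under test: greet() depends on db.fetchUser().
var db = {
  fetchUser: function (id) { throw new Error("would hit the network"); }
};

function greet(userId) {
  var user = db.fetchUser(userId);
  return "Hello, " + user.name;
}

// In a unit test, overwrite the method at runtime, then restore it:
var original = db.fetchUser;
db.fetchUser = function (id) { return { id: id, name: "stub" }; };
console.log(greet(42)); // "Hello, stub"
db.fetchUser = original;
```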
I can't believe people actually like it. It might be understandable if you're comparing it to enterprisey Java, but I'm baffled that anyone could prefer ES5 to Python or Ruby. (I will acknowledge that ES6 puts it somewhere in the area of Python 2.5).
It's an incredibly powerful, expressive language.
Not if you want super advanced features like a hashtable with non-string keys, or checking if two objects are equal.
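To be fair to the sarcasm: ES2015 does add Map, though object keys still compare by reference identity, and there is still no built-in deep equality. A sketch:

```javascript
// ES2015 Map accepts non-string keys, but object keys use identity:
var m = new Map();
var key = { id: 1 };
m.set(key, "value");
m.get(key);       // "value"
m.get({ id: 1 }); // undefined -- a structurally equal object is a different key

// Still no built-in deep equality; a naive JSON-based check (key-order
// sensitive, drops undefined, chokes on cycles) is a common workaround:
function deepishEqual(a, b) {
  return JSON.stringify(a) === JSON.stringify(b);
}
```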
It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing).
As does any other dynamic language.
Also, JS is great for writing asynchronous logic.
As is any other language with first-class functions. And with others you don't have to do silly contortions to work around JavaScript's broken "this".
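A small example of the contortions in question -- a method silently loses its receiver when passed around as a bare function:

```javascript
var counter = {
  count: 0,
  inc: function () { this.count += 1; }
};

var inc = counter.inc;
// inc() would not touch counter.count: called bare, "this" is the global
// object (or undefined in strict mode), not counter.

// The usual workarounds: bind the receiver, or pass a thisArg explicitly.
var bound = counter.inc.bind(counter);
bound();                                              // counter.count === 1
[1, 2].forEach(function () { this.inc(); }, counter); // counter.count === 3
```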
I think it's a shame that some people just didn't seriously try JavaScript.
Tried it, have written it professionally for many years, and as a result am very much looking forward to WebAssembly.
Do you believe you can satisfy every programmer out there with a single language? Of course not. Why did you have to use all the languages you listed? Because some made sense in a specific context, others didn't.
> Also, testing with JS is amazing - Especially unit testing on Node.js. It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing). Also, JS is great for writing asynchronous logic.
Testing with Python is also amazing. It doesn't matter how amazing it is if I hate writing Python code.
No, the reality is that, in 5/10 years, JavaScript skills won't matter, only a good knowledge of the DOM and Web APIs. In fact I'm pretty sure you'll see more openings for C++ developers on the front-end than JavaScript ones.
> It lets you do stuff like redefine entire objects, properties or methods at runtime
Sounds horrifying to me, because, as in Ruby[1], library authors will decide that's a good idea. Typeclasses/protocols solve this problem perfectly, while maintaining type safety.
[1]: for some reason, this seems to be less of an issue in Python and Obj-C, even though it's totally doable?
== vs ===
!==
hasOwnProperty
> Also, testing with JS is amazing - Especially unit testing on Node.js. It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing).
Doable in Common Lisp for 21 years…
> Also, JS is great for writing asynchronous logic.
ITYM JavaScript has first-class functions. So does Lisp, so does Python, so does Go…
And, no, JavaScript is nowhere near a "powerful, expressive language". It is embarrassingly low-level for a supposed scripting language and it does not provide any powerful productivity features whatsoever.
JS is also a nightmare for implementers; it does not have a sane specification, therefore most of the tooling is not comprehensive.
But I'll never understand who thought this asynchronous API was a good idea.
I just wanted to draw pictures on a canvas _in order_, because they should overlap. A common task, you would think. I ended up building a monadic builder for callback chains that creates a JavaScript string which is then eval'd. I felt like this language and the API were incredibly cumbersome, minimalistic and limited. It lacks a blocking API, monad support, DSL support, macros and lazy evaluation.
But maybe there is a simple solution to that that I'm not aware of.
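For what it's worth, there is a simpler route than an eval'd callback builder: sequence the loads with promises. Only the ordering helper below is concrete; the loadImage/drawImage usage is a hypothetical sketch:

```javascript
// Run promise-returning tasks strictly one after another, keeping results
// in order -- enough to make overlapping canvas draws deterministic.
function inOrder(tasks) {
  return tasks.reduce(function (chain, task) {
    return chain.then(function (results) {
      return task().then(function (result) {
        results.push(result);
        return results;
      });
    });
  }, Promise.resolve([]));
}

// Hypothetical usage with image loading and a canvas context:
//   inOrder(urls.map(function (url) {
//     return function () { return loadImage(url); };
//   })).then(function (images) {
//     images.forEach(function (img) { ctx.drawImage(img, 0, 0); });
//   });
```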
If we turn the web into an anything-goes bonanza using binary code without any keys, credentials, or permissions, WebAssembly (and Turing-complete CSS as well) may well be the beginning of the end of the web as we know it, giving rise to a new, leaner and more restricted platform (for which some are already on the lookout, BTW).
[Edit] A small real-world example: a client asks me to implement a third-party plugin to allow them direct communications with their users via their website. A quick scan of the source code tells me in a minute that the script isn't just doing that, but is also tracking user behavior and phoning home related data. Now I can ask the client if they really want to expose their users to this. With WebAssembly, there's no chance to do so.
You could look at the APIs it's using - why is it calling XMLHttpRequest or looking at the user's cookies? - but you can do the same with binaries, you just have to use a tool, e.g. 'nm -D <binary>' shows you the external functions the program calls.
I held the same position until circa 2008 (?), when JS minifying became truly widespread; nowadays, I think that battle is already lost.
Require the source. Download compiled code from trusted sources.
Download a NuGet package: DLLs. Download an apt package: binaries and .so files. This won't blow up the web.
It's not like a bunch of minified JS is going to be "quick" to go through.
Flash presents a single platform with a single vendor that can innovate as quickly as they like. The web platform is inevitably cumbersome and slow in comparison -- over a decade later they're still playing catch-up.
I'm looking forward to WebGL ads that eat battery life with excessive shaders.
I remember watching a vector animation version of a "tell-tale heart" in flash in 1998. On a 28.8 connection it played smoothly full-screen on a 120mhz pentium with 16mb of ram. I remember clicking the play button and having it just miraculously starting to play without any wait, streaming down and uncompressing in real time. I was floored by it. 12 years later I was invited to watch a spiderman animation demo using HTML5 ... there was a 30 second load time, the framerate was probably 1fps, the audio didn't work, the content didn't render properly ...
It's like how Microsoft was able to pull off nearly everything we can do today by shoe-horning their ActiveX technology into IE3. Just load a bunch of CAB files and drop them into the page like OLE components and bam, you've got just about everything. The interactivity could bootstrap -- that is, not need any extra plugin -- and you could engage with the other content on the site using VBScript or JavaScript interfaces in a two-way manner. It was pretty nice.
Netscape retorted with their JVM integration but it just wasn't the same ...
At no point has the debate between Flash and HTML 5 content ever included "Content creators will have an easier time with HTML 5". It's an ecosystem that has to develop over time, and HTML 5 the content platform has only approached feature parity with Flash in the past couple of years.
As opposed to flash ads that eat battery life and bring the browser to its knees?
The whole "anyone can look at it, learn from it" part is gone?
We’re steering towards more proprietary code.
Say, for example, I want to run Google's "Star Wars" easter egg in Firefox. With JS, I could grep through the de-uglified and de-obfuscated code quickly, and find that its user-agent detection would work if I'd just append "AppleWebKit Chrome/45.0.0.0" to the UserAgent. With WebAssembly, it would take far more effort.
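The kind of gate being grepped for is usually a one-liner. A hypothetical sketch (the function and regexes are illustrative, not Google's actual check):

```javascript
// Hypothetical user-agent sniff of the sort found in de-obfuscated code.
// It passes for any UA string containing both tokens, which is why simply
// appending them to Firefox's UserAgent string is enough to unlock the feature.
function supportsEasterEgg(ua) {
  return /AppleWebKit/.test(ua) && /Chrome\/4[5-9]/.test(ua);
}
```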
1. Performance critical code 2. Sneaky stuff I really don't want the user to be able to read.
Category two seems like exactly the sort of thing for which an author would absolutely want to ship binary code that executes without user inspection.
It seems like right now the focus is quite rightly on #1, but that #2 seems like it will inevitably become an issue.
The user is already going to have a hard time reading the unuglified JS. If they are looking for "phoning home" they have to search for WebSocket sends and ajax calls. Both of those will have well-defined APIs that will be just as easy to spot in disassembled wasm as they are in unuglified JS. I bet they'll be even easier to spot.
> I always feel like the most obvious use for it is to start writing truly hateful and abusive code.
Sure, but it will also allow developers to have a better experience by letting them choose the language they want, instead of being forced to use JavaScript. Choice is good; it's something you cannot deny.
Only if you miss the obvious and blatant technical and license differences.
Current (desktop) operating systems were not designed for a world where you routinely run code from someone on the other side of the planet who you have no relationship to, and don't trust, so you can see cat pics or read a forum.
Browsers have a lot of problems, but their security model and ephemeral install model are inspired designs, which directly enable the safety of the modern internet.
Having to fall back to classic desktop apps for real speed or power is a terrible thing for end-user security. Either browsers need to get more powerful, or desktop OSes need to take on a browser-like security model.
So the process model is not fundamentally different than the browser model, but WebAssembly enjoys two advantages:
1. The browser security model sagely segmented privileges by origin rather than user.
2. Like bytecode, WebAssembly AST does not target a specific processor.
I see no reason why each domain couldn't have a chroot for example, the browser doesn't need to implement those things.
Actually it can sandbox them already.
Web browsers have been like operating systems all along, because they execute programs, albeit with different performance and safety characteristics. That the two are converging upon the same solution to the problem of hosting apps should be reassuring, not concerning.
And before that in the 80s, there was UCSD Pascal. I know it was available for the Apple ][ (used it in high school) and the IBM PC (one of three operating systems available when IBM launched the IBM PC in August of 1981) and probably a few other platforms I'm blanking on. A defined VM and "executables" could run on any platform running UCSD Pascal.
And even before that, IBM pioneered VMs for their own hardware, which is probably what inspired UCSD Pascal in the first place.
- The JVM takes forever to spin up
- The JVM tries to do too much with tons of class libraries
- The JVM is insecure
- The JVM is proprietary. While there are open source implementations, it is still tethered to Sun and now Oracle. They call the shots on the features and have sued both Google and Microsoft for implementing their own versions.
Similar arguments can be made against Flash. We shouldn't expect WebAssembly to have the same pitfalls, since
- WebAssembly does not take forever to spin up
- WebAssembly doesn't try to do too much. There is no huge standard library. For now it doesn't even include a GC.
- WebAssembly isn't insecure. Why would it be? I assume applet exploits are a product of the large standard library (more attack vectors) and privilege escalation (certain exploits let you break out of its security settings to gain control). All of this seems like it's because web applets are monkeypatched on top of the existing JVM.
- WebAssembly isn't proprietary.

Cross platform is a red herring. The iOS and Android app stores are not cross platform. Ease of use is also increasingly a red herring. The Windows and Mac app stores have been around for a long time and are quite easy to use. Yet they have not ushered in a shift away from Web apps on the desktop.
I think we should be looking at why Web apps have been successful on the desktop rather than pretending they have no advantages.
I think there is a perception of "try before you buy" with web apps that is appealing. Even when apps/programs are free, you feel like you are giving something away by installing them.
The number of layers in our software stacks grows faster than Moore's law can handle.
Er, because that would be really silly.
I like reading HN from time to time. I would never install an app, because I don't use it frequently enough. I definitely would never go through the pain of installing a HN app every time I wanted to read HN. I really doubt I'm alone or even abnormal in that regard.
That's the beauty of a browser: I can be reading HN in under a second when I want to, with no cluttering of my desktop just so I can read HN from time to time.
I think maybe application sandboxing is an OS job, and the browser should do the caching and invoking of the operating system sandbox.
Because users want it to be. Whenever there isn't a significant performance hit people will always choose the browser solution. The only reason people download apps is because of performance and data limitations. Take those away and people will use the web based version.
You should look at the data on that.
Since Swift is built on LLVM and there's direct LLVM support for WebAssembly, I wonder if Apple will get behind WebAssembly so they can get Swift in the browser.
The reason I ask is that asm.js is really painful and cumbersome to write by hand and wasm seems substantially nicer, but I only have small bits of numerical hot loops which I want to use wasm/asm.js for, and I have no desire to bring a bunch of code written in C into my little project.
For anyone interested, the code is in wasm2asm.h: https://github.com/WebAssembly/binaryen/blob/master/src/wasm...
If you just want to compile a few small functions with hot loops, it might be easiest to write them in C, and use a new option in emscripten that makes it easy to get just the output from those functions (no libc, no runtime support, etc.), see
https://gist.github.com/kripken/910bfe8524bdaeb7df9a
and
1. Web code is portable. The Java dream of “write once, run anywhere” is alive on the web. All you need to run a web app is a device with a browser. No need for a specific CPU architecture or operating system.
2. Web code is accessible. Just click a link and BAM. No need to download anything, no need to install anything, no need to worry about where to put something you might not want later. The web is the lowest friction platform (for users/customers) yet created. Even better, it’s easy to access web data from anywhere in the world on any device: I can read my web email on my grandma’s iPad or on the library’s computer or on a 10-year-old backup laptop, without worrying about whether I’ll have the data I need.
3. Web code is mostly safe. Anything that runs in a webpage is theoretically sandboxed away from harming other webpages or user data stored locally, and web users have come to expect that clicking arbitrary links won’t harm their computers. Untrusted blobs of compiled C code are a completely different story.
4. Web products can very easily be kept up-to-date for all users. This is double-edged for customers, because often website feature changes make later versions more confusing or less effective than earlier versions (cf. most Google product changes from 2008–present). For developers though, it dramatically simplifies support, because every customer can be presumed to be running the latest software version.
The big question is, what do you mean by “everything”? I don’t think everything is being put on the web, or should be. For example, professional “content creation” software is not going to be put on the web anytime in the next few years, because the initial barrier to entry is small in comparison to the required time investment to learn and use the software, and because such applications need hardware access and fine-grained control over compute resources.
Personal anecdote: I like building programming environments- sandboxes for playing with unusual languages. My target audience is people who are interested in programming and generally people who are very computer literate.
I spent about 3 years working on a complete development toolchain for a fantasy game console- compilers, profilers, documentation, examples- the works. It was spread around, and has hundreds of stars on github. Problem: you need to have a java compiler on your system to install and work with the tools. Number of people who developed programs using these tools aside from me: close to zero.
More recently, I built a browser-based IDE for another obscure game console. I made a complete toolchain, wrote docs, loads of examples, etc. This time, though, you could share your programs with a hyperlink, and there was no installation required. You could easily remix other people's programs from a public computer. The difference was huge. Dozens of other people wrote hundreds of programs using this system over a matter of months.
If you're making a project you want to share with other people, a web browser removes friction to a degree which cannot be overstated. Believe me, I _hate_ working with broken, incompatible and terribly designed browser tech, but removing those barriers to entry is invaluable.
Now, what you get with all this is software that runs on any system a capable browser is present. Like how you can open a video or audio on any system with a capable media player.
Basically you will need to drop back down into JavaScript and handle that there (which is how asm.js does it now).
There is a proposal to eventually integrate direct DOM access into WebAssembly, but that's for after they get it up and running.
You can also run Ruby and Python in the browser by just compiling their C or C++ VMs. But that still won't work "like JavaScript" - their objects won't be native VM objects in the browser, it won't use the browser's GC, they won't be observable in the browser's debugger, etc.
So far all of that was already possible, and done, with asm.js.
In the future, it is a goal to work to do GC object integration, so that something like Ruby or Python could actually compile down to something with native VM objects, and that would also allow calling DOM APIs directly. (This will likely still require a compiled VM, though.)
If you're using a web page that is mostly just a CRUD app, sure, no-one will wait around for that, but if you're shipping a version of IPython that can run in the browser, I think people would be pretty happy about it as long as the blob can be cached and hopefully shared between sites since it's on a shared CDN.
In its first version, it will not be able to access any/most web APIs directly (so you won't be writing a web app in 100% C++ any time soon).
The goal is to allow "Computationally intensive" bits of code to be compiled, while leaving JS to act as the glue for it all.
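As a concrete picture of that glue role, here is a minimal hand-assembled module -- the classic "add" example -- instantiated from JavaScript. The byte layout follows the wasm binary format; the JS embedding API shown is the one proposed for browsers, so treat the exact names as subject to change:

```javascript
// A hand-assembled wasm module exporting add(i32, i32) -> i32.
var bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b // body: i32.add
]);

// JS stays the glue: it owns the bytes, instantiates, and calls exports.
var instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```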
In short, the WASM layer is IMO the wrong layer for GC. I think the closest-layered applicable solution is caching & pinning guarantees for common libraries, which may already be addressed by the same solutions for common JS libraries (use a common CDN & let the browser caching keep it pinned).
I think it only makes sense, seeing as interaction with the native JS VM will be inevitable for a long time.
I'm more worried about more specific things like hardware access (GPU, mouse inputs, networking, windowing)
It seems wasm runs at native speeds and takes full advantage of optimization, but can it really be a one-size-fits-all solution? There must be some things wasm can't do. And so far, since JS did almost everything, I don't see the point of wasm if it can't do what other languages can.
The main point of wasm, from my perspective, is startup speed. wasm will allow much smaller downloads of large codebases, and much faster parsing (due to the binary format). For small programs this might not matter, but for big ones, it's a huge deal.
Thank you for sharing. I am a Computer Science master's student and I would like to contribute to the development. The repo looks really full and I don't know where to start.
For Binaryen specifically, this bug could be a good starting point: https://github.com/WebAssembly/binaryen/issues/2
Other issues in the tracker there as well.
Bigger topics are to make progress on wasm2asm, and to start an implementation of the current binary format (link is in the design repo), which Binaryen needs to support.
Are there plans for a proper WebAssembly LLVM backend that does not depend on forking LLVM (like emscripten does)?
If one were to build a Go -> WebAssembly compiler, what are good routes to take? I can see there's going to be multiple possibilities.
But if I'm not mistaken, WebAssembly won't accept just any LLVM bitcode. It's similar to how emscripten will work with the bitcode that clang outputs, but not with other LLVM-based compilers like Rust or GHC.
WebAssembly is to JavaScript what WebGL is to Canvas
Does WebAssembly address either of those points?
And call them "applets". Nobody's ever done that before, right? :trollface:
If an organization doesn't want you reading their JS, there are already plenty of tools to make it nearly impossible as-is. Do you really learn anything from reading minified, obfuscated code? At some point you're just reverse engineering, which is obviously still possible with WebAssembly.
'Open source by default' is a problem to be solved at a cultural level, not a technical one.
[0] https://github.com/WebAssembly/design/blob/master/FAQ.md#wil...
You'll still be able to find and read the source of open source web projects, just like you can for open source non-web projects.
> 'Open source by default' is a problem to be solved at a cultural level, not a technical one.
Highly agree.