I have to say, Duktape looks like it might have superior resource usage for very low memory situations.
Don't forget Moddable's XS, which does more with even less and just shipped ECMAScript 2019 (ES10) support.
I believe people in the community also have a ChakraCore fork specifically made for using Node on iOS.
Hence I wonder if we can split Ignition off V8 to create a standalone, fast JavaScript interpreter, at the cost of (possibly) more memory consumption than Duktape. That could prove useful in many scenarios.
"With Ignition, V8 compiles JavaScript functions to a concise bytecode, which is between 50% to 25% the size of the equivalent baseline machine code. This bytecode is then executed by a high-performance interpreter which yields execution speeds on real-world websites close to those of code generated by V8’s existing baseline compiler."
> 4.7 HTML5 Games, Bots, etc.
> Apps may contain or run code that is not embedded in the binary (e.g. HTML5-based games, bots, etc.), as long as [...] the software [...] only uses capabilities available in a standard WebKit view (e.g. it must open and run natively in Safari without modifications or additional software); your app must use WebKit and JavaScript Core to run third party software and should not attempt to extend or expose native platform APIs to third party software
Note that you can use either UIWebView and JavaScriptCore in your own app, which doesn't JIT, --OR-- WKWebView and its JavaScript interpreter, which does have JIT enabled, but runs in a separate process (not unlike Microsoft OLE out-of-process servers). Apple allows their own trusted apps to JIT (i.e. Safari, which uses the same engine as WKWebView, running in a separate process). But UIWebView with JavaScriptCore running in your app is not allowed to JIT.
You can extend the JavaScriptCore interpreter used by UIWebView (which you can also use standalone, without a UIWebView) with your own native Objective-C code, which it can call directly via a JavaScript/Objective-C bridge. (See NativeScript for example.) But that's impossible to do with WKWebView, whose JavaScriptCore (or whatever it is -- I'm not sure it's the same framework, but it might be) runs in a different process. All you can do is send messages (like JSON events or whatever) via IPC over Mach ports, not call your own code directly.
They thought of the loophole already
2.5.6 Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript.
Using V8 to execute JS without 'browsing the web' sounds okay.
And perhaps also this, although it is not clear to me whether downloaded JS scripts are considered 'code':
2.5.2 Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app, including other apps. [...]
https://developer.apple.com/app-store/review/guidelines/
Anything else? I'm curious to learn more.
Before WKWebView, even the Safari-based UIWebView didn't JIT JavaScript. Chrome for iOS was released years before WKWebView was added, so a lack of JIT in their own JavaScript engine would not have made a difference.
Duktape and XS support them. JSC has had them for years now too. That's a big feature to accidentally miss. You switch and then inadvertently blow your stack every now and then because v8 decided to remove their already implemented tail calls for no good reason (Lest we go down the road again, the "alternative syntax" proposal was dropped, so there's zero excuses aside from a deliberate violation of the spec).
(Also, in many common real-world workloads the performance regression is minimal.)
In general, compared to a JIT, an interpreter is faster to start executing code, and can more quickly (and efficiently!) execute code that will only run once.
V8's "Ignition" started as a way to replace the "baseline" JIT in their engine. It can begin executing code while the optimizing compiler gets up to speed and analyzes what needs to be optimized and it can execute code that is extremely likely to only run once (like top level javascript).
The bytecode representation they use for Ignition is also used by their optimizing compiler "TurboFan", which means that they throw away the actual source code after it's been converted to bytecode, saving quite a lot of memory!
Altogether, this means that the Ignition+TurboFan pipeline is faster to start executing, has lower resource usage, and is much simpler than the old stack of a "baseline" JIT (full-codegen) and the old optimizing compiler (Crankshaft).
Being able to disable the optimizing JIT entirely is just another bonus of the architecture!
I assume this also doesn't support WASM when the JIT is disabled (or rather, when you can't write to executable memory), but if it did it could be a neat way to write decently performant software for tiny systems with just some JavaScript "glue".
Theoretically yes, but this is not implemented. It should not be too hard to drastically reduce binary size with a build-time flag.
> I assume this also doesn't support WASM when the JIT is disabled (or rather, when you can't write to executable memory), but if it did it could be a neat way to write decently performant software for tiny systems with just some JavaScript "glue".
Correct, wasm is currently unsupported. Interpreted wasm is possible in the future, but would likely be very slow.
And then how portable would the code be? Would this be a path to running node on CPUs without JIT support? Or does it still have to mess with the calling convention at an assembly level?
Sure you can do everything with ROP, but it is less convenient (and Intel CET might eventually make ROP attacks actually hard).
Such as? Any practical examples here?
Code execution is code execution. RWX just lets you execute faster code; it doesn't give you any privileges or permissions you didn't otherwise already have.
The Pegasus spyware for instance utilized a JIT attack in JavaScriptCore in Safari for the initial stage.
JavaScript is very popular in the programming zeitgeist and likely to be a language non-programmers are exposed to via the web. Part of me wonders whether game engines would take to integrating it instead of Lua, since designers might be more familiar with it.
V8's Ignition interpreter is implemented with "Direct threading", which is quite similar but (probably?) faster on modern processors-- it does an indirect jump to the next bytecode handler instead of a return: https://news.ycombinator.com/item?id=10034167
"The bytecode handlers are not intended to be called directly, instead each bytecode handler dispatches to the next bytecode. Bytecode dispatch is implemented as a tail call operation in TurboFan. The interpreter loads the next bytecode, indexes into the dispatch table to get the code object of the target bytecode handler, and then tail calls the code object to dispatch to the next bytecode handler."
We might be able to finally run more safe cryptography in the browser with constant-time guarantees (there are other concerns with browser-based crypto though).
> Memory consumption only changed slightly, with a median of 1.7% decrease of V8’s heap size for loading a representative set of websites.
What makes you believe it should be anything significant? After all, the JIT-compiled code cannot be that large.
This is very possible: just do a search for ${your favorite JIT} arbitrary code execution, and you'll almost certainly see a real-world vulnerability.
> At least HotSpot has been JIT:ing code for decades and no one has been able to find such an exploit.
Yeah, no. See for example https://www.syscan360.org/slides/2013_EN_ExploitYourJavaNati...
1. Find a bug that gives you an arbitrary read
2. and a bug that lets you write to some arbitrary location
3. Find a bug that lets you jump to some location
4. Use [1] to find the location of the RWX region
5. use [2] to copy your exploit code into [4]
6. use [3] to jump to [4]
7. Profit
Oftentimes a single use-after-free gives you 1, 2, and 3. Essentially you use the UaF to get multiple different objects pointing to the same place, but as different types. E.g. you get a JS function allocated over the top of a typed array's backing store; then from JS you have an object that the runtime thinks is a typed array, but the pointer to its backing store is actually pointing to part of the RWX heap. Then all you have to do is copy your shell code into the corrupted typed array and call the function object.
(This requires a GC-related use after free, and most of the JS runtimes have gotten progressively more aggressive about validating the heap metadata, but fundamentally if there's a GC bug it's most likely just a matter of how much work will be needed to exploit it.)
Tons of JITs have had exploits...
They mention Cobalt (to allow targeting playstation), react native, nativescript, pdfium, and chrome's proxy resolver.
Can someone who uses it happily on the backend talk about why they like it and why it's good?
* 10-day design? Nobody is using Javascript 1.0 anymore.
* Typing is available to various extents thanks to Flow and/or Typescript.
* Slowness... I don't know what you're referring to. Javascript isn't slow.
* null vs undefined. What about them? They are two different things with different meanings.
* Dependency hell. I assume you refer to the many small modules on NPM with dependencies on other modules. Not sure what the problem here is per se. Avoid them if you don't like dependencies.
* NPM insecurity - what?
I like JS on the backend because it's a nice, flexible language to work with, with a healthy and cheap (cost-wise) ecosystem. I get a lot of stuff done very quickly, and I can run my stuff pretty much everywhere.
NPM packages can contain malicious code. There's no NPM review process, and you can't point to specific versions to lock in your own reviews (package administrators can change whatever files they'd like). There's no such thing as a verified-safe dependencies list because the file you reviewed last month might not be downloaded today.
Slowness: Yes, it is. Look at all the benchmarks comparing it to, say, go.
One of JavaScript's big advantages over Ruby and Python was performance, both because the standard runtime is faster (JITing JavaScript turned out to be a lot easier than JITing Ruby and Python) and because its fundamentally asynchronous nature was a better match for webservers.
And although NPM sucks in a number of ways, I've always found it easier to use than Python's dependency management.
Why was this the case?
Using JS in the backend means that you don't have to learn a new language if you're already familiar with it in the browser. Given that webdev is extremely popular with new developers these days, it's not surprising that they might want to reuse the technology when they have to write backend code instead of learning a whole new language. Similarly companies can reuse their pools of webdevs to write non-web applications instead of hiring new personnel or having to retrain the existing coders.
It also means that you can reuse code from the browser in the backend.
Sure if you find JS a clunky and subpar language it might be disappointing to see it spread the way it does but hey, at least it's not PHP!
The async model is easy to use, so you get good performance before you even optimize it. It comes out of the box with good JSON serialization/parsing, so that's one less dependency. Not really sure where you're coming from.