You can probably just compare AOT and JIT Java or C# implementations and get something of an upper bound.
Actual improvements will likely depend on your use case. For short-running programs, the overhead of parsing, the JIT machinery, and interpretation might dwarf execution time, so even a straightforward compilation full of virtual calls will edge out the pre-JIT interpreter. For longer-running programs, however, the JIT should be able to devirtualise, inline, and optimise the code much better than an AOT compiler can.
Inlining is the most important optimisation, and it's hard to AOT inline dynamic dispatch.
But a lot of JS code executes only once, such as layout code running at app launch. Heck, lots of JS code executes zero times. This code still imposes a cost (parsing, etc).
Consider the complaints about app launch time of Electron apps. A static compiler can be more effective at the runs-once or runs-zero cases.
The difference is that you compile user control flow as-is unless you have overwhelming evidence that you should do otherwise.
And yes, you are right. This is promising for run-once code, but all of the cost will be in dynamic things like the envGet. An interpreter can actually do better here because most environment resolution can be done as part of bytecode generation. So it's possible that this experiment leads to something slower than JSC's interpreter.
https://v8.dev/blog/launching-ignition-and-turbofan
https://v8.dev/blog/v8-release-66
https://v8.dev/blog/background-compilation
Which modern JS engine still relies on tracing? I thought they’d all moved on from that technique many years ago, but I’m not an expert in JS.
But maybe there will be some breakthrough, so it’s important to stay open-minded. That breakthrough may be something modest like if this style of JS execution was better for some niche use case.
Thanks to the competition among web browsers, JS runtimes not only parse efficiently but also compile to well-optimized native code, with really good JIT compilation.
Speculative optimization for V8 - https://ponyfoo.com/articles/an-introduction-to-speculative-...
Parallel and Concurrent GC - https://v8.dev/blog/trash-talk
Good summary on 10 years of V8 - https://v8.dev/blog/10-years
As described in the links, v8 parses to an AST, which then is compiled to bytecode. A bytecode VM then executes the JS, collecting runtime type information, which is input (along with the bytecode itself) into the next compilation tier; only at that point is machine code generated.
The key idea is that v8 expects to execute the JS code before it can generate native code. It won't generate native code from parsing alone.
The authors indicated at one point it took around 5 PhDs to get it going.
Some thoughts: I think it's easier to target C++ than C, since C++ helps you write more type-generic code. I think it's easy to generate tagged unions, and then as an optimization try to prove monomorphism. Finally, it may be simpler to start off with support for TypeScript and fail to compile if there are any `any` types. I do think it's possible, though. JS/TS -> C++ -> WASM (yes, I was out of my mind when I thought of this)
It was this conversation that made me wonder if at some point in the future V8 might have native experimental support for TypeScript, though compiling WebAssembly to a native binary probably makes more sense. Who knows? It's an awesome time to be a programmer!
There is a PHP to .NET compiler which probably has similar problems. On second thought, that one is probably easier because .NET has a dynamic runtime.
See https://github.com/timruffles/js-to-c/blob/1befbf4220753576e...
Some projects:
1. https://github.com/fabiosantoscode/js2cpp
2. https://github.com/raphamorim/js2c
3. https://github.com/ammer/js2c
4. https://github.com/NectarJS/nectarjs
5. https://github.com/ovr/StaticScript