This puzzles me; if only one value is ever assigned, I would have expected `let` to perform identically to `const` in optimised code, because I expect the optimiser to look at the `let` and say "never reassigned, so treat it as a `const`". By the sound of it, I'm wrong, and I'd be interested to know why I'm wrong.
let foo = 10; eval("fo"+"o = 10");
I could "obfuscate" that eval assignment as much as I like. So you can't completely statically analyse `let` variables.
That said, it's likely you'd end up with perf very close to `const` in a very "hot" part of your code, since a good JIT compiler like V8 will eventually make "assumptions" about your code, and optimise around them, while having (ideally cheap) checks in place to ensure the assumptions continue to hold.
It's already known that functions containing direct "eval" are not subject to the same level of performance optimisations as other functions. There is no way to obscure the call to direct "eval" itself; the compiler knows clearly whether it occurs.
Without "eval" appearing syntactically inside a function's scope, there is no dynamic access to "let" variables, and there's no need for the JIT code to check the assumption at run time.
Despite no other assignments, the "let" variable does change value: it has the assigned value after the "let", and the "temporal dead zone" value before it in the same scope. However, "const" also has this property, so it's not obvious why there would be a speed difference.
https://github.com/thlorenz/v8-perf/blob/master/language-fea...
The answer is probably a combination of 'they haven't gotten around to it yet', 'they don't see the need', 'they don't want the complexity', and 'they don't want to spend compile time doing that.'
When comparing the speed of "const" versus "let", the JIT compile time is irrelevant; the speed differences being looked at are entirely run time, inside loops.
Also the JIT compile time difference from "looking at the let" will be so low as to be virtually unmeasurable anyway. (It is such a trivial check, much simpler than almost everything else the compiler does.)
(However, see rewq4321's sibling comment about analysability and JavaScript being a very dynamic language when "eval" is used.)
It was an excellent bug report, complete with a reproducible example.
Only after it recently made the rounds on Twitter and HN was it fixed.
https://mobile.twitter.com/mraleph/status/132175888792258969...
In various numerical computing in Javascript projects/experiments I’ve done, Safari is typically fastest, but not always. At any rate, it is clear that all of these teams have done very extensive optimization work, and all of the modern browsers are engineering marvels.
But there are a lot of weird nooks and corners in all of the browsers when it comes to Javascript performance.
Does anyone have more information on the use of the term "exotic"? I haven't heard it before, and I'm not sure what they meant by it.
The results diverge specifically when your array contains empty items, which will be converted to items containing undefined with the latter expression.
The JS engine would need a specific optimization for cases where:
- The expression is equivalent to arr.slice(0)
- The iterable being spread is a vanilla array
- The array doesn't contain any empty items
Of course your code will be faster when all that's required is just a shallow copy.