That's interesting indeed, but a single algorithm is hardly representative for an overall performance comparison of programming language implementations.
If we look e.g. at the results of the Are-we-fast-yet benchmark suite (see e.g. https://github.com/rochus-keller/Oberon/blob/master/testcase...), we see significant differences per implementation and per benchmark. So if we pick just a single benchmark, we might by coincidence hit a sweet or sour spot of a particular implementation.
If we looked e.g. only at the Mandelbrot benchmark in the referenced report, we would conclude that LuaJIT is six times faster than Node.js 12 or Mono 5; but this conclusion turns out to be wrong if we look at the geomean over all benchmarks (micro and macro), where LuaJIT is only half as fast as the other two.
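The effect is easy to sketch numerically. The scores below are made-up illustrative numbers, not the actual Are-we-fast-yet results: an implementation that wins one benchmark by ~6x can still come out twice as slow on the geomean if it trails on the rest.

```python
import math

def geomean(xs):
    # Geometric mean: n-th root of the product,
    # computed via logs for numerical stability.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical relative runtimes (1.0 = baseline, lower = faster).
impl_a = [0.17, 3.7, 3.7, 3.7, 3.7]  # ~6x faster on one benchmark, slower elsewhere
impl_b = [1.0, 1.0, 1.0, 1.0, 1.0]   # baseline

print(round(geomean(impl_a), 2))  # 2.0 -> overall twice as slow
print(round(geomean(impl_b), 2))  # 1.0
```

This is exactly why the geomean over the whole suite and a single cherry-picked benchmark can point in opposite directions.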
A well-designed suite covers as wide a range of implementation challenges as possible, and includes strict rules on how the benchmarks must be implemented in the different languages, so that the cross-language comparison introduces as little error as possible.