I didn't see it until it was mentioned here.
I can't seem to delete this discussion, so I've added the original link here. Please head there.
I've been watching Rust on that Benchmarks Game site for a while — it's been interesting to see it go from worse than Java a year or so ago to competitive with C++. It was slightly beating C++ for a couple days, although just recently C++ took its biggest lead in a while[0]; I think they upgraded the GCC version they were using.
Anyway, I'm very curious whether it ultimately turns out that, as advertised, Rust's performance characteristics really are as good or better than C++'s[1].
[0] http://benchmarksgame.alioth.debian.org/u64q/which-programs-...
[1] The "Benchmarks Game" site very fairly specifies the algorithm that must be used for each benchmark — many of them say data must be operated on sequentially, so IMO Rust is getting a bit of an unfair advantage if the compiler is able to be particularly aggressive at autovectorizing it.
OTOH that is a nice real-world speedup, and anything else that's implemented with LLVM or GCC also has access to that optimization, so YMMV.
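For illustration, here's a hypothetical loop of the shape LLVM's autovectorizer typically handles well; this is not benchmarks-game code, and whether it actually vectorizes depends on the compiler version and target CPU:

    // Hypothetical example only: a loop shaped like this is the kind of
    // thing LLVM will often autovectorize when built with optimizations
    // (`rustc -O` / `cargo build --release`).
    fn saxpy(a: f32, xs: &[f32], ys: &mut [f32]) {
        // Lockstep iteration lets the compiler prove both accesses are in
        // bounds, keeping the loop body simple enough to vectorize.
        for (y, &x) in ys.iter_mut().zip(xs.iter()) {
            *y += a * x;
        }
    }

    fn main() {
        let xs = vec![1.0f32; 1024];
        let mut ys = vec![2.0f32; 1024];
        saxpy(3.0, &xs, &mut ys);
        assert!(ys.iter().all(|&y| y == 5.0));
    }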
From my limited experience, Rust's performance is comparable to C++ throughout, with the added safety guarantees that Rust is known for.
(The number crunching on the benchmarks game is done with SIMD, which Rust is still working on.)
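For what it's worth, explicit SIMD was nightly-only at the time; it has since been stabilized as std::arch intrinsics. A minimal sketch of that route (x86_64/SSE only, illustrative rather than benchmarks-game code):

    // Hedged sketch of explicit SIMD via std::arch intrinsics, which were
    // stabilized well after this thread; not the benchmarks-game code.
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[cfg(target_arch = "x86_64")]
    fn sum_f32(xs: &[f32]) -> f32 {
        let chunks = xs.chunks_exact(4);
        let tail = chunks.remainder();
        let mut lanes = [0.0f32; 4];
        // SSE is part of the x86_64 baseline, so these intrinsics work on
        // any x86_64 CPU; `unsafe` is still required by the API.
        unsafe {
            let mut acc = _mm_setzero_ps();
            for c in chunks {
                // Unaligned 4-lane load, then 4 additions per instruction.
                acc = _mm_add_ps(acc, _mm_loadu_ps(c.as_ptr()));
            }
            _mm_storeu_ps(lanes.as_mut_ptr(), acc);
        }
        lanes.iter().sum::<f32>() + tail.iter().sum::<f32>()
    }

    #[cfg(target_arch = "x86_64")]
    fn main() {
        let xs: Vec<f32> = (0..10).map(|i| i as f32).collect();
        assert_eq!(sum_f32(&xs), 45.0);
    }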
To be clear, Rust-the-language has not sped up significantly in the past year. It's just that a few dedicated people decided to try to win the benchmarks game strictly for PR purposes [0].
Chances are, for very algo-heavy (i.e. unrealistic) workloads, there are two ways to write it in Rust: fast and safe, or very fast and unsafe (e.g. one can disable bounds checking and get maybe a 2% speedup).
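A hedged sketch of that safe-vs-unsafe split (illustrative only; the "maybe 2%" figure is the commenter's estimate, not something this snippet measures):

    fn sum_indexed(xs: &[u64], idxs: &[usize]) -> u64 {
        // Safe: every xs[i] carries a bounds check and panics if out of range.
        idxs.iter().map(|&i| xs[i]).sum()
    }

    fn sum_indexed_unchecked(xs: &[u64], idxs: &[usize]) -> u64 {
        // Unsafe: the caller must guarantee all indices are in range;
        // the per-access bounds check is elided.
        idxs.iter().map(|&i| unsafe { *xs.get_unchecked(i) }).sum()
    }

    fn main() {
        let xs = [10u64, 20, 30];
        let idxs = [2usize, 0, 1];
        assert_eq!(sum_indexed(&xs, &idxs), 60);
        assert_eq!(sum_indexed_unchecked(&xs, &idxs), 60);
    }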
All this really serves to show is how artificial the benchmarks are.
To say something about whether Rust-the-language-implementation was building faster executables, wouldn't we compare the same Rust programs built with different Rust versions (rather than comparing Rust programs against Java or C++ programs)?
Matching up measurements of the same programs, from the 1st June 2015 and 1st June 2016 data files, gives this:
elapsed secs (2015 => 2016)   program id
  4.013 =>   3.780            binarytrees #1
 15.449 =>  16.650            fannkuchredux #2
  5.204 =>   5.588            fasta #1
  0.070 =>   0.067            meteor #1
  0.050 =>   0.044            meteor #2
 24.630 =>  24.068            nbody #1
  1.733 =>   1.743            pidigits #1
436.686 => 288.131            threadring #1
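Reading those pairs as percentage changes (a quick throwaway sketch, numbers copied verbatim from the list above):

    // Negative means faster in 2016 than 2015.
    fn main() {
        let pairs = [
            ("binarytrees #1", 4.013, 3.780),
            ("fannkuchredux #2", 15.449, 16.650),
            ("fasta #1", 5.204, 5.588),
            ("meteor #1", 0.070, 0.067),
            ("meteor #2", 0.050, 0.044),
            ("nbody #1", 24.630, 24.068),
            ("pidigits #1", 1.733, 1.743),
            ("threadring #1", 436.686, 288.131),
        ];
        for (name, t2015, t2016) in pairs {
            println!("{name}: {:+.1}%", (t2016 - t2015) / t2015 * 100.0);
        }
    }

Only threadring moved dramatically (roughly -34%); the rest shifted by small amounts in both directions.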
...and then note that the measurements were made on different OS versions, probably with different LLVM versions.

Perhaps you wish to draw our attention to the question of "ecological validity"?
http://benchmarksgame.alioth.debian.org/why-measure-toy-benc...
http://benchmarksgame.alioth.debian.org/dont-jump-to-conclus...
Perhaps you should check if the programs mentioned in that blog post are actually shown on the benchmarks game website :-)
(Also check if those tasks contribute to the summary comparisons and charts.)
I don't think so.