That being said, it's not guaranteed to fix the issue, so maybe they did test this and just didn't mention it in the blog.
But I still don't understand, because....
(NB: I'm not a GC expert, just a curious amateur, so my apologies for any errors in the following; getting corrected on them is part of why I'm posting this.)
Regarding the "not much garbage => theoretically times would be shorter": my understanding is that this is actually not how GC works. GC time is a function of the size of the live set, because GC works by walking ("tracing") the graph of live references. So the only way to make GC faster is to have not less garbage, but less stuff allocated at all.
Multi-generational GC works by dividing the whole pool into smaller pools, so that most GC passes only visit the high-churn nursery, but even then some GC passes still need to trace the entire old generation.
TFA mentions this, where they say "the spikes were huge not because of a massive amount of ready-to-free memory, but because the garbage collector needed to scan the entire [thing we were keeping track of]".
That is, they had virtually no garbage to collect, and that wasn't speeding up the GC. Which is consistent with how all tracing GC works, as far as I know.
Comments/corrections/clarifications welcome!
Perhaps if the intent wasn't to convince their managers to let them write it in Rust, they would have tried using the latest Go version at the time?
Not to mention, the article makes no effort to establish that it's describing the state of the world two years before it was written.