Thanks.
I forgot to mention that the test case had constant long-running transactions, each lasting 5 minutes, over a 4 hour period for each tested configuration.
This level of improvement was possible with a relatively simple mechanism because the costs are highly nonlinear once you think about them holistically and consider how things change over time. The general idea behind bottom-up index deletion is that we let the workload figure out what cleanup is required on its own, in an incremental fashion.
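To make the idea concrete, here is a toy Python sketch of that "workload-driven, incremental" cleanup pattern. This is not the actual Postgres implementation (which lives in C in the nbtree code); the page capacity, data structures, and the `dead_versions` set are all invented for illustration. The point is only that a leaf page attempts to remove known-dead version duplicates at the moment it would otherwise have to split, so under pure version churn it can avoid splitting at all:

```python
# Toy model of the idea behind bottom-up index deletion (NOT Postgres's
# actual implementation): a leaf page only tries to remove known-dead
# version duplicates at the point where it would otherwise have to
# split. Cleanup is driven by the workload itself, incrementally.

PAGE_CAPACITY = 10  # made-up number, real pages hold far more entries

class LeafPage:
    def __init__(self):
        self.entries = []   # (key, version) tuples
        self.splits = 0

    def insert(self, key, version, dead_versions):
        if len(self.entries) >= PAGE_CAPACITY:
            # Bottom-up deletion: before splitting, drop entries whose
            # heap tuples are already known to be dead.
            self.entries = [e for e in self.entries
                            if e not in dead_versions]
        if len(self.entries) >= PAGE_CAPACITY:
            # Deletion didn't free enough space; fall back to a split
            # (modeled crudely as keeping only half the entries here).
            self.splits += 1
            self.entries = self.entries[PAGE_CAPACITY // 2:]
        self.entries.append((key, version))

# Pure version churn: the same 5 logical rows updated over and over.
dead = set()
page = LeafPage()
for version in range(100):
    for key in range(5):
        if version > 0:
            dead.add((key, version - 1))  # the prior version is now dead
        page.insert(key, version, dead)

print(f"splits with bottom-up deletion: {page.splits}")  # prints 0
```

With the deletion step commented out, the same workload would split repeatedly even though the number of live rows never grows; that is the nonlinearity in miniature.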
Another interesting detail is that there is synergy with the deduplication stuff -- again, very nonlinear behavior. Kind of organic, even. Deduplication is a feature that I coauthored with Anastasia Lubennikova; it first appeared in Postgres 13.