The fact is that performance-oriented organizations optimize everything unless they have math telling them it isn't worth optimizing.
The "weakest link" belief is pure conjecture.
Most companies don’t have unlimited budgets. Performance-oriented organizations profile and then spend money where profiling tells them to. Shopify isn’t hiring people to contribute to MySQL or Redis internals. They hired a full team to work on Ruby internals: not just creating YJIT, but also improving CRuby’s memory layout, hiring the lead developer of TruffleRuby, and funding academic programming-language research on Ruby.
No company has an infinite budget to “optimize everything”. It is clear where internal performance testing pointed Shopify (at Ruby), and with double-digit gains being extracted year after year, their profiling didn’t lie. Other Ruby on Rails shops are seeing similar double-digit performance wins, not on synthetic benchmarks, but on actual page load times and the amount of traffic a single server can handle.
Mercedes 2022 https://www.the-race.com/formula-1/mercedes-2022-f1-car-make...
Williams 2019 https://www.autosport.com/f1/news/williams-modifying-front-s...
Ferrari 2018 https://www.autosport.com/f1/news/how-ferraris-formula-1-mir...
But picking “mirrors” for your analogy makes it sound like a premature optimization.
The reason perf isn’t typically an issue with Rails is that the design pattern is to lean heavily on caching.
Caching is there to address the slowness of Ruby.
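To make the point above concrete: the idiomatic Rails pattern is fetch-style caching, where expensive Ruby work runs once and subsequent requests hit the cache. This is a minimal plain-Ruby sketch of that pattern (modeled on the shape of `Rails.cache.fetch`, with a Hash standing in for a real store like Redis or Memcached; the class and variable names are made up for illustration):

```ruby
# TinyCache: a hypothetical stand-in for a real cache store,
# illustrating the fetch-or-compute pattern Rails apps lean on.
class TinyCache
  def initialize
    @store = {}
  end

  # Return the cached value for key, or run the block once,
  # store its result, and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = TinyCache.new
render_calls = 0
slow_render = -> { render_calls += 1; "rendered product page" }

first  = cache.fetch("views/product/42") { slow_render.call } # runs the block
second = cache.fetch("views/product/42") { slow_render.call } # served from cache

puts render_calls # the expensive render ran only once
```

The slow Ruby path only executes on a cache miss, which is exactly why heavy caching papers over interpreter speed.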
If Rails were half as fast, you'd need twice as many Rails hosts (but no more databases).
Sort of. Twice as many rails hosts means more DB connections which generally means more load/memory on the DB or more load/memory on the external connection pooler.
It's only a bit of incremental load, but it's easy to overlook how many other systems need to run to make Rails scale.
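The connection math behind that point is easy to sketch. With entirely hypothetical numbers (the thread pool and host counts below are made up for illustration, roughly in the shape of a Puma deployment):

```ruby
# Back-of-the-envelope sketch: doubling Rails hosts doubles the DB
# connections the database (or an external pooler such as PgBouncer)
# must hold, even though the database count stays flat.
threads_per_process = 5   # hypothetical per-process thread pool
processes_per_host  = 4   # hypothetical workers per host
hosts               = 10

connections = hosts * processes_per_host * threads_per_process
puts connections          # connections at the original host count

doubled = (hosts * 2) * processes_per_host * threads_per_process
puts doubled              # connections after doubling hosts
```

Each extra connection carries per-connection memory and scheduling overhead on the database side, which is the incremental load the comment above is pointing at.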