If we assume that all projects have a finite amount of engineering effort available, then triaging is expected. The best practice for production web applications for decades has been to serve them from behind a reverse proxy. From there, it is usually fairly trivial to use path matching to serve static files from the reverse proxy itself, a tool that is much better suited to this purpose and can easily saturate any link you throw at it without breaking a sweat.
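To make that concrete, here is a sketch of the usual reverse-proxy setup, assuming nginx with the app listening on localhost; the paths and upstream address are illustrative, not from the original discussion:

```nginx
# Serve /static/ directly from disk; proxy everything else to the app server.
location /static/ {
    alias /var/www/app/static/;
    expires 7d;          # let clients cache static assets
}

location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
}
```

With a block like this in place, the application server never sees requests for static assets at all, which is why framework authors have historically treated their own static file handlers as development-only conveniences.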
Given this, it seems perfectly reasonable for the maintainers of these web frameworks to defer improving the performance of static file serving indefinitely.
The assumption that nginx is always there no longer holds, especially in microservices architectures: for example, running behind HAProxy (which does not serve static files) or behind cloud load balancers like AWS ALB.
https://peps.python.org/pep-3333/#optional-platform-specific...
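That PEP 3333 section covers optional platform-specific file handling, i.e. `wsgi.file_wrapper`: when the server offers it, the framework can hand the open file to the server for optimized transmission (e.g. `sendfile`) instead of streaming it through Python. A minimal sketch of an app using it, assuming a local `static/` directory; the directory name and chunk size are illustrative:

```python
import os

def static_app(environ, start_response):
    # Illustrative path resolution only; a real app must sanitize
    # PATH_INFO to prevent directory traversal.
    path = os.path.join("static", environ["PATH_INFO"].lstrip("/"))
    try:
        f = open(path, "rb")
    except OSError:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    size = os.path.getsize(path)
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        ("Content-Length", str(size)),
    ])
    # PEP 3333: prefer the server's optimized file transmission if offered.
    wrapper = environ.get("wsgi.file_wrapper")
    if wrapper is not None:
        return wrapper(f, 8192)
    # Fallback: stream the file in 8 KiB chunks from Python.
    return iter(lambda: f.read(8192), b"")
```

Whether a given framework actually reaches this hook is exactly the kind of thing a fair benchmark would need to control for.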
And no, I’m not going to fix your benchmarks for you. Doing so provides no value to me, and it does come at a time cost.
I appreciate you raising the issue and performing research on it, that’s valuable and I applaud you for that. I simply don’t believe that this is a valid benchmark from a technical soundness perspective.