It doesn't matter. Deep learning has been mainstream for only about 10 years, yet MNIST, a dataset from 1998, is still used in research papers.
The most important thing is to have a constant baseline, and ResNets are a baseline.
Think about what happens if the reference model changes every couple of years:
- 2015: ResNet trained on an Nvidia K80
- 2017: Inception trained on an Nvidia 1080 Ti
- 2019: Transformer trained on an Nvidia V100
- 2021: GPT-3 trained on a cluster
Now you have your fancy new algorithm X and an Nvidia 4090. How much better is your algorithm than the state of the art, and how much have you improved over the algorithms from 5 years ago? You are in a nightmare: you have to rerun all the past algorithms just to make the comparison. Or: how fast is the new Nvidia card, which nobody has yet, and for which Nvidia has decided to report numbers based on its own model?