I do not know enough about statistics to make a (negative) quality statement about the field. I know a bit more about machine learning, though, and there I also see things like: picking whichever cross-validation metric looks most favorable, comparing against "state-of-the-art" baselines while ignoring the real SotA, generating your own data sets instead of using real-life data, improving performance by "reverse engineering" the data sets, reporting only on problems where your algo works, and other such tricks. I believe you when you say much the same is happening among statisticians.
Maybe it was my choice of words (careful, sober). I think it's fair to say that (especially applied) machine learners care more about the result, and less about how they got to that result. Cowboys, in the most positive sense of the word. I retraced where I got the cliff analogy: it's from Caruana in his video "Intelligible Machine Learning Models for Health Care" https://vimeo.com/125940125 @37:30.
"We are going too far. I think that our models are a little more complicated and higher variance than they should be. And what we really want to do is to be somewhere in the middle. We want this guy to stop and we want that statistician to get there, together we will find an optimal point, but we are not there yet."