An interesting effect I've noticed in myself when doing research: even if generating a plot or running some extra analysis takes only one extra click or one changed parameter, I'll only do it selectively, and then rationalise the skipping by telling myself I know when a given plot or analysis will help and when it won't.
This becomes especially egregious in month- to year-long projects where I run the same experiment day after day.
There was really no reason not to auto-generate every possible plot and every possible analysis every time (and I cannot use IPython notebooks or the like, because my setup is many distributed pieces chained together with lots of scheduling).
The productivity gains have been enormous and are hard to overstate. I no longer dread any experiment, because even in a large, complicated distributed setup, everything, from initialising Kerberos tickets, through tons of config files, restarting services, and running experiments that depend on each other, to generating plots and summaries and committing them to a repo, is one command. Anything that's analysed once is evaluated always.
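A minimal sketch of what that "one command" chaining might look like. All step names here are hypothetical; the real setup presumably wraps actual tools (kinit, service managers, plotting scripts) rather than these toy functions. The point is only the shape: every step runs every time, in order, sharing one state.

```python
# Hypothetical sketch of a one-command experiment pipeline.
# Each step is a plain function mutating shared state; a real version
# would shell out to kinit, restart remote services, launch jobs, etc.
from typing import Callable

def init_kerberos(state: dict) -> None:     # placeholder for `kinit`
    state["ticket"] = "krb-ticket"

def write_configs(state: dict) -> None:     # render the config files
    state["configs"] = ["node-a.cfg", "node-b.cfg"]

def restart_services(state: dict) -> None:  # bounce the services
    state["services"] = "restarted"

def run_experiments(state: dict) -> None:   # dependent experiment runs
    state["results"] = {"exp1": 0.91, "exp2": 0.93}

def generate_plots(state: dict) -> None:    # every plot, every time
    state["plots"] = [f"{name}.png" for name in state["results"]]

def commit_results(state: dict) -> None:    # push artifacts to the repo
    state["committed"] = True

PIPELINE: list[Callable[[dict], None]] = [
    init_kerberos, write_configs, restart_services,
    run_experiments, generate_plots, commit_results,
]

def run_pipeline() -> dict:
    state: dict = {}
    for step in PIPELINE:
        step(state)  # a real version would log, retry, or schedule here
    return state

state = run_pipeline()
```

The design choice that matters is that there is no branch deciding whether a plot is "worth" generating: the full list always runs, so nothing depends on in-the-moment judgment.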
I now almost look forward to setting up new experiments because of the pleasure I get from just chaining together calls to my control utilities.
All I have to do is pull on my laptop and I get a folder with all results pre-generated, paper-ready. I think a lot of people do this for experiments where everything runs on a single machine, but I haven't seen it taken to this extreme by other PhD students doing complicated distributed work. There is always a lot of manual command-line argument passing, manual tweaking of some config, and so on, instead of just creating dedicated scripts.