> In theory yes; in practice this almost never happens. 95% of teams just quickly mash the product together and peace out before anyone notices what a mess they made.
Much of my work is in highly parallelized computing (think Spark across thousands of nodes), processing tens or hundreds of TiB at a time with declarative syntax. It's super cool. Until someone decides they're going to use some one-line expression to process the data, because it's just so easy to write. It turns out doing that absolutely destroys your performance, because the query optimizer now has a black box in the middle of your job graph that it can't reason about.
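To make that concrete, here's a minimal sketch, assuming PySpark (the comment only says "think Spark"; the dataset path and column name are invented for illustration). The one-line Python UDF is a black box to the optimizer: nothing gets pushed down, and every row round-trips to a Python worker. The slightly longer built-in-expression version produces the same result but stays fully visible to the optimizer.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import BooleanType

spark = SparkSession.builder.appName("udf-vs-builtins").getOrCreate()
df = spark.read.parquet("events.parquet")  # hypothetical dataset and column names

# The tempting one-liner: wrap the logic in a Python UDF. The optimizer sees
# only an opaque function, so nothing can be pushed down, and every row makes
# a round trip to a Python worker.
is_us = F.udf(lambda s: s is not None and s.strip().lower() == "us", BooleanType())
opaque = df.filter(is_us(F.col("country")))

# A few more lines of built-in expressions: same result, but the predicate
# stays inside the JVM where the optimizer can reason about it, reorder it,
# and prune columns it never needs.
transparent = df.filter(F.lower(F.trim(F.col("country"))) == "us")

opaque.explain()       # plan shows an opaque Python eval step feeding the filter
transparent.explain()  # plan shows a plain filter the optimizer fully understands
```

Comparing the two `explain()` outputs is usually enough to show exactly where the black box sits in the plan.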
Bad practices like that occur over and over again, and everyone just figures, "Well, we have a lot of hardware. If the job takes an extra half hour, NBD." Soon you have scores of jobs that take eight hours to run, and everyone starts to get a little uneasy because the infrastructure is failing jobs on account of bad data skew and vertices exceeding their predefined limits.
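For what it's worth, those skew failures usually come down to one hot key landing on a single task until it blows past its limits, and the usual band-aid is salting the join key. A rough sketch, with the table names, column names, and bucket count all made up:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-salting").getOrCreate()
N = 32  # assumption: number of salt buckets, tuned to the observed skew

facts = spark.read.parquet("facts.parquet")  # large side, heavily skewed on "key"
dims = spark.read.parquet("dims.parquet")    # smaller side of the join

# Scatter the skewed side across N salt buckets...
facts_salted = facts.withColumn("salt", (F.rand() * N).cast("int"))

# ...and replicate the other side once per bucket so every row still finds its match.
salts = spark.range(N).select(F.col("id").cast("int").alias("salt"))
dims_salted = dims.crossJoin(salts)

joined = facts_salted.join(dims_salted, on=["key", "salt"]).drop("salt")
```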
How did we get here? We severely over-optimized for engineer time to the detriment of CPU time. There's certainly a balance to strike. But when writing one line of code instead of six (and I'm not being hyperbolic here) becomes preferable to really understanding what your system is doing, you reap what you sow.
On the plus side, I get to come in and make things run 5x, 10x, maybe even 20x faster with very little work. It sometimes feels magical, but it would be better if we had enough appreciation for performance not to let our code slowly descend into gross inefficiency in the first place.