A lot of young data scientists and analysts (and IT types in general) code in a way that solves the immediate problem.
But with a bit of time you start to realise that the initial brief is only part one: an executive or customer will change their mind 15 times before the end of the project. What seems like the same problem now will not be the same problem in two weeks, let alone two years. Doubly so if you're interacting with entities or data sources that aren't run by software engineers.
Over the long run, excessive modularisation creates what I'll call Frankenstein programs. You've been tasked with making a man, and what you end up with is a shambling golem made from rotting, pulsating, mutating parts all stitched and held together. If you're really unlucky, it will evolve further into the Akira/Tetsuo program, where you begin to lose control of the mutations until it self-destructs under its own complexity.
The interesting part is that the answer to this can also be partly found in nature: you modularise and specialise, but you also make strategic choices where you're deliberately redundant.
Too much redundancy is spaghetti code. Modularisation and structure save you there.
Not enough redundancy leaves you vulnerable to changes in your environment and mutation as the project ages and evolves.
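As a concrete sketch of what deliberate redundancy can look like in practice (the function names and the shared `utils` module here are hypothetical, invented for illustration), one option is to keep a local copy of a tiny helper inside a report script rather than importing it from a shared module that a dozen other scripts also depend on:

```python
from datetime import datetime

# Hypothetical report script. Instead of importing a shared
# `utils.parse_date` (whose behaviour may drift as other scripts'
# needs change), we keep our own three-line copy. The duplication
# buys isolation: this script's output stays fixed even as the
# shared module mutates over the life of the project.

def parse_report_date(raw: str) -> datetime:
    # Deliberately redundant local helper.
    return datetime.strptime(raw.strip(), "%Y-%m-%d")

def monthly_label(raw: str) -> str:
    # The immediate task: turn "2024-03-07" into "2024-03" for grouping.
    return parse_report_date(raw).strftime("%Y-%m")
```

The trade-off is the usual one: a few duplicated lines versus a hidden coupling to code you don't control. For a helper this small, the redundancy is often the cheaper risk.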
As I've gotten older, I'm placing more and more value on the latter. Your mileage may vary...