> modern ML models
As an aside, this is a hilarious phrasing. What are we going to call these methods in a decade? You'd probably be better off phrasing it as deep neural networks.
Anyway, this is not really true. There are plenty of methods (for a broad catalog see: https://christophm.github.io/interpretable-ml-book/), and the DeepDream paper came out back in 2015, so interpreting deep models is clearly possible.
It's computationally expensive, and a lot of people don't see the value. But my argument is that if you want to use the model in the real world and have non-technical stakeholders, you'll need to do this. In my experience it's also the best way to actually improve a model.
And to be fair, if you just need to see how the predictions vary as a function of the inputs, you can hold all but one input constant and run a sweep of values for that one input through the model.
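That one-feature sweep is only a few lines in practice. A minimal sketch (the `model_predict` function is a stand-in for whatever trained model you have; the baseline vector and feature index are made up for illustration):

```python
import numpy as np

# Stand-in for a trained model: any callable mapping an (n, d) array
# to predictions. Stubbed here so the sketch runs end to end.
def model_predict(X):
    return 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2

baseline = np.array([1.0, 3.0, -2.0])  # a representative input, e.g. feature means
feature = 1                            # index of the feature to vary
grid = np.linspace(-5, 5, 50)          # values to sweep that feature over

# Hold every other feature at the baseline; vary only one.
X = np.tile(baseline, (len(grid), 1))
X[:, feature] = grid
preds = model_predict(X)

# preds[i] is the model's output with the chosen feature set to grid[i];
# plotting grid against preds gives a one-feature sensitivity curve.
```

This is essentially an individual conditional expectation (ICE) curve for one baseline point; averaging such curves over many baselines gives a partial dependence plot.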
Again, this can all be done, but I think it's more a question of will than capability (hence the context of my original comment).