ML doesn't just fail silently; by design, because ML is based on error minimization, it fails in a way that makes it maximally hard to tell a real result from random garbage. Remarkably, this subtlety is lost on most people, which is a real surprise. My introduction to it was in structural biology, where you always hold out data and check performance on the hold-out set, because overfitting is such a problem.
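To make the hold-out point concrete, here's a minimal sketch of that check, assuming scikit-learn and a synthetic dataset (the model, data, and split are all illustrative, not from any particular project):

```python
# Minimal sketch of a hold-out check; dataset and model are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))             # synthetic features
y = X[:, 0] + 0.1 * rng.normal(size=500)   # signal in one feature plus noise

# Hold out data the model never sees during fitting.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

train_err = mean_squared_error(y_train, model.predict(X_train))
hold_err = mean_squared_error(y_hold, model.predict(X_hold))

# A large gap between training and hold-out error is the overfitting signal;
# low training error alone is exactly the silent failure that looks like success.
print(f"train MSE: {train_err:.3f}  hold-out MSE: {hold_err:.3f}")
```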
Absolutely, the result you get is "the best you can do given the baked-in assumptions". But of course the assumptions can be wrong. And it takes time to learn how to evaluate and revise your assumptions in any analytical field, hard or soft.
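As a toy illustration of "the best you can do given the baked-in assumptions", here's a hypothetical sketch using NumPy: a least-squares line fit to quadratic data still converges to the best possible *linear* fit, and only revising the assumption exposes how wrong it was.

```python
# Hypothetical example: error minimization under a wrong (linear) assumption.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = x**2 + 0.2 * rng.normal(size=x.size)   # true relationship is quadratic

# Least-squares dutifully returns the best fit the linearity assumption allows.
slope, intercept = np.polyfit(x, y, deg=1)
mse_linear = np.mean((y - (slope * x + intercept)) ** 2)

# Revising the assumption (allow a quadratic term) drops the error dramatically.
coeffs = np.polyfit(x, y, deg=2)
mse_quad = np.mean((y - np.polyval(coeffs, x)) ** 2)

print(f"linear-fit MSE: {mse_linear:.3f}  quadratic-fit MSE: {mse_quad:.3f}")
```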