It has been demonstrated countless times that it's possible to extract training data from models. I can't see how you could prove the opposite, except maybe with federated learning (and even then, you need a good "ratio" of noise, as in the sketch below).
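To make the "ratio" of noise concrete: in differentially private federated aggregation, each client's update is typically clipped and then perturbed with Gaussian noise whose scale relative to the clipping bound (the noise multiplier) determines how much leakage protection you get. A minimal sketch, assuming this is what the ratio refers to; the function name and parameters here are illustrative, not from any particular library:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update and add Gaussian noise (DP-style sketch).

    `noise_multiplier` is the noise-to-clipping-bound "ratio": too low and
    training data may still leak, too high and the model learns nothing useful.
    """
    rng = rng or np.random.default_rng()
    # Bound each client's influence by clipping the update's L2 norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Add noise scaled to the clipping bound before sending to the server.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: privatize a fake gradient update before aggregation.
update = np.random.randn(10)
print(privatize_update(update))
```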
Is "innocent until proven guilty" not a maxim in European justice?