It doesn't matter what context I think about it in. It isn't going to work! And it will make things worse for everyone involved.
Hypothetically, let's say we get to a point where everyone believes the detection is 100% accurate. Well, that's all that means: everyone believes it. Meanwhile, AI has only gotten better, and we're all more fooled than we were before. All we're really accomplishing is providing the training data AI needs to elude detection.
And there will be an inherent bias toward false positives, because a high detection rate will be the selling point. The truth is secondary, and there's no way to verify the results anyway.