People who are doing those harmful things with AI aren’t going to stop because of a policy. They are just going to lie and not admit their submissions are AI-generated.
At that point, you will still have to review the code and reject it if it is poor quality, just like you had to before the AI policy existed. The policy doesn’t make it any easier to filter out the bad-faith AI submissions.
In fact, if we DO develop an efficient way to weed out the bad-faith PRs that lie about using AI, then why do we need the policy at all? Just use that same system to weed out the bad submissions and skip the policy completely.