Might as well say "You can tell by the way it is".
You're staking your personal reputation on the output of something you can expect to be wrong. When someone gets a suspicious email, follows your advice, and ChatGPT incorrectly assures them that it's fine, the person who gets scammed would be right to conclude you give bad advice.
And if you don't believe my arguments, maybe just ask ChatGPT to generate a persuasive argument against using ChatGPT to identify scam emails.
2. If the AI detects a scam, sure, it's a scam. But if the AI says the email is fine... then what? I'd never trust that.