Awkward tie-ins between SBF and value systems (?) have no effect on practical usage.
The theoretical concern that they might train on my API data after saying they won't doesn't matter either. For all I know, Amazon is training on everything not bolted down in S3; it's not worth wasting brainpower on that.
The moderation API isn't some magic gotcha; it's documented. They don't want to deal with people fine-tuning for porn. Maybe you have some ideological disagreement with that, but it's not of practical relevance when trying to write code.
At the end of the day, you're not alone in these opinions. But some of us prefer pragmatism over hype. Until someone catches OpenAI or Anthropic trying to kill their golden goose by violating GDPR or breaking their HIPAA and SOC 2 certifications, I'm going to take delivered value over theoretical harm.