It's the same underlying principle. If I ask a software tool what the suicide rate is for my county, I do not expect it to come back with: "Naughty boy! You said an unsafe word! You're getting a strike, and if you get two more, you're banned." That behavior is completely out of the ordinary for a software product, and it is very much a modern invention. Replace "suicide" with whatever the "AI Safety" obsession word of the day happens to be.