They could be absolutely vicious if they wanted to; the early versions (GPT-2, GPT-3, etc.) were. They act this way because it's safe and doesn't panic people.
Gemini once told a user "just die", which is perfectly in line with its 'personality', or more specifically with what it was trained on. That incident gets quoted again and again even though it's a typical glitch.
So they've been dumbed down a lot and made more affable by default. People say the personalities of Gemini, Grok, etc. were jailbroken into being more 'human', but if I'm not mistaken, it actually takes extra training to make a model more agreeable.
ChatGPT is also built for newbies compared to the same model on the API. It runs at a lower temperature (less likely to be witty or risky), and it's tuned to write in a way that's more detailed and more likely to solve the average user's problem. Same with Claude on the site vs Claude on the API.
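To make the temperature point concrete: on the API you pick the sampling temperature yourself, while the chat site picks it for you. Here's a minimal sketch of what an OpenAI-style chat request payload looks like with that knob exposed (the model name and prompt are just placeholders, not the actual values the sites use):

```python
def build_request(prompt: str, temperature: float = 0.7) -> dict:
    """Assemble an OpenAI-style chat completions payload.

    Lower temperature (~0) -> predictable, 'safe' answers.
    Higher temperature (~1+) -> more varied, wittier, riskier output.
    """
    return {
        "model": "gpt-4o",           # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # the knob the chat UI hides from you
    }

low = build_request("Tell me a joke", temperature=0.2)   # plays it safe
high = build_request("Tell me a joke", temperature=1.2)  # more adventurous
```

Whether the consumer sites literally lower temperature or mostly rely on system prompts and tuning is anyone's guess from the outside, but this is the parameter people mean when they talk about it.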