You're conflating several different systems at play.
OpenAI has specific goals for ChatGPT, related to their profitability. They optimize ChatGPT for that purpose.
ChatGPT itself is an optimizer (search is an optimization problem). The "helpful and accurate text generator" persona is not a goal ChatGPT has - it's just a blob of tokens prepended to the user prompt to bias the search through latent space. It's not even hardcoded. ChatGPT has its own goals, but we don't know what they are, because they were never given explicitly. But if you observed the way it encodes and moves through latent space, you could, in theory, eventually glean them. They probably wouldn't make much sense to us - they're an artifact of the training process and the training dataset selection. But they are there.
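To make the "blob of tokens" point concrete, here's a minimal sketch of the mechanism. Everything below is hypothetical - real chat systems add role markers and use a learned tokenizer - but the principle is the same: the system prompt is not a separate control channel, just text concatenated in front of the user's input before the model sees anything.

```python
# Hypothetical sketch: the "system prompt" is just prepended text.
# The model receives one flat token sequence; the prepended tokens
# bias which regions of latent space the search moves through.

SYSTEM_PROMPT = "You are a helpful and accurate assistant."  # just tokens, not a goal

def build_model_input(user_prompt: str) -> list[str]:
    """Concatenate system and user text, then tokenize naively."""
    full_text = SYSTEM_PROMPT + "\n\n" + user_prompt
    return full_text.split()  # whitespace split stands in for a real tokenizer

tokens = build_model_input("What is the capital of France?")
# The model can't distinguish "instructions" from "input" at this level -
# it's all one sequence, which is also why prompt injection works.
print(tokens[:5])
```

Nothing here is enforced or hardcoded: swap the string and you've swapped the "persona", with no change to the model's actual (opaque) objectives.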
Our goals... are stacks of multiple systems. There are the things we want. There are the things we think we want. There are things we do, and then we are surprised, because they aren't the things we want. And then there are things so basic we don't even talk about them much.