In my experience, Codex / ChatGPT are better at telling you where you're wrong, where your assumptions are incomplete, etc., and better at following the system prompts.
But more importantly, as a coding agent, it follows instructions much better. I've frequently had Claude go off and do things I explicitly told it not to do, or churn out far more code than needed and get it wrong, and corralling it takes more effort than I'm willing to spend.
Codex follows instructions better. Right now it also writes code that I find a few notches above Claude's, though I'm working with C# and SQL, so YMMV; Claude is terrible at coming up with a decent schema. When your instructions do leave some leeway, I find Codex's "judgment" better than Claude's. And one little thing I like a lot: it will look at adjacent code in your project and try to write idiomatically for your project/team. I haven't seen Claude exhibit this behavior; it writes very middle-of-the-road code in both style and behavior.
But when I use them, it's in a very targeted fashion. If I ask one to find and fix a bug, the prompt has at least as much detail as a full bug report in my own ticketing system. If it's new code, it comes with a long, detailed spec: what is needed, what is explicitly not needed, the scope, the constraints, what output is expected, and so on, as if it were a wiki page or epic for another real developer to work from. I don't do vague prompts or "agentic" workflow stuff.
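To give a rough sense of what one of those specs looks like (every detail below is invented for illustration, not a real ticket), the structure is something like:

    Goal: add CSV export to the monthly reports page
    In scope: export button, server-side CSV generation, respect the existing filters
    Explicitly out of scope: XLSX, scheduled exports, any changes to the report queries
    Constraints: follow the existing controller/service layering, no new dependencies
    Expected output: the diff, plus a note on anything it had to assume

Writing that up front takes longer than a one-liner, but it's what lets the better instruction-following actually pay off.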