ChatGPT is incredibly good at generating plausible text, but it rarely holds up to detailed scrutiny. It writes like a human who doesn't know what they are talking about but has chosen to bullshit rather than admit they don't know.
I, at least, would have expected more understanding to be necessary to reach ChatGPT's level of plausibility. I would not have expected a bot that cannot handle two-digit multiplication at all to say anything even vaguely convincing about primes, yet ChatGPT can.
Being thoroughly surprised by how ML improves and then restating your expectations is not the same as raising the bar.