I was already worried about ChatGPT-like systems generating mass-produced nonsense and polluting the internet, but if people are also going to edit ChatGPT output just enough to make it seem right (a mechanism I hadn't considered before), that could make the nonsense much harder to detect.
I totally understand the reasoning, though; it sounds like a productive workflow.