I found a way to 'de-smell' LLM copy: tell it to take a second pass that processes the text output with the William Burroughs cut-up method. Works well for a small subset of use cases.
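For the curious, the classic cut-up operation itself is easy to sketch. A minimal Python version, assuming plain word-level fragments (the actual "second pass" above is just a prompt to the LLM, not code; fragment size and shuffling are my assumptions):

```python
import random

def cut_up(text, fragment_words=3, seed=None):
    """Burroughs-style cut-up: slice text into short word fragments,
    shuffle them, and rejoin. Illustrative only -- the post applies
    this via a prompt, not a script."""
    words = text.split()
    # Group words into fixed-size fragments.
    fragments = [
        " ".join(words[i:i + fragment_words])
        for i in range(0, len(words), fragment_words)
    ]
    rng = random.Random(seed)
    rng.shuffle(fragments)
    return " ".join(fragments)

print(cut_up("the quick brown fox jumps over the lazy dog again", seed=0))
```

Every word survives the shuffle; only the ordering of fragments changes, which is what breaks up the too-smooth cadence.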
Presumably the smelly AI text problem is just ... a problem that will be solved. Or maybe we'll just get used to it.