LLMs won't add information to the context, so if the output is longer than the input, the extra length is slop. What they are much better at is picking information out of context. If I have a corpus of information and prompt for an extraction, the result may well contain more information than my prompt did, because that information came from the corpus. It's not always feasible to hand someone the entire corpus, and by extracting I've curated a specific result that conveys the message I intend.
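Concretely, the pattern I mean looks something like this. This is a minimal sketch assuming the OpenAI Python client; the model name, corpus path, and question are placeholders, not a recommendation:

```python
# Extraction, not generation: ask the model to pull an answer
# out of a corpus that is already in its context.
# Assumes the OpenAI Python SDK; model/path/question are placeholders.
from openai import OpenAI

client = OpenAI()

with open("corpus.txt") as f:  # placeholder corpus file
    corpus = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer strictly from the provided corpus. "
                    "Quote the relevant passages; do not pad."},
        {"role": "user",
         "content": f"Corpus:\n{corpus}\n\nQuestion: <your question here>"},
    ],
)

# The answer can carry more information than the question did,
# but none that wasn't already in the corpus.
print(response.choices[0].message.content)
```

The curation step is still mine: what I pass on is the result I've read and judged suitable, not raw model output.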
This does all take effort.
My take is also that I'm interested in what people have to say: I have priors on how worthwhile it is to read a given person's writing, and I update them whenever they give me something to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.