That does somewhat depend on the size of the context.
LLMs won't add information to context, so if the output is larger than the input then it's slop. They're much better at picking information out of context. If I have a corpus of information and prompt for an extraction, the result may well contain more information than the prompt itself. It's not always feasible to share the entire context, and I've curated that specific result as the one that conveys the message I intend to convey.
This does all take effort.
My take is also that I am interested in what people say: I have priors for how worthwhile I expect it to be to read stuff written by various people, and I will update my priors when they give me things to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.
> LLMs won't add information to context, so if the output is larger than the input then it's slop
That doesn't align with my observations. They can often add information to the context. Sure, it's information I could have added myself, but they save me the time. They also do a great job of taking relatively terse context and expanding on it so that it's more accessible to someone who lacks that context. Brevity is often preferable, but that doesn't mean a larger output is necessarily slop.