On one hand, yes: expanding bullet points to slop makes things strictly worse.
On the other hand, if one uses AI but keeps content density constant (e.g. grammar fixes for non-native speakers) or even reduces it (e.g. "compress this repetitive paragraph"), I think it can be a genuine net productivity boost.
Current AI can't really add information, but a lot of editing is subtraction, and as long as you check the output for hallucinations (and prompt-engineer a fair amount, since models like to add), imo LLMs can be a subtraction force-multiplier.
Ironically: anti-slop; or perhaps, fighting slop with slop.
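For concreteness, here's a minimal sketch of that subtraction-only workflow, assuming the OpenAI Python SDK; the prompt wording, model name, and guardrail checks are illustrative placeholders, not a recommendation.

```python
# Minimal sketch of "subtraction-only" prompting: ask the model to compress,
# then do a cheap sanity check that it didn't sneak new content in.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the model name
# and the checks below are placeholders, not recommendations.
import re
from openai import OpenAI

client = OpenAI()

COMPRESS_PROMPT = (
    "Compress the following text. Remove repetition and filler. "
    "Do NOT add any facts, names, numbers, or claims that are not in the original. "
    "Keep the author's wording wherever possible.\n\n{text}"
)

def compress(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": COMPRESS_PROMPT.format(text=text)}],
        temperature=0,
    )
    out = resp.choices[0].message.content.strip()

    # Crude guardrails against the model's tendency to add:
    # the result should be shorter, and any numbers it contains
    # should already appear in the source.
    assert len(out) < len(text), "output is not shorter than input"
    new_numbers = set(re.findall(r"\d+", out)) - set(re.findall(r"\d+", text))
    assert not new_numbers, f"model introduced numbers not in source: {new_numbers}"
    return out
```

None of this replaces reading the output yourself, but mechanical checks like these catch the laziest additions cheaply.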