I wish we'd reach a point where we expect (as a matter of online etiquette) upfront disclaimers on predominantly AI-generated articles, so that we can save a few seconds and directly get our agents to read and summarize them.
Even when it's not slop, the verbosity of poorly edited AI-generated content is a micro-aggression against readers. The prompter expects readers to read what they couldn't be bothered to properly edit.
Pushing AI-slop code without review, and without explicit warnings, is a macro-aggression against your colleagues, collaborators, and future agents. You are expecting everybody around you to maintain and refactor what you couldn't be bothered to review.