Thing is, this has always been the case. One of the problems with LLM-assisted coding is the idea that, just because we're in a new era (and we certainly are), the old rules can all be discarded.
The title doesn't go far enough - code (AI-generated or otherwise) can work and pass all the tests, and still be slop.
IMO LLMs are forcing us in the other direction.
To get the maximum ROI from LLM-assisted programming, the codebase needs proper unit tests, integration tests, correctly configured linters, accessible documentation, and a well-managed git history (Claude actually checks the git history nowadays to see when a feature was added if it has a bug).
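For what it's worth, that git-history trick is easy to reproduce yourself. Here's a minimal sketch of the kind of pickaxe lookup an agent can run to find when a feature landed - assuming a Python environment and `parse_config` as a made-up identifier:

```python
import subprocess

def commits_touching(term: str) -> str:
    """Ask git's pickaxe search (-S) which commits added or removed `term`.

    This is roughly the lookup a coding agent performs when it wants to
    know when a (possibly buggy) feature was introduced.
    """
    result = subprocess.run(
        ["git", "log", "-S", term, "--oneline"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical identifier: lists every commit that changed how often
# "parse_config" appears in the tree, newest first.
print(commits_touching("parse_config"))
```

A tidy history makes that output actually useful; a pile of "wip" commits makes it noise, for the agent as much as for you.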
Worst case, if the AI bubble suddenly bursts, we'll still have proper tests and documentation. Best case, we can skip the boring bits because, backed by a robust test suite, the LLM is "smart" enough to handle the low-hanging fruit reliably.
The difference is that if it works and passes the tests, I don't feel like it's a total waste of my time to look at the PR and tell you why it's still slop.
If it doesn't even work, you're absolutely wasting my time with it.