I wish they wouldn’t call it “AI slop” before acknowledging that most of the bug reports are valid.
Let’s bring a bit of nuance to the distinction between mindless drivel (e.g. LinkedIn influencer posts, spammed issues full of LLM mistakes) and using LLMs to find and build useful things.
It can be correct and slop at the same time. The reporter could have reported it in a way that makes it clear a human reviewed and cared about the report.
Slop is a function of how the information is presented and how the tools are used. People don't care if you use LLMs when they can't tell you used them; they care when you send them a bunch of bullshit with 5% of value buried inside it.
If you're reading something and you can tell an LLM wrote it, you should be upset. It means the author doesn't give a fuck.
If I read the sentence correctly, they're saying that past reports were AI slop, but the state of the art has advanced and current reports are valid. This matches trends I've seen on the projects I work on.
I think they are saying exactly what you want them to say. In the past they got a bunch of AI slop, and now they are getting a lot of legit bug reports. The implication is that the AI got better at finding (and writing reports of) real bugs.