This is the kind of situation where everything sucks. You'd think that one of the biggest AI conferences out there would have seen this coming.
On the one hand (and the most important thing, IMO), it's really bad to judge people on the basis of "AI detectors", especially when this can have an impact on their career. They're also used in education, and that sucks even more. AI detectors have bad error rates, can't detect concentrated efforts (i.e. finetunes will trick every detector out there, I've tried), can have insane false positives (the first ones that got to "market" were rating the Declaration of Independence as 100% AI written), and at best they'll only catch the most vanilla outputs.
On the other hand, working with these things and just being online, it's impossible to say I don't see the signs everywhere. Vanilla LLMs fixate on certain language patterns, and once you notice them, you see them everywhere. It's not just x; it was truly y. Followed by one supportive point, a second supportive point, and a third supportive point. And so on. Coupled with that vague overview style and not much depth, it's really easy to call blatant generations when you see them. It's like everyone writes in LinkedIn-infused manic episodes now. It's getting old fast.
So I feel for the people who got slop reviews. I'd be furious. Especially when it's a faux pas to call it out.
I also feel for the reviewers who may have gotten caught in this mess for merely "spell checking" their (hopefully) human-written reviews.
I don't know how we'll fix it. The only reasonable thing for the moment seems to be drilling into everyone that, at the end of the day, they own their stuff. Be it homework, a PR, or a comment on a blog. Some are obviously more important than others, but still. Don't submit something you can't defend, especially when your education/career/reputation depends on it.
It also permeates the culture to the point that people imitate the LLM style because they believe that's just what you have to do to get your post noticed. The worst offender is that LinkedIn-type post
Where you purposefully put spaces.
Like this.
And the kicker?
You get my point. I don't see a way out of this in the social media context because it's just spam: producing the slop takes an order of magnitude less effort than parsing it. But when it comes to peer reviews and papers, I think some kind of reputation system might help. If you get caught doing this shit, you need to face some consequence.
Not just spell checking, but translation too. English is not the first language for most reviewers.
But you can see the slippery slope: first you ask your favorite LLM to check your grammar, and before you know it, you're just asking it to write the whole thing.