Writing reviews isn’t, like, a test or anything. You don’t get graded on it. So I think it is wrong to think of this tool as cheating.
Reviewers are professional researchers, and doing reviews is part of their professional obligation to their research community. If people are using LLMs to do reviews fast and sloppily, they are shirking their responsibility to their community. If they use the tools to do reviews fast and well, they've satisfied the requirement.
I don’t get it, really. You can just say no if you don’t want to do a review. Why do a bad job of it?
The "cheating" in this case is failing to accept one's responsibility to the research community.
Every researcher needs to have their work independently evaluated by peer review or some other mechanism.
So those who "cheat" on doing their part during peer review by using an AI agent devalue the community as a whole. They expect that others will properly evaluate their work, but do not return the favor.
As I understand it, the restriction on LLMs has nothing to do with getting poor-quality AI reviews. Like you said, you're not really getting graded on it. Instead, the restriction is in place to limit the possibility of an unpublished paper getting "remembered" by an LLM. You don't want unpublished work accidentally being absorbed into a model as fact (mainly to protect the novelty of the authors' work, not the purity of the LLM).
> If they use the tools to do reviews fast-and-well, they’ve satisfied the requirement.
That's a self-contradicting statement. It's like saying mass warrantless surveillance is ethical as long as it's done constitutionally.
> Writing reviews isn’t, like, a test or anything. You don’t get graded on it. So I think it is wrong to think of this tool as cheating.
Except that since last year, it kind of is. Some large conferences (such as CVPR) now require authors to serve as reviewers if they submit a paper. Failing to review, or submitting negligent reviews, can lead to a desk reject of their submission.