Just to be clear, these are hidden prompts put in papers by authors meant to be triggered only if a reviewer (unethically) uses AI to generate their review. I guess this is wrong, but I find it hard not to have some sympathy for the authors. Mostly, it seems like an indictment of the whole peer-review system.
Back in high school a few kids would be tempted to insert a sentence such as "I bet you don't actually read all these papers" into an essay to see if the teacher caught it. I never tried it, but the rumor was that some kids had gotten away with it. I just used the idea to worry less that my work was rushed and not very good; I told myself, "the teacher will probably just be skimming this anyway; they don't have time to read all these papers in detail."
I wouldn't say it's wrong, and I haven't seen anyone articulate clearly why it would be wrong.
Doesn't feel wrong to me. Cheeky, maybe, but not wrong. If everyone does what they're supposed to do (i.e. no LLMs, or at least no lazy "rate this paper" prompt with the reply copy-pasted back), then this practice makes no difference.
Is it wrong? That feels more like a statement on the state of things than an attempt to exploit.
The basic incentive structure doesn’t make any sense at all for peer review. It is a great system for passing around a paper before it gets published, and detecting if it is a bunch of totally wild bullshit that the broader research community shouldn’t waste their time on.
For some reason we decided to use it as a load-bearing process for career advancement.
These back-and-forths, half-assed papers and reviews (now half-assed with AI augmentation), are just symptoms of the fact that we're using a perfectly fine system for the wrong things.
I have a very simple maxim, which is: If I want something generated, I will generate it myself. Another human who generates stuff is not bringing value to the transaction.
I wouldn't submit something to "peer review" if I knew it would result in a generated response, and peer reviewers who are being duplicitous about it deserve to be hoodwinked.
AI "peer" review of scientific research without a human in the loop is not only unethical, I would also consider it wildly irresponsible and down right dangerous.
I consider it a peer review of the peer-review process.