Hacker News

vladms · yesterday at 7:06 PM

Was there an analysis of flawed, low-effort reviews in similar conferences before generative AI models?

From what I remember (long before generative AI), you would still occasionally get very crappy reviews as an author. When I participated (a couple of times) in review committees, whenever there was high variance between reviews the crappy ones were rather easy to spot and eliminate.

Now, it's not a bad thing to detect crappy (or AI-generated) reviews, but I wonder whether it would change the end result much compared to other potential interventions.


Replies

maxspero · yesterday at 10:13 PM

Anecdotally, people are seeing a rise in low-quality reviews, which is correlated with increased reviewer workload and with AI tools giving reviewers an easy way out. I don't know of any studies quantifying review quality, but I would recommend checking the Peer Review Congress program from past years.