Hacker News

More than half of researchers now use AI for peer review, often against guidance

58 points by neilv, today at 5:20 AM | 37 comments

Comments

TomasBM, today at 9:28 AM

The reasons listed in TFA - "confidentiality, sensitive data and compromising authors' intellectual property" - make sense as grounds for discouraging reviewers from using cloud-based LLMs.

There are also reasons to discourage the use of LLMs in peer review at all: it defeats the "peer" in peer review; hallucinations; criticism that isn't relevant to the community; and so on.

However, I think it's high time to reconsider what scientific review is supposed to be. Is it really important to have so-called peers as gatekeepers? Are there automated checks we can introduce to verify claims or ensure quality (like CI/CD for scientific articles), and leave content interpretation to the humans?
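
For instance, here's a purely hypothetical sketch of one such check (nothing in TFA describes this; the file name, regex, and threshold are mine): a pre-submission pipeline could verify that every DOI cited in a manuscript actually resolves, the way CI verifies that tests pass.

    import re
    import requests

    # Hypothetical "CI for papers" check: every DOI cited in the
    # manuscript should resolve at doi.org before a human reads it.
    DOI_RE = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

    def check_dois(manuscript_path):
        text = open(manuscript_path, encoding="utf-8").read()
        broken = []
        for doi in sorted(set(DOI_RE.findall(text))):
            r = requests.head(f"https://doi.org/{doi}",
                              allow_redirects=True, timeout=10)
            if r.status_code >= 400:
                broken.append(doi)
        return broken  # non-empty means the "build" fails, like a red CI run

    if __name__ == "__main__":
        bad = check_dois("manuscript.txt")
        print("FAIL" if bad else "PASS", bad)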

Let's make the benefits and costs explicit: what would we be gaining or losing if we just switched to LLM-based review, and left the interpretation of content to the community? The journal and conference organizers certainly have the data to do that study; and if not, tool providers like EasyChair do.

kachapopopow, today at 6:34 AM

I think it's interesting that AI is probably, counterintuitively, good at spotting fraud in papers, due to its ability to hold more context than the majority of humans. I wish someone would explore this to see whether it can spot academic fraud that isn't already in the training data.
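
For anyone who wants to poke at that: a rough sketch (the model name and prompt are placeholders, and this is just the stock OpenAI Python SDK, not any vetted fraud detector) of dumping a whole manuscript into a long-context model and asking it to cross-check itself:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    paper = open("manuscript.txt", encoding="utf-8").read()

    # Ask a long-context model to cross-check the entire paper at once,
    # something a reviewer reading section by section rarely does.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any long-context model
        messages=[
            {"role": "system",
             "content": ("Flag internal inconsistencies in this manuscript: "
                         "numbers that disagree between abstract, tables, and "
                         "text; methods that don't match reported results; "
                         "signs of duplicated or implausible data.")},
            {"role": "user", "content": paper},
        ],
    )
    print(resp.choices[0].message.content)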

D-Machine, today at 5:30 AM

Duplicate: https://news.ycombinator.com/item?id=46281961

D-Machine, today at 6:39 AM

Guidance needs to be more specific. Failing to use AI for search often means you are wasting a huge amount of time; ChatGPT 5.2 Extended Thinking with search enabled speeds up research obscenely, and I'd be more concerned if reviewers were NOT making use of such tools in their reviews.

Seeing the high percentage of AI usage for composing reviews is concerning, but peer review is also an unpaid racket that seems basically random anyway (https://academia.stackexchange.com/q/115231), and it probably needs to die given alternatives like arXiv, OpenReview, etc. I'm not sure how much I care about AI slop contaminating an area that might already be mostly human slop in the first place.

baalimago, today at 6:42 AM

They should do a study on this.

zeofig, today at 8:25 AM

This is because peer review has become a bullshit mill and AI is good at churning through/out bullshit.

bpodgursky, today at 6:30 AM

Journals need to find a way to give guidance on what is and isn't appropriate, and to let reviewers explain how they used AI tools... because you aren't going to nag people out of using AI to do UNPAID work 90% faster while producing results in the 90th-plus percentile of review quality (let's be real, there are a lot of bad flesh-and-blood reviewers).

N_Lens, today at 6:06 AM

News: Half of researchers lied on this survey
