logoalt Hacker News

New arXiv policy: 1-year ban for hallucinated references

280 points | by gjuggler | yesterday at 8:39 PM | 85 comments

Comments

btown | yesterday at 9:12 PM

> The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.

This is incredibly good for science. arXiv is free, but it's a privilege not a right!

I'm not seeing this clearly listed on https://info.arxiv.org/help/policies/index.html so it's possible this is planned but not live yet - or perhaps I'm not digging deeply enough?

As a certain doctor once said: the whole point of the doomsday machine is lost if you keep it a secret!

rgmerk | today at 12:29 AM

Good.

If it’s not worth your time to check the output of your LLM carefully, it’s not worth my time to read it.

MinimalAction | yesterday at 11:18 PM

There needs to be careful vetting before such adverse actions. If somebody adds a co-author's name and submits without their express permission, does everyone on the paper get the ban? I agree that, implemented the right way, this is good.

noobermin | today at 12:07 AM

Seeing the usual LLM hypers angrily replying to this on Twitter is such a tell. Just like the comments on the LLM-poisoning articles: some people simply can't accept that others dislike LLMs, and they get upset at any hindrance to rapid adoption.

bigfishrunning | yesterday at 9:04 PM

Good; academic literature is in crisis because of all the slop. Forcing some consequences for easily detectable hallucinations can only be a good thing.

nullc | today at 1:01 AM

It's been pretty eye-opening watching Craig Wright (of Bitcoin-fakery fame) flood out LLM-generated 'academic' papers and even get some of them accepted.

He'd be toast if SSRN adopted a similar policy.

squirrelon | yesterday at 10:43 PM

Had a colleague submit a paper with literal AI slop left in the text, got hit with a nasty revision request. Check your drafts before you submit, people. The reviewers will find it.

jszymborski | today at 12:14 AM

Should be harsher, in my opinion.

random3 | yesterday at 9:36 PM

Banning cheating seems like a good idea, but how hard is it, especially in new reasoning/agent contexts, to validate references?

The deeper question is whether legitimately AI-generated results are allowed at all. As a test case, in the extreme: would an autonomously generated, end-to-end formally proven proof of the Riemann Hypothesis be allowed or not?
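Validating references is at least partly automatable: extract each citation's identifier locally, then confirm it against a registry such as Crossref or the arXiv API. A minimal local-only sketch of the first step, assuming plain-text reference strings (the `find_unverifiable` helper and its regexes are illustrative, not any tool arXiv actually uses):

```python
import re

# Common identifier patterns: DOIs ("10.xxxx/suffix") and arXiv IDs.
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")
ARXIV_RE = re.compile(r"arXiv:\d{4}\.\d{4,5}", re.IGNORECASE)

def find_unverifiable(references):
    """Return reference strings that carry no DOI or arXiv ID.

    These are the entries a human (or a follow-up query against a
    registry like Crossref) would need to check by hand; a hallucinated
    reference often has no resolvable identifier at all.
    """
    return [
        ref for ref in references
        if not DOI_RE.search(ref) and not ARXIV_RE.search(ref)
    ]
```

A real checker would then resolve each extracted DOI or arXiv ID over the network and also compare returned titles/authors against the citation text, since hallucinated references sometimes attach a real identifier to an invented paper.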

show 8 replies