Hacker News

hodgehog11 · today at 10:30 AM

I'm amazed that such a simple method of detection worked so flawlessly for so many people. This would not work for those who merely used LLMs to help pinpoint strengths and weaknesses in the paper; there are separate techniques to judge that. Instead, it only detects those who quite literally copied and pasted the LLM output as a review.

It's incredible that so many people expected their own papers to be assessed by human reviewers alone, yet would not extend the same courtesy to others.


Replies

bonoboTP · today at 10:41 AM

I'm not surprised at all. The ML research community isn't a community any more; it's turned into a dog-eat-dog, low-trust, fierce competition. So many more people, papers, and churn that everyone is just fending for themselves. Any moment you charitably spend on community service can feel like a moment taken away from the next project: jeopardizing the next paper, getting scooped, delaying your graduation, your contract, your funding, your visa, your residence permit, your industry plans, etc. It's a machine. I don't think people outside the PhD system really understand the incentives involved. People are offered very little slack in this system. It's sink or swim, with very little instruction or scientific culture or integrity getting passed on.

The PhD students see their supervisors cut corners all the time too, authorship jockeying even in big-name labs, etc. People I've talked to are quite disillusioned; they expect their work to have little impact and to be superseded by a new, better model in a few months, so it's all about who can grind faster, who can twist the benchmarks into showing a minimal improvement, etc. And the starry-eyed novices get slapped by reality into thinking this way fairly early.

To be clear, this is not an excuse but an explanation of why I am not surprised.

jacquesm · today at 10:48 AM

This is 'spam' all over again. Before spam, every email was valuable and warranted some attention. It was a better version of paper mail in that it was faster and cheaper. But then spam happened, and suddenly being 'faster and cheaper' was no longer an advantage; it was a massive drawback. By then there was no way back. I think LLMs will do the same with text in general. By making the production of text faster and cheaper, they will diminish the value of all text, quite probably to something very close to the energy value of the bits that carry the data.

everdrive · today at 10:35 AM

Generally speaking, people have worse impulse control than they believe they do. Once you give people a tool that does most of the work for them, very few will actually be able to use that tool in truly enriching ways. The majority of people (even the smart ones) will weaken over time and take shortcuts.
