It's just a tool.
Writing papers is exhausting, and if the data and results are real, then what's the problem? If the human author checked the output, is that not the same as a human writing the prose?
Everyone in the field will be doing this in a few years anyway. It's a shame that this Salem Witch Trial is happening for the early adopters.
If the findings are being fabricated or the paper isn't being reviewed and corrected by the author, that's a different story. But I'd be shocked if that were the case.
This has nothing to do with whether it is ok to use AI or not, it is about whether it is ok to lie about using it.
This comment doesn't seem to fit the discussion at all?
The discussion is not about humans using LLMs to write papers. It is about humans who agreed not to use LLMs in reviewing papers, then did exactly that.
A hammer can be used to build a house, or to kill a person. We have a lot of history, law, and culture (and likely more) around using tools like hammers, so we know what counts as good use versus bad use. The same applies to many other tools.
LLMs can be very useful tools. However, we also know there are a lot of bad uses, and we are still figuring out where the problems are and where there are none.
They agreed to the no LLM policy.
> what's the problem?
Read the article. They self-selected into the no-LLM group and then copy/pasted from an LLM. Not only dishonest, but just not smart.
I consider LLMs a very useful tool and use them every day. But if I sign a slip of paper saying I won't use them for some project, and then use them anyway, and not merely use them but copy their output without even the pretense of putting it into my own words, then that's fraud. LLMs being a tool is completely orthogonal to this fraud.