Hacker News

jeroenhd · last Saturday at 7:32 PM

LLMs can find problems in logic, conclusions based on circumstantial evidence, common mistakes made in other rejected papers, and other suspect language, even if they haven't seen the exact sentence structures used in the input. You'll catch plenty of improvements to scientific preprints that way, because humans aren't as good at writing long, complicated documents as we might think we are.

Sometimes they'll claim that a noun can only be used as a verb, or insist you're Santa. LLMs can't be relied on to be accurate or truthful, of course.

I can imagine that non-computer-science people (and unfortunately some computer science people) believe LLMs are close to infallible. What's a biologist or a geographer going to know about the limits of ChatGPT? All they know is that the LLM did a great job spotting the grammatical issues in the paragraph they had it check, so it seems pretty legit, right?


Replies

pcrh · last Saturday at 7:36 PM

I don't doubt that LLMs can improve grammar. However, an original research paper should not be evaluated on the quality of its writing, unless the writing is so bad as to make the claims impenetrable.
