Hacker News

layer8 01/04/2026

The problem is that it’s distracting, lowers the quality of the writing, and one has to be cautious that random details might be wrong or misleading in a way that wouldn’t happen if it were completely self-authored.


Replies

naasking 01/04/2026

That's just not true, and even if LLMs did introduce more errors than humans, if you can't trust the author to proofread a summary article about his own papers, then you shouldn't trust the papers either.
