Hacker News

naasking · 01/04/2026 · 1 reply

That's just not true, and even if LLMs did introduce more errors than humans, if you can't trust the author to proofread a summary article about his own papers, then you shouldn't trust the papers either.


Replies

layer8 · 01/04/2026

I agree with the latter. The fact that they use an LLM for the summary post without rewriting it in their own words already makes me not trust their papers.
