That's just not true, and even if LLMs did introduce more errors than humans, if you can't trust the author to proofread a summary article about their own papers, then you shouldn't trust the papers either.
I agree with the latter. The fact that they use an LLM for the summary post without rewriting it in their own words already makes me not trust their papers.