The problem is that it’s distracting, lowers the quality of the writing, and one has to be cautious that random details might be wrong or misleading in a way that wouldn’t happen if it were completely self-authored.
That's just not true, and even if LLMs did introduce more errors than humans, if you can't trust the author to proofread a summary article about his own papers, then you shouldn't trust the papers either.