Hacker News

nkrisc · yesterday at 5:00 PM

> Pangram’s analysis revealed that around 21% of the ICLR peer reviews were fully AI-generated, and more than half contained signs of AI use. The findings were posted online by Pangram Labs. “People were suspicious, but they didn’t have any concrete proof,” says Spero. “Over the course of 12 hours, we wrote some code to parse out all of the text content from these paper submissions,” he adds.

But what's the proof? How do you prove (with any rigor) a given text is AI-generated?


Replies

slashdave · yesterday at 6:33 PM

"proof" was an unfortunate phrase to use. However, a proper statistical analysis can be objective. And these kinds of tools are perfectly suited to such an analysis.

nabla9 · yesterday at 5:49 PM

With an AI model, of course.

They wrote a paper describing how they did it. https://arxiv.org/pdf/2510.03154
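The linked paper describes a trained detection model. As a much cruder illustration of the general idea — that text carries measurable statistical signals — one heuristic sometimes cited in AI-text detection is "burstiness," the claim that human writing varies sentence length more than LLM output. This toy sketch is an assumption-laden illustration, not Pangram's method:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words -- a crude
    proxy for how much the writing varies sentence to sentence."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

uneven = "One. Two words here. A much much much longer sentence follows here indeed."
print(burstiness(uneven))  # nonzero: lengths vary a lot
```

A single feature like this is far too weak on its own; real detectors combine many signals learned from labeled data.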

whynotmaybe · yesterday at 5:27 PM

I wouldn't be surprised to learn that the AI detection tool is itself an AI

dkdcio · yesterday at 5:22 PM

> How do you prove (with any rigor) a given text is AI-generated?

You cannot. Beyond extra data (metadata) embedded in the content, it is impossible to tell whether a given text was generated by an LLM or not (and I think the distinction is rather puerile, personally).

ModernMech · yesterday at 5:21 PM

I have this problem when grading student papers. I "know" a great deal of them are AI-written, but I can't prove it, so I can't really act on my suspicions — students can just say what you just said.
