
DroneBetter | last Saturday at 3:29 PM

The problem is essentially the same as with generative adversarial networks: the capability to automatically detect some set of hallmarks of LLM output is equivalent to the capability to train against producing them, and LLMs are already trained to predict (i.e. be indistinguishable from) their source corpus of human-written text.

So the LLM detection problem is theoretically impossible for SOTA LLMs; in practice it could be easier, because the RLHF stage inserts idiosyncrasies.
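As a rough illustration of that equivalence, here is a toy adversarial loop in PyTorch (the architectures, feature dimension, and batch size are placeholders, not anything from the thread): whatever signal the detector learns, the generator can descend directly, which is why a reliable detector would imply a reliable evader.

    # Toy GAN-style loop: any differentiable detector hands the generator
    # the exact gradient it needs to erase the hallmarks being detected.
    import torch
    import torch.nn as nn

    dim = 16  # stand-in "text feature" dimension (illustrative)
    generator = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))
    detector = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1000):
        human = torch.randn(64, dim)            # proxy for human-text features
        fake = generator(torch.randn(64, dim))  # proxy for LLM-output features

        # Detector: separate human (label 1) from generated (label 0).
        d_loss = (loss_fn(detector(human), torch.ones(64, 1))
                  + loss_fn(detector(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator: train against the detector's own signal, removing
        # whatever hallmark it currently keys on.
        g_loss = loss_fn(detector(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()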


Replies

arendtio | last Saturday at 7:21 PM

Sure, a 100% reliable system is impossible, as you have laid out. However, if I understand the announcement correctly, this is about volume, and I wonder whether a tool could flag articles that show obvious signs of LLM usage.
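A minimal sketch of what such volume triage might look like, in Python; the phrase list and threshold are invented placeholders (the announcement specifies neither), and a flag would only queue an article for human review, not prove LLM authorship.

    # Toy heuristic flagger: count "obvious signs" and flag high scorers.
    STOCK_PHRASES = [
        "as an ai language model",   # hypothetical hallmarks, not a vetted list
        "it's important to note",
        "delve into",
    ]

    def flag_article(text: str, threshold: int = 2) -> bool:
        """Return True when an article trips enough hallmark phrases
        to be worth a human look; the threshold is arbitrary."""
        lowered = text.lower()
        hits = sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
        return hits >= threshold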