Hacker News

phoronixrly · today at 3:49 PM · 1 reply

[flagged]


Replies

logicprog · today at 4:46 PM

Regurgitation is pretty rare, and very difficult to coax out, if not outright impossible, for anything that isn't massively overrepresented in the training set relative to the size of that set. Even the famous regurgitation paper showed this: while most of the models could be made to regurgitate the first book of the Harry Potter series, only Claude 3.7 Sonnet reached a high nv-recall rate on any significant portion of the other books; recall dropped off precipitously for works like GoT, The Catcher in the Rye, and Beloved, and the models remembered almost nothing of The Da Vinci Code or Catch-22 [0]. So you need huge numbers of copies of a work in the training data to get meaningful regurgitation on any reliable basis. Thus, you'd have to actually prove that hypothesis rather than assume it.

[0]: https://arxiv.org/pdf/2601.02671