Hacker News

mikaraento today at 8:30 AM

That might be somewhat ungenerous unless you have more detail to provide.

I know that at least some LLM products explicitly check output for similarity to training data to prevent direct reproduction.
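For illustration, here is a minimal sketch of how such a check could work, using verbatim n-gram overlap against a pre-indexed corpus. The function names, the n-gram length, and the threshold are all hypothetical choices for this sketch, not any vendor's actual implementation (production systems likely use suffix arrays, Bloom filters, or fuzzier matching at much larger scale):

```python
def ngrams(text, n=4):
    """Return the set of word-level n-grams in `text`."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_like_training_data(output, corpus_index, n=4, threshold=0.5):
    """Flag `output` if a large fraction of its n-grams appear
    verbatim in the pre-built set of training-corpus n-grams."""
    grams = ngrams(output, n)
    if not grams:
        return False
    overlap = sum(1 for g in grams if g in corpus_index)
    return overlap / len(grams) >= threshold

# Toy usage: index a tiny "corpus", then test two candidate outputs.
corpus = "the quick brown fox jumps over the lazy dog near the river bank"
index = ngrams(corpus, n=4)
print(looks_like_training_data("the quick brown fox jumps over the lazy dog", index))   # near-verbatim -> True
print(looks_like_training_data("a totally unrelated sentence about other things", index))  # no overlap -> False
```

A real product would run something like this (or a refusal/rewrite step) after generation, before returning text to the user.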


Replies

guenthert today at 4:19 PM

Should they though? If the answer to a question^Wprompt happens to be in the training set, wouldn't it be disingenuous to not provide that?
