That might be somewhat ungenerous unless you have more detail to provide.
I know that at least some LLM products explicitly check output for similarity to training data to prevent direct reproduction.
Should they though? If the answer to a question^Wprompt happens to be in the training set, wouldn't it be disingenuous to not provide that?