People who say that LLMs memorize stuff are just as clueless as those who assume that there's any reasoning happening.
They generate statistically plausible answers (to put it simply) based on their training set and weights.
What if that’s all we’re doing, though?