> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.
What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize beyond their training set, applying patterns they learned from it to inputs they've never seen. How else, for example, could LLMs summarize new text that appeared nowhere in their training data?
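To make the point concrete with the addition example: a model doesn't need to be shown every sum individually if it learns the underlying rule. Here's a toy sketch (deliberately not an LLM, just a least-squares fit in plain Python) where a model fit on a handful of addition examples correctly predicts 4+5, a pair it was never trained on:

```python
# Toy illustration of generalization: fit y ≈ w1*a + w2*b on a few
# (a, b) -> a+b examples, then predict a sum absent from training.
train = [(3, 5), (1, 2), (2, 6), (7, 1), (0, 4)]
targets = [a + b for a, b in train]

# Solve the 2x2 normal equations (A^T A) w = A^T y by hand.
s_aa = sum(a * a for a, _ in train)
s_bb = sum(b * b for _, b in train)
s_ab = sum(a * b for a, b in train)
s_ay = sum(a * y for (a, _), y in zip(train, targets))
s_by = sum(b * y for (_, b), y in zip(train, targets))
det = s_aa * s_bb - s_ab ** 2
w1 = (s_bb * s_ay - s_ab * s_by) / det
w2 = (s_aa * s_by - s_ab * s_ay) / det

# (4, 5) never appeared in training, yet the learned rule handles it.
print(round(4 * w1 + 5 * w2, 2))  # → 9.0
```

The model recovers weights of roughly (1, 1), i.e. the rule "add the two inputs", so any unseen pair works. Real LLMs are vastly more complex, but the same principle of learning a pattern rather than memorizing examples is what makes generalization possible.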