LLMs do not output knowledge. They output statistically likely tokens in the form of words or word fragments. That is not knowledge, because LLMs do not know anything. It’s why they can give two opposing answers to the same question when only one is factual, and why they can crisply confirm your instructions while outputting something that isn’t at all what you asked for. The LLM has no concept of what it’s doing, and you can’t call non-deterministically generated tokens knowledge. You can call them approximations of knowledge, but not knowledge itself.
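To make the “statistically likely tokens” point concrete, here is a minimal sketch of temperature sampling over a toy next-token distribution. The prompt, the candidate tokens, and the probabilities are all invented for illustration; a real model produces scores over tens of thousands of tokens, not a hand-written table. What the sketch does show accurately is the non-deterministic step: the same prompt, sampled repeatedly, can produce different and even contradictory continuations.

```python
import random

# Toy next-token distribution for an invented prompt,
# e.g. "The capital of Australia is". Probabilities are made up
# for illustration only.
next_token_probs = {
    "Canberra": 0.55,    # the factually correct continuation
    "Sydney": 0.30,      # statistically common but wrong
    "Melbourne": 0.10,
    "a": 0.05,           # a fragment that drifts somewhere else entirely
}

def sample_token(probs, temperature=1.0):
    """Sample one token. Raising probabilities to 1/T and letting
    random.choices renormalize is equivalent to softmax(logits / T),
    the usual temperature trick. This is the non-deterministic step."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt", sampled five times, need not agree with itself:
for _ in range(5):
    print(sample_token(next_token_probs, temperature=1.2))
```

Run it a few times and you will sometimes get the correct answer, sometimes a wrong one, from exactly the same distribution. The model is not deciding which answer is true; it is drawing from a probability table.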