It's Plato's cave:
We train the models on what are basically shadows, and they learn how to pattern match the shadows.
But the shadows are only depictions of the real world, and the LLMs never learn about that.
But the same is true for humans: we get our information through our senses; we do not have the __real__ world directly.
100%