Hacker News

veqz · last Monday at 8:07 PM

It's Plato's cave:

We train the models on what are basically shadows, and they learn how to pattern match the shadows.

But the shadows are only depictions of the real world, and the LLMs never learn about that.


Replies

ebonnafoux · last Tuesday at 6:34 AM

But the same is true for humans: we get our information through our senses; we never have the __real__ world directly.

EternalFury · last Monday at 8:50 PM

100%