Hacker News

iib · today at 6:18 AM · 2 replies

I found Geoffrey Hinton's hypothesis about LLMs interesting in this regard. They have to compress world knowledge into a few billion parameters, a much denser encoding than the human brain's, so they have to be very good at analogies in order to achieve that compression.


Replies

TeMPOraL · today at 8:25 AM

I feel this has causality reversed. I'd say they are good at analogies because they have to compress well, which they do by encoding relationships in stupidly high-dimensional space.

Analogies could then sort of fall out of this naturally. It might really still be just the simple (yet profound) "King - Man + Woman = Queen" style vector math.
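That "King - Man + Woman = Queen" arithmetic can be sketched in a few lines. The embeddings below are toy 4-dimensional vectors I made up for illustration (real models like word2vec use hundreds of dimensions learned from data), chosen so that "royalty" and "gender" are roughly linear directions:

```python
import numpy as np

# Hypothetical toy embeddings, not from a real model: dimensions loosely
# stand for (royalty, maleness, femaleness, unrelated).
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.8, 0.0]),
    "apple": np.array([0.0, 0.0, 0.0, 0.9]),  # distractor
}

def cosine(a, b):
    # Cosine similarity: how aligned two vectors are, ignoring magnitude.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The analogy as vector arithmetic: king - man + woman ≈ queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(target, emb[w]))
print(best)  # → queen
```

In real embedding spaces the match is only approximate (nearest neighbor, not exact equality), but the same subtract-and-add pattern recovers many such analogies.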

bjt12345 · today at 6:53 AM

That's essentially the manifold hypothesis of machine learning, right?