Hacker News

dataviz1000 today at 2:17 PM

LLMs can only predict the next token.

They can't predict the consequences of an action by predicting one token after another. They can't solve a Rubik's Cube, unlike a 7-year-old human who can learn to do it in a weekend. They can't imagine the perspective of being a human being, unlike a 7-year-old human who, if asked, can imagine they were in the position of another human.
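For context, a minimal sketch of what "predicting the next token" means in practice, assuming the Hugging Face transformers library and the small "gpt2" checkpoint (both illustrative choices, not anything claimed in this thread):

    # Greedy autoregressive decoding: repeatedly pick the single most
    # likely next token and append it to the running sequence.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The cube is solved by", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits        # scores over the vocabulary
            next_id = logits[0, -1].argmax()  # most likely next token only
            ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tokenizer.decode(ids[0]))

Whether this loop amounts to "only" next-token prediction, or can in aggregate plan ahead, is exactly the point being argued below.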


Replies

DoctorOetker today at 2:32 PM

Those are very strong claims. Do you really believe an LLM can't be trained to solve Rubik's Cubes?

Can you imagine what it feels like to be an LLM?

Can one LLM have a better sensation of what it feels like to be a different LLM (say, one that scores a little better)?

You design circularly defined criteria...
