
adamtaylor_13 · yesterday at 1:53 PM

If LLMs just regurgitated compressed text, they'd fail on any novel problem not in their training data. Yet they routinely solve such problems, which means whatever's happening between input and output is more than retrieval, and calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains.


Replies

sfn42 · yesterday at 3:43 PM

Yes, there are some fascinating emergent properties at play, but when they fail it's blatantly obvious that there's no actual intelligence or understanding. They are very cool and very useful tools. I use them on a daily basis now, and the way I can just paste a vague screenshot with some vague text and they get it and give a useful response blows my mind every time. But it's very clear that it's all smoke and mirrors: they're not intelligent, and you can't trust them with anything.

varispeed · yesterday at 2:03 PM

They don't solve novel problems. But if you believe so strongly that they do, please give us examples.

otabdeveloper4 · yesterday at 7:47 PM

> they'd fail on any novel problem not in their training data

Yes, and that's exactly what they do.

No, none of the problems you gave the LLM while toying around with it are in any way novel.
