Hacker News

encyclopedism | today at 4:14 PM | 3 replies

The correct conclusion to draw, and one worth reiterating:

LLMs do not think, understand, reason, reflect, or comprehend, and they never shall.

I have commented elsewhere, but this bears repeating:

If you had enough paper and ink, and the patience to work through it, you could take all the training data and train the same model entirely by hand. Once the model was trained, you could use even more pen and paper to step through a prompt and arrive at the same answer. All of this would be a completely mechanical process. That really does bear thinking about. It is amazing what results LLMs are able to achieve, but let's not kid ourselves and start throwing around terms like AGI or emergence just yet. Doing so makes a mechanical process seem magical (as do computers in general).

I should add that it also makes sense why this works: just look at the volume of human knowledge in the training data. It is the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language, and intellect, that does the heavy lifting.
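To make the "completely mechanical" point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the vocabulary, embedding values, and weight matrix are not taken from any real model), but next-token prediction in a real LLM is the same kind of computation: multiplication, addition, and exponentiation, every step of which you could carry out with pen and paper.

    # A toy "next-token prediction" done with nothing but arithmetic.
    # Vocabulary and weights are made up for illustration; a real LLM
    # is the same kind of computation, just with billions of weights.
    import math

    vocab = ["the", "cat", "sat", "mat"]

    # one embedding vector per token (hypothetical values)
    embed = {
        "the": [0.1, 0.3],
        "cat": [0.7, 0.2],
        "sat": [0.4, 0.9],
        "mat": [0.6, 0.5],
    }

    # a single weight matrix standing in for the whole network
    W = [[0.2, 0.8, 1.5, 0.1],
         [0.9, 0.1, 1.3, 0.7]]

    def next_token(token):
        x = embed[token]
        # matrix multiply: each step is plain multiplication and
        # addition, exactly what you could do on paper
        logits = [x[0] * W[0][j] + x[1] * W[1][j]
                  for j in range(len(vocab))]
        # softmax: exponentiate and normalise, still just arithmetic
        exps = [math.exp(v) for v in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        # pick the most probable token; no step here involves
        # anything but numbers
        return vocab[probs.index(max(probs))]

    print(next_token("cat"))  # prints "sat" under these toy weights

A real model differs from this sketch only in scale, billions of weights instead of ten, not in kind.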


Replies

zahlman | today at 4:28 PM

> LLMs do not think, understand, reason, reflect, or comprehend, and they never shall. ... It is amazing what results LLMs are able to achieve. ... it also makes sense why this works: just look at the volume of human knowledge

Not so much amazing as bewildering that certain results are possible in spite of a lack of thinking etc. I find it highly counterintuitive that simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.

senordevnyc | today at 4:17 PM

I’m curious what your mental model is for how human cognition works. Is it any less mechanical in your view?
