Hacker News

Marha01 · yesterday at 4:46 PM

It is magical thinking to claim that LLMs are definitely, physically incapable of thinking. You don't know that. No one knows that, since such large neural networks are opaque black boxes that resist interpretation, and we don't really know how they function internally.

You are just repeating that because you read it somewhere else before. Like a stochastic parrot. Quite ironic. ;)


Replies

tovej · yesterday at 8:20 PM

They really aren't that mysterious. We can confidently say that they function at the lexical level, using Monte Carlo principles to carve out a likely path through lexical space. The output depends on the distribution of n-grams in the training set and the composition of the text in its context window.
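The sampling step described above can be sketched in a few lines. This is a toy bigram model, not an actual LLM: the vocabulary and probability table are invented for illustration, and a real model computes its next-token distribution from billions of learned weights rather than a lookup table. But the control flow is the same: repeatedly sample the next token from a probability distribution conditioned on the context.

```python
# Toy sketch of autoregressive sampling (the "Monte Carlo" step).
# Vocabulary and probabilities are made up; a real LLM derives its
# distribution from learned weights, not a hand-written table.
import random

vocab = ["the", "cat", "sat", "mat", "dog"]

def next_token_probs(context):
    # Fake distribution conditioned only on the last token (a bigram model).
    table = {
        "the": [0.05, 0.45, 0.05, 0.25, 0.20],
        "cat": [0.10, 0.05, 0.70, 0.05, 0.10],
        "sat": [0.80, 0.05, 0.05, 0.05, 0.05],
    }
    return table.get(context[-1], [0.2] * 5)

def sample(context, steps, seed=0):
    rng = random.Random(seed)
    out = list(context)
    for _ in range(steps):
        probs = next_token_probs(out)
        # Sample one token from the distribution and extend the context.
        out.append(rng.choices(vocab, weights=probs)[0])
    return out

print(sample(["the"], 4))
```

Each run with a different seed carves out a different "likely path" through the space of token sequences, which is the sense in which generation is stochastic.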

This process cannot produce reasoning.

1) An LLM cannot represent the truth value of statements, only their likelihood of appearing in its training data.

2) Because it operates on lexical data, an LLM will answer differently depending on the names and terms used in a prompt.

Both of these facts contradict the idea that the LLM is reasoning, or "thinking".
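Point 1 can be made concrete with a toy frequency model. The corpus and sentences below are invented for the example, and real LLMs generalize far beyond raw string counts, but the underlying issue is the same: a likelihood score tracks how often something appears in the training data, not whether it is true.

```python
# Toy illustration: a frequency-based model scores statements by corpus
# frequency, not truth. The corpus here is deliberately skewed so a false
# statement is more common than a true one.
from collections import Counter

corpus = [
    "the sun rises in the east",
    "the sun rises in the east",
    "the sun rises in the west",  # false, but appears three times
    "the sun rises in the west",
    "the sun rises in the west",
]

counts = Counter(corpus)
total = sum(counts.values())

def likelihood(sentence):
    # "Probability" here is simply relative frequency in the corpus.
    return counts[sentence] / total

print(likelihood("the sun rises in the west"))  # 0.6 -- more likely
print(likelihood("the sun rises in the east"))  # 0.4 -- less likely
```

On this corpus the false statement gets the higher score, because nothing in the objective distinguishes "frequent" from "true".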

This isn't really a very hot take, either; I don't think I've talked to a single researcher who believes that LLMs are thinking.