
madaxe_again yesterday at 7:21 AM (2 replies)

The vast majority of human “thinking” is autocompletion.

Any thinking that happens with words is fundamentally no different from what LLMs do, and everything you say applies to human lexical reasoning.

One plus one equals two. Do you have a concept of one-ness, or two-ness, beyond symbolic assignment? Does a cashier possess number theory? Or are these just syntactical stochastic rules?
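A toy sketch of the contrast in question (my illustration, not a claim about how LLMs are actually implemented; the rewrite table and the object lists are invented):

    # Purely syntactic "arithmetic": rewriting symbols by memorized rules.
    # The table is invented for illustration; no quantity is represented,
    # only string-to-string associations.
    REWRITE = {("one", "one"): "two", ("one", "two"): "three"}

    def syntactic_add(a, b):
        # Succeeds or fails purely on whether this pair was memorized.
        return REWRITE[(a, b)]

    # Grounded "arithmetic": counting concrete things.
    def grounded_add(xs, ys):
        # The answer comes from the collections themselves,
        # not from a memorized association.
        return len(xs) + len(ys)

    print(syntactic_add("one", "one"))        # "two": symbol shuffling
    print(grounded_add(["apple"], ["pear"]))  # 2: counting objects

Whether human arithmetic is closer to the first function or the second is exactly the question being asked.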

I think the problem here is the definition of “thinking”.

You can point to non-verbal models, like vision models - but again, these aren’t hugely different from how we parse non-lexical information.


Replies

gloosx yesterday at 8:54 AM

> Any thinking that happens with words is fundamentally no different from what LLMs do.

This is such a wildly simplified and naive claim. "Thinking with words" happens inside a brain, not inside a silicon circuit with artificial neurons bolted in place. The brain is plastic; it is never the same from one moment to the next. It does not require structured input, labeled data, or predefined objectives in order to learn "thinking with words." It performs continuous, unsupervised learning from chaotic sensory input, and its complexity and efficiency are orders of magnitude beyond those of LLM inference. Current models barely scratch the surface.

> Do you have a concept of one-ness, or two-ness, beyond symbolic assignment?

Obviously we do. The human brain's idea of "one-ness" or "two-ness" is grounded in sensory experience — seeing one object, then two, and abstracting the difference. That grounding gives meaning to the symbol, something LLMs don't have.

notepad0x90 yesterday at 1:52 PM

We do a lot of autocompletion, and LLMs overlap with that, for sure. But I'm not sure about the "vast majority": even basic operations like keeping us breathing or releasing the right hormones are not guesses but deterministic, algorithmic ops. Things like object recognition and speech might qualify as autocompletion. But say you need to set up health monitoring for an application. That's not an autocomplete operation: you must evaluate various options, form opinions about them, weigh priorities, etc. In other words, we do autocompletion, but even then it is a basic building block, a tool we use in constructing more complex decision logic.
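To make "evaluate options, weigh priorities" concrete, here is a minimal weighted-scoring sketch; the options, criteria, and weights are all made up for illustration:

    # Toy multi-criteria evaluation. Every name and number here is
    # a hypothetical placeholder, not a real recommendation.
    options = {
        "managed_saas": {"cost": 2, "setup_effort": 9, "control": 4},
        "self_hosted":  {"cost": 8, "setup_effort": 3, "control": 9},
        "cloud_native": {"cost": 5, "setup_effort": 7, "control": 6},
    }
    priorities = {"cost": 0.3, "setup_effort": 0.3, "control": 0.4}

    def score(criteria):
        # Weighted sum: each criterion scaled by how much we care about it.
        return sum(priorities[c] * v for c, v in criteria.items())

    best = max(options, key=lambda name: score(options[name]))
    print(best)  # the pick depends entirely on the chosen weights

The point is that the weights themselves come from opinions and context, which is where this stops looking like next-token autocomplete.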

If you train an animal to type the right keys on a keyboard so that it produces a hello-world program, you haven't taught it how to code; it has just memorized the key sequence that leads to its reward. A human programmer understands the components of the code, the intent and expectations behind it, and can reason about how changes would affect outcomes. The animal just knows how the reward can be obtained most reliably.
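A tiny contrast in the same spirit (my sketch; the greet helper is invented for illustration):

    # Memorization: one fixed key sequence, valid for exactly one reward.
    memorized_keys = 'print("hello world")'

    # Understanding: a programmer can vary the intent and still
    # predict the outcome of the change.
    def greet(name):
        return f"hello {name}"

    print(memorized_keys)     # the literal string the animal learned to type
    print(greet("world"))     # hello world
    print(greet("reviewer"))  # generalizes; the memorized string cannot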