Hacker News

gloosx · yesterday at 8:54 AM

> Any thinking that happens with words is fundamentally no different from what LLMs do.

This is such a wildly simplified and naive claim. "Thinking with words" happens inside a brain, not inside a silicon circuit with artificial neurons bolted in place. The brain is plastic; it is never the same from one moment to the next. It does not require structured input, labeled data, or predefined objectives to learn "thinking with words"; it performs continuous, unsupervised learning from chaotic sensory input. Its complexity and efficiency are orders of magnitude beyond those of LLM inference, and current models barely scratch the surface.

> Do you have a concept of one-ness, or two-ness, beyond symbolic assignment?

Obviously we do. The human brain's idea of "one-ness" or "two-ness" is grounded in sensory experience — seeing one object, then two, and abstracting the difference. That grounding gives meaning to the symbol, something LLMs don't have.


Replies

gkbrk · yesterday at 10:31 AM

LLMs are increasingly trained on images for multi-modal learning, so they too would have seen one object, then two.
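As a rough illustration of what that multi-modal exposure looks like in practice, a contrastive vision-language model such as CLIP can be probed on counting directly. A minimal sketch, assuming the openai/clip-vit-base-patch32 checkpoint from Hugging Face transformers and a local apples.jpg stand-in image:

    # Probe whether an image-text model separates "one" from "two" visually.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("apples.jpg")  # stand-in image; any photo of objects works
    captions = ["a photo of one apple", "a photo of two apples"]

    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)  # image-caption similarity
    print(dict(zip(captions, probs[0].tolist())))

Whether the resulting probabilities reflect genuine "two-ness" or just caption statistics is exactly the grounding question raised above.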

madaxe_again · today at 7:08 AM

The instantiation of models in humans is not unsupervised, and language, for instance, absolutely requires labelled data and structured input. The predefined objective is “expand”.

See also: feral children.
