Hacker News

pessimizer · yesterday at 6:16 PM

People have an actual world model, though, one they have to deal with in order to get food into their mouths or to hit the toilet properly.

The "facts" that they believe that may be nonsense are part of an abstract world model that is far from their experience, for which they never get proper feedback (such as the political situation in Bhutan, or how their best friend is feeling.) In those, it isn't surprising that they perform like an LLM, because they're extracting all of the information from language that they've ingested.

Interestingly, the feedback people use to adjust the language-derived portions of their world models is whether demonstrating their understanding of those models pleases or displeases the people around them, who in turn respond in physically confirmable ways. What irritates people about simpering LLMs is that they don't do this properly. They should be testing their knowledge against us (especially their knowledge of our intentions or goals), with some fear of failure. They have no fear and take no risk; they're stateless and empty.

Human abstractions are grounded in the reality of the physical responses of the people around them. Those responses are true and valid results of articulating the abstractions. The content is irrelevant; when there's no opportunity to act, we're just acting as carriers.


Replies

jacquesm · yesterday at 6:32 PM

> Human abstractions are based in the reality of the physical responses of the people around them.

And in the physical responses of the world around them. That empiricism is the foundation of all of science, and if you throw it out, the end result is gibberish.
