Hacker News

TeMPOraL · 01/16/2026 · 1 reply

All true, but note I didn't make any claims about the internal mechanics of LLMs here - only about their observable, external behavior and the nature of the training process.

Do consider, however, that even the "formal part that uses words" of human communication, i.e. language, is strongly correlated with our experience of the real world. Things people write aren't arbitrary. Languages aren't arbitrary. The words we use, their structure, the similarities across languages and topics, turns of phrase, the things we say and the things we don't say, even the greatest lies - they all carry information about the world we live in. It's not unreasonable to expect a training process as broad and intense as that of LLMs to pick up on that.

I said nothing about internals earlier, but I'll say it now: LLMs do actually form a "deep model of the real world", at least in terms of concepts and abstractions. That was empirically demonstrated roughly two years ago - see, e.g., the interpretability research from Anthropic, in which they literally locate distinct concepts within the neural network, observe their relationships, and even suppress or amplify them on demand. That ship has already sailed; it's surprising to see people still claim LLMs don't do concepts or don't have internal world models.
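(For the curious: the suppress/amplify part amounts to nudging the network's activations along a "concept direction" in activation space. Here is a minimal sketch of that idea in PyTorch - the toy model and the random concept vector are made up purely for illustration; the actual research extracts feature directions from a production LLM with sparse autoencoders, it doesn't pick them at random.)

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    HIDDEN = 16

    # Toy stand-in for one block of a much larger network.
    model = nn.Sequential(nn.Linear(8, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 4))

    # Hypothetical unit vector standing in for a "concept" in activation space.
    # (In the real research this direction is learned, not random.)
    concept = torch.randn(HIDDEN)
    concept = concept / concept.norm()

    def steer(alpha):
        # Forward hook: shift the layer's activations along the concept
        # direction. alpha > 0 amplifies the concept, alpha < 0 suppresses it.
        def hook(module, inputs, output):
            return output + alpha * concept
        return hook

    x = torch.randn(1, 8)
    print("baseline :", model(x))

    handle = model[1].register_forward_hook(steer(3.0))  # amplify
    print("steered  :", model(x))
    handle.remove()

The real experiments do the analogous thing on a transformer's residual stream, which is how you get effects like a model that can't stop talking about the Golden Gate Bridge.)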


Replies

nosianu · 01/19/2026

> but note I didn't make any claims on internal mechanics of LLMs here

Great - neither did I!

Not a single word about any internals anywhere in my comment!