
int_19h · today at 7:25 AM

Speaking as someone who thinks the Chinese Room argument is an obvious case of begging the question: GP's point isn't about that. They're not saying that LLMs don't have world models - they're saying that those world models are not grounded in the physical world, and thus the models cannot properly understand what they talk about.

I don't think that's true anymore, though. All the SOTA models are multimodal now, meaning they are trained on images and videos as well, not just text - and they do that precisely because it improves the text output too, for this exact reason. Already, I don't have to waste time explaining to Claude or Codex what I want on a webpage: I can just sketch a mock-up, or, when there's a bug, take a screenshot and circle the bits that are wrong. But this extends to the ability to reason about the real world as well.


Replies

nosianu · today at 9:51 AM

I would argue that is still just symbols. A physical model requires a lot more. For example, the way babies and toddlers learn is heavy on interaction with objects and the world. We know that those who have less of that kind of experience in early childhood do less well later. We know that many of today's children, kept quiet and sedated with interactive screens, are at a disadvantage. What if you made this even more extreme: a brain without the ability to interact with anything, trained entirely passively? Even our much more complex brains have trouble building a good model under those conditions.

You also need more than one simple brain structure repeated many times over. Our brains have many different parts and structures, not just a single type.

However, just as our airplanes do not resemble the flapping-wing bird flight that the early dreamers of human flight imagined, I also do not see a need for our technology to fully reproduce the original.

We are better off following our own tech path and seeing where it leads. It will be something else, and that's fine - anyone can already create a new human brain without education or tools, with just some sex, and let it self-assemble.

Biology is great and all, but it's also pretty limited and extremely path-dependent. Just look at all the materials we have already managed to create that nature would never make. Going off the already-trodden bio-path should be good; we can create a lot of very different things. Those won't be brains like ours that "feel" like ours, if that word ever even applies, and that's fine and good. Our creations should explore entirely new paths. All these comparisons to the human experience make me sad; let's evaluate our products on their own merits.

One important point:

If you truly want a copy of the human experience in tech, partial or full, you need to look at the physics, not at some meta stuff like "text"!

The physical structure and the electrical signals in the brain - THAT is us. And the electrical signals in chips, and what they represent, are so completely and utterly different from what can be found in the brain that THAT is the much more important argument against silly human-"AGI" comparisons. We don't have a CPU and RAM. We have massively parallel waves of electrical signals moving through a very complex structure.

Humans are hung up on words. We even have fantasy stories that are all about it: you say some word, magic happens; you know somebody's "true name", you control them.

But the brain works on a much lower, deeply physical level. We don't even need language. A human without language and an "inner voice" is still a human with the same complex brain, just much worse at communication.

LLMs are all about the surface layer of that one particular human ability, though. And again, that is fine, but it has nothing to do with how our brains work. We looked at nature, were inspired, and went and created something else. As always.