One thing that struck me recently is that LLMs are necessarily limited by what's expressible with existing language. How can this ever result in AGI? A lot of human progress required inventing new language to represent new ideas and concepts. An LLM's only experience of the world is what can be expressed in words. Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that are beyond the ability of an LLM to ever experience, because it's stuck in a purely conceptual, silicon prison.
I don't know why you would think that the model can't create new language. That is a trivial task. For example, I asked GPT5 to read the news and coin a new word:
Wattlash /ˈwɒt-læʃ/
n. The fast, localized backlash that erupts when AI-era data centers spike electricity demand—triggering grid constraints, siting moratoriums, bill-shock fears, and, paradoxically, a rush into fixes like demand-response deals, waste-heat reuse, and nuclear/fusion PPAs.
They experience the world through tokens, which can carry more information than just words. Images can be tokenized, and so can sounds, pressure-sensor readings, and so on.
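To make that concrete, here's a minimal sketch of how an image can be turned into tokens, roughly in the style of a Vision Transformer patch embedding. The sizes (224x224 image, 16x16 patches, 768-dim embeddings) and the random projection are illustrative assumptions, not any particular model's implementation:

```python
import numpy as np

# Minimal illustration of ViT-style image "tokenization":
# split an image into fixed-size patches, flatten each patch,
# and project it into the same embedding space that text tokens live in.

def image_to_tokens(image: np.ndarray, patch: int = 16, d_model: int = 768) -> np.ndarray:
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must divide evenly into patches"

    # Cut the image into non-overlapping patch x patch tiles and flatten each one.
    patches = (
        image.reshape(h // patch, patch, w // patch, patch, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, patch * patch * c)   # (num_patches, patch*patch*c)
    )

    # In a real model this projection is learned; random weights here just show the shapes.
    projection = np.random.randn(patch * patch * c, d_model) * 0.02
    return patches @ projection                # (num_patches, d_model)

tokens = image_to_tokens(np.random.rand(224, 224, 3))
print(tokens.shape)  # (196, 768): 196 image "tokens" in the same space as word embeddings
```

Once projected, those image tokens sit in the same sequence as word tokens, which is the sense in which a multimodal model "sees" rather than reads a description.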
> Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that are beyond the ability of an LLM to ever experience because it's stuck in a purely conceptual, silicon prison.
I just don't think that's true. People used to say this kind of thing about computer vision: a computer can't really see things, it can only compute formulas on pixels, and "does this picture contain a dog" obviously isn't a mathematical formula. Turns out it is!