I think you've missed the concept here.
You exist in the full experience. That lossy projection to words is still meaningful to you, in your reading, because you already know the experience it's referencing. What do I mean by "lossy projection"? It's the mapping from the experience of seeing the color blue down to the word "blue". The word "blue" is meaningless without already having had the experience, because the word isn't a description of the experience, it's a label. The experience itself can't be adequately described, as you'll find if you try to explain "blue" to a blind person, because it exists outside of words.
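If it helps, here's the "lossy" part in code terms. This is purely a toy example of my own (the RGB stand-ins and the name_of() mapping are made up for illustration, not how any model actually works): many distinct inputs collapse into one label, and the label alone can't recover what was thrown away.

```python
# Toy illustration of a lossy projection: distinct inputs (stand-in RGB values)
# all collapse to the single label "blue", and the differences are gone.
def name_of(rgb):
    r, g, b = rgb
    if b > r and b > g:
        return "blue"
    if r > g and r > b:
        return "red"
    return "other"

shades = [(10, 20, 200), (40, 90, 230), (0, 0, 120)]  # three distinct "experiences"
print({name_of(s) for s in shades})  # {'blue'} -- one label, three experiences lost
```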
The concept here is that something like an LLM, trained on human text, can't have meaningful comprehension of some concepts, because some words are labels for things that exist entirely outside of text.
You might say "but multimodal models use tokens for color!", or even extend that to "you could replace the tokens used in multimodal models with color names!", and I would agree. But the understanding wouldn't come from the relation of words in human text; it would come from the positional relation of colors across a space, which is not much different from our experience of the color on our retina.
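To make that concrete, here's a toy sketch with plain RGB coordinates standing in for whatever space a real model learns (the color names and values here are my own placeholders): the relation between colors falls out of where they sit in the space, with no text involved at all, and the names are just labels bolted on afterwards.

```python
# Similarity from positions in a color space, not from word co-occurrence.
import math

colors = {
    "navy":   (0, 0, 128),
    "royal":  (65, 105, 225),
    "tomato": (255, 99, 71),
}

def dist(a, b):
    return math.dist(colors[a], colors[b])

print(dist("navy", "royal"))   # small: the two blues sit near each other
print(dist("navy", "tomato"))  # large: red is far away in this space
```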
tldr: to get AI to meaningfully understand something, you have to give it a meaningful relation. Meaningful relations sometimes aren't present in human writing.