I mean, at some point, what difference does this make? We can split hairs about whether it 'really understands' the thing, and maybe that's an interesting side-topic to talk about on these forums, but the behavior and outputs of the model are what really matter to everyone else, right?
Maybe it doesn't 'understand' in the experiential, qualia-laden way that a human does. Sure. But 'understanding' is still a valid and useful analogy for these models, because they emulate something close enough to it; so much so that now, when they stop doing it, that's what becomes the topic of conversation, not the other way around.
When people talk about an LLM “not understanding,” you’re apparently taking it to be similar to someone saying a fish doesn’t “understand” the concept of captivity, or a dog doesn’t “understand” playing fetch. As if the person is somehow narrowly defining it based on their own belief system and, like, dude, what is consciousness anyway?
That’s not what’s happening. When it’s said that an LLM doesn’t understand, it’s meant in the “calculator doesn’t understand taxes” or “a pachinko machine doesn’t understand probability” way. The conversation itself is silly.