> A sufficiently good simulation of understanding is functionally equivalent to understanding.
This is just an empty phrase with no substantial meaning.
- What does "sufficiently" mean?
- What does "functionally equivalent" mean?
- And what even is understanding?
All just vague hand-waving.

We're not philosophizing here; we're talking about practical results, and clearly, in the current context, it does not deliver in that area.
> At that point, the question of whether the model really does understand is pointless.
You're right, it is pointless, because you are suggesting something that doesn't exist. And the current models cannot understand.
The current models obviously understand a lot. They would easily understand your comment, for example, and give an intelligent answer in response. The whole "the current models cannot understand" mantra is more religious than anything.
> We're not philosophizing here; we're talking about practical results, and clearly, in the current context, it does not deliver in that area.
Except it clearly does, in a lot of areas. You can't take a 'practical results trump all' stance and come out of it saying LLMs understand nothing. They understand a lot of things just fine.