A sufficiently good simulation of understanding is functionally equivalent to understanding.
At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.
> A sufficiently good simulation of understanding is functionally equivalent to understanding.
This is just a thing to say that has no substantial meaning.
- What does "sufficiently" mean?
- What does "functionally equivalent" mean?
- And what even is "understanding"?
All just vague hand waving. We're not philosophizing here, we're talking about practical results, and clearly, in the current context, it does not deliver in that area.
> At that point, the question of whether the model really does understand is pointless.
You're right, it is pointless, because you are suggesting something that doesn't exist. The current models cannot understand.
That's the point though: it's not sufficient. Not even slightly. It constantly makes obvious mistakes and cannot keep things coherent.
I was going to explicitly mention your point but deleted it because I thought people would be able to understand.
This is not a philosophy/theology seminar where we sit around handwringing about "oh but would a sufficiently powerful LLM be able to dance on the head of a pin". We're talking about a thing that actually exists, that you can actually test. In a whole lot of real-world scenarios you throw at it, it fails in strange and unpredictable ways. Ways it will swear up and down it did not do. It will lie to your face. It's convincing. But then it will lose at chess, it will fuck up running a vending machine business, it will get lost coding and reinvent the same functions over and over, it will give completely nonsensical answers to crossword puzzles.
This is not an unlimited intelligence; it is a deeply flawed two-year-old that just so happens to have read the entire output of human writing. It's a fundamentally different mind from ours, and it makes different mistakes. It sounds convincing and yet fails, constantly. It will give you a four-step explanation of how it's going to do something, then fail to execute four simple steps.
In the movie Catch Me If You Can, Leonardo DiCaprio's character wears a surgeon's gown and confidently says "I concur".
What I'm hearing here is that you would be willing to get your surgery done by him rather than by one of the real doctors, so long as he is capable of pronouncing enough doctor-sounding phrases.