Please stop referring to the Chinese Room - it's just magical/deist thinking in disguise. It postulates that humans have a way of 'understanding' things that is impossible to replicate mechanically.
The fact that philosophy hasn't recognized and rejected this argument on that basis speaks volumes about the quality of arguments accepted there.
(That doesn't mean LLMs are or will be AGI; it's just that this argument is tautological and meaningless.)
> The fact that philosophy hasn't recognized and rejected this argument on that basis speaks volumes about the quality of arguments accepted there.
That's one possibility. The other is that your pomposity and dismissiveness towards the entire field of philosophy speaks volumes about how little you know about either philosophical arguments in general or this philosophical argument in particular.
It is still relevant because it hasn't been disproven yet. So far all computer programs are Chinese Rooms, LLMs included.
"Please stop referring to this thought experiment because it has possible interpretations I don't personally agree with"
The human way of understanding things can be replicated mechanically, because it is mechanical in nature. The contents of your skull are an existence proof of AGI.
That some people use the Chinese Room to ascribe some magical properties to human consciousness says more about the person drawing that conclusion than the thought experiment itself.
I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether those come from a programming language or from language training data.
If the answer is no, that shouldn't lead to a deist conclusion. It can just as easily lead to the conclusion that a non-deterministic Turing machine is required.