It's unfortunate that there's so little mention of the Turing Test (none in the article, and just one comment here as of this writing). The whole premise of the paper that introduced it was that "Can machines think?" is so hard a question to define that you have to frame it differently. And it's ironic that we seem to talk about the Turing Test less than ever, now that systems almost everyone can access can arguably pass it.
>And it's ironic that we seem to talk about the Turing Test less than ever now that systems almost everyone can access can arguably pass it now.
Has everyone hastily agreed that it has been passed? Do people really argue that a human couldn't figure out they're talking to an LLM, given a user who knows that LLMs exist, is aware of their limitations, and can extend the chat log to infinity? ("Infinity" is a proxy here for any sufficient time: it could be minutes, days, months, or years.)
In fact, it is blindingly easy to make these systems fail the Turing Test at the moment: no human would have the patience to continue a conversation indefinitely without telling the person on the other side to kindly fuck off.
> “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ~ Edsger W. Dijkstra
The point of the Turing Test is that if there is no extrinsic difference between a human and a machine, the intrinsic difference is moot for practical purposes. It is not an argument as to whether a machine (via linear algebra, machine learning, large language models, or any other method) can think, or about what constitutes thinking or consciousness.
The Chinese Room thought experiment is a complement on the intrinsic side of the comparison: https://en.wikipedia.org/wiki/Chinese_room