Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.
Obligatory Blindsight recommendation for intelligence != consciousness.
That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.
I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.
The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.
Wrong based on what criteria? Or are we just moving the goalposts because we are uncomfortable with the idea that neural networks might be conscious?
If a single-cell organism moves toward light and away from a rock, we say it’s aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? By the criteria applied to one, the other is aware too. If there is some other criterion (say we find out the Roomba doesn’t sense the wall but has a map of the room and is using GPS and a programmed route), then a criterion like “no fixed programs that relate to data outside of the system” would justify saying the Roomba isn’t “aware”.
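That criterion can be made concrete with a toy sketch (hypothetical code, not any real Roomba API): two agents that trace the identical path, where only one of them actually responds to data outside the system.

```python
# Two agents that produce the same observable behavior via different mechanisms:
# a "reactive" agent whose moves depend on sensed data outside the system,
# and a "scripted" agent that replays a fixed program and ignores its sensors.

def reactive_step(position, light_at):
    """Move one unit toward whichever neighbor is brighter (phototaxis)."""
    left, right = light_at(position - 1), light_at(position + 1)
    return position + (1 if right >= left else -1)

def scripted_step(route, t):
    """Ignore sensors entirely; replay a pre-programmed route."""
    return route[t]

# A 1-D world where light intensity peaks at x = 10.
light = lambda x: -abs(x - 10)

# The reactive agent climbs the light gradient from x = 0...
reactive_path = []
pos = 0
for _ in range(5):
    pos = reactive_step(pos, light)
    reactive_path.append(pos)

# ...while the scripted agent just plays back a stored route.
scripted_path = [scripted_step([1, 2, 3, 4, 5], t) for t in range(5)]

print(reactive_path)  # [1, 2, 3, 4, 5]
print(scripted_path)  # [1, 2, 3, 4, 5]
# Identical trajectories, but only the first is coupled to the outside world:
# move the light source and the reactive agent follows; the script does not.
```

The point of the sketch: behavior alone cannot distinguish the two, so any criterion for "awareness" has to look at the mechanism, not just the trajectory.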
> But they also prove that intelligence != consciousness.
They prove no such thing. We can't even prove consciousness in other humans.
https://en.wikipedia.org/wiki/Problem_of_other_minds