Without getting into theory of mind it's difficult to elaborate, and I don't have the time or the will for that. But the short version is that thinking is interconnected with BEING as well as will, and the Agent has neither, in a philosophically formal sense. The agent is deterministically bound. So it is a fancy Rube Goldberg machine that outputs letters in a way that creates the impression of thought, but it is not thought - in the same way that some birds can mimic human speech without the slightest grasp of the words' or sentences' meaning, underlying grammar, connotations, subtext, context, intended use, likely effect, and so on. Is speech speech if the speaker has no concept whatsoever of its content, and cannot use it to actualize itself? I'd say no. It's mimicry, but not speech. Which means speech is something more than just its outward aspect - the words. It is the relation of something invisible, some inner experience known only to the speaker, VIA the words.
Whereas a gorilla who learns sign language to communicate, and uses that communication to achieve aims that correlate directly with its sense of self - that's thought in the Cogito, Ergo Sum sense of the word.
Thought as commonly conceived by the layman is a sort of isolated phenomenon, mechanical in nature, that can be judged by its outward effects; whereas in the philosophical tradition, defining thought is known to be one of the hard questions, precisely because of its mysterious quality of being interconnected with will and being, as described above.
Guess I gave you the long answer. (though, really, it could be much longer than this.) The Turing Test touches on this distinction between the appearance of thought and actual thought.
The question goes all the way down to metaphysics; some (such as myself) would say that one must be able to define awareness (what some call consciousness - though I think that term is too loaded) before one can define thought. In fact that question is at the heart of the Western philosophical tradition, and consensus remains elusive after all these thousands of years.
For practical everyday uses, does it really matter whether it is "real thinking" or just really good "artificial thinking" with the same results? The machine can use artificial thinking to reach desired goals and outcomes, so for me it's the kind of thinking I would want from a machine.
"To escape the paradox, we invoke what we call the “Homunculus Defense”: inside every human is a tiny non-stochastic homunculus that provides true understanding. This homunculus is definitionally not a stochastic parrot because:
1. It has subjective experience (unprovable but assumed)
2. It possesses free will (compatibilist definitions need not apply)
3. It has attended at least one philosophy seminar"[1]
It seems pretty clear to me though that being good at intellectual tasks / the sort of usefulness we ascribe to LLMs doesn't strongly correlate with awareness.
Even just within humans - many of the least intellectually capable humans seem to have a richer supply of the traits associated with awareness/being than some of the allegedly highest-functioning.
On average you're far more likely to get a sincere hug from someone with Down's syndrome than from a multi-millionaire.
But I'm more interested in this when it comes to the animal kingdom, because while ChatGPT is certainly more useful than my cat, I'm also pretty certain that it's a lot less aware. Meaningful awareness - feelings - seems to be an evolutionary adaptation of K-strategist vertebrates. Having a small number of offspring and being biologically wired to care for them has huge implications for your motivation as an animal, and it's reasonable to think that a lot of our higher emotions are built on hardware originally evolved for that purpose.
(The evolutionary origins of that are somewhat murky - to what extent mammals and birds reuse capabilities developed by a much earlier common ancestor, versus having evolved them entirely in parallel, isn't known afaik - but birds do seem to exhibit a similar set of emotional states to mammals, that much is true.)
The obvious counterargument is that a calculator doesn't experience one-ness, but it still does arithmetic better than most humans.
Most people would accept that being able to work out 686799 x 849367 is a form of thinking, albeit an extremely limited one.
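To make the point concrete (my own toy sketch, not anything from an actual calculator's circuitry): schoolbook long multiplication is pure symbol shuffling. Digits go in, digits come out, and nowhere does the procedure need any notion of quantity:

    # Long multiplication as symbol manipulation: the procedure only ever
    # shuffles single digits; it has no concept of the numbers involved.
    def long_multiply(a: str, b: str) -> str:
        digits_a = [int(ch) for ch in reversed(a)]
        digits_b = [int(ch) for ch in reversed(b)]
        result = [0] * (len(digits_a) + len(digits_b))
        for i, da in enumerate(digits_a):
            carry = 0
            for j, db in enumerate(digits_b):
                total = result[i + j] + da * db + carry
                result[i + j] = total % 10
                carry = total // 10
            result[i + len(digits_b)] += carry
        while len(result) > 1 and result[-1] == 0:  # strip leading zeros
            result.pop()
        return "".join(str(d) for d in reversed(result))

    print(long_multiply("686799", "849367"))  # 583344406233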
First flight simulators, then chess computers, then Go computers, then LLMs: the same principle extended to much higher levels of applicability and complexity.
Thinking in itself doesn't require mysterious qualia. It doesn't require self-awareness. It only requires a successful mapping between an input domain and an output domain. And it can be extended with meta-thinking, where a process can make decisions and explore possible solutions in a bounded space - starting with if statements and ending (currently) with agentic feedback loops.
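A deliberately trivial sketch of that escalation (illustrative only - the function names and the scoring setup are made up for the example): level one is a fixed input-to-output mapping, level two searches a bounded space of candidates and keeps whichever one a feedback function rates best:

    def think(x: int) -> str:
        # Level 1: a fixed input -> output mapping (the humble if statement).
        return "even" if x % 2 == 0 else "odd"

    def meta_think(candidates, score, budget):
        # Level 2: explore a bounded space of candidate answers,
        # keeping whichever one the feedback function rates best.
        best, best_score = None, float("-inf")
        for n, candidate in enumerate(candidates):
            if n >= budget:  # the "bounded" part
                break
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
        return best

    print(think(7))                                              # odd
    print(meta_think(range(1000), lambda c: -abs(c - 42), 100))  # 42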
Sentience and self-awareness are completely different problems.
In fact it's likely that with LLMs we have off-loaded some of our cognitive techniques to external hardware. With writing we off-loaded memory; with computing, basic algorithmic operations; and now with LLMs, some basic elements of synthetic exploratory intelligence.
These machines are clearly useful, but so far the only reason they're useful is because they do the symbol crunching, we supply the meaning.
From that point of view, nothing has changed. A calculator doesn't know the meaning of addition, and an LLM doesn't need to know the meaning of "You're perfectly right." As long as they juggle symbols in ways we can bring meaning to - the core definition of machine thinking - they're still "thinking machines."
It's possible - I suspect likely - that they're only three steps away from mimicking sentience. What's needed is long-term memory, dynamic training so the model is constantly updated and self-correcting in real time, and inputs from a wide range of physical sensors.
At some point fairly soon robotics and LLMs will converge, and then things will get interesting.
Whether or not they'll have human-like qualia will remain unknowable. They'll behave and "reason" as if they do, and we'll have to decide how to handle that. (Although more likely they'll decide that for us.)