The obvious counterargument is that a calculator doesn't experience one-ness, but it still does arithmetic better than most humans.
Most people would accept that being able to work out 686799 x 849367 is a form of thinking, albeit an extremely limited one.
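To be concrete about what the calculator side of that looks like, here's the same sum as one line of Python - a fixed, deterministic mapping from input symbols to output symbols:

```python
# The calculator's version of "thinking": a deterministic symbol-to-symbol mapping.
print(686799 * 849367)  # 583344406233
```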
First flight simulators, then chess computers, then Go engines, and now LLMs: the same principle extended to much higher levels of applicability and complexity.
Thinking in itself doesn't require mysterious qualia. It doesn't require self-awareness. It only requires a successful mapping between an input domain and an output domain. And it can be extended with meta-thinking, where a process can make decisions and explore possible solutions in a bounded space - starting with if statements and ending (currently) with agentic feedback loops.
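To make the "bounded exploration" point concrete, here's a deliberately trivial sketch in Python - no LLM, no framework, just the shape of a propose-check-revise loop that reacts to feedback and stops within a fixed budget:

```python
def guess_sqrt(n: int, budget: int = 32) -> int:
    """Find floor(sqrt(n)) by proposing candidates and reacting to feedback."""
    lo, hi = 0, n
    candidate = n // 2
    for _ in range(budget):                              # bounded search space
        if candidate * candidate <= n < (candidate + 1) ** 2:
            return candidate                             # feedback says: good enough, stop
        if candidate * candidate > n:
            hi = candidate - 1                           # too high: shrink the range
        else:
            lo = candidate + 1                           # too low: shift upwards
        candidate = (lo + hi) // 2                       # revise the proposal
    return candidate                                     # budget exhausted: best effort

print(guess_sqrt(49), guess_sqrt(50))  # 7 7
```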
Sentience and self-awareness are completely different problems.
In fact it's likely that with LLMs we have off-loaded some of our cognitive techniques to external hardware. With writing we off-loaded memory, with computing we off-loaded basic algorithmic operations, and now with LLMs we have off-loaded some basic elements of synthetic exploratory intelligence.
These machines are clearly useful, but so far the only reason they're useful is that they do the symbol crunching while we supply the meaning.
From that point of view, nothing has changed. A calculator doesn't know the meaning of addition, and an LLM doesn't need to know the meaning of "You're perfectly right." As long as they juggle symbols in ways we can bring meaning to - the core definition of machine thinking - they're still "thinking machines."
It's possible - I suspect likely - that they're only three steps away from mimicking sentience. What's needed is long-term memory, dynamic training so the model is constantly updated and self-corrected in real time, and inputs from a wide range of physical sensors.
At some point fairly soon robotics and LLMs will converge, and then things will get interesting.
Whether or not they'll have human-like qualia will remain unknowable. They'll behave and "reason" as if they do, and we'll have to decide how to handle that. (Although more likely they'll decide that for us.)
So if you don't have a long-term memory, you're not capable of sentience? Like the movie Memento, where the main character has to write everything down to remind himself later because he can't form new memories. That's pretty much what LLMs do when they use markdown documents to remember things.
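For what it's worth, that pattern is easy to sketch. This is a toy illustration - the file name and note format are made up, not any particular tool's convention:

```python
from pathlib import Path

MEMORY_FILE = Path("memory.md")  # hypothetical "Memento notebook"

def recall() -> str:
    """Read back whatever was written down in earlier sessions (empty on first run)."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

def remember(note: str) -> None:
    """Write a note down so a future session can 'remember' it."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {note}\n")

# Each new session starts by stuffing the notes back into the context window,
# because the model itself retains nothing between sessions.
prompt = "## Notes from previous sessions\n" + recall() + "\n## Current task\n..."
remember("User prefers answers in metric units.")
```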
Some of your points are lucid, some are not. For example, an LLM does not "work out" any kind of math problem using anything approaching reasoning; rather, it returns the string that is "most likely" to be correct, based on probabilities learned from its training data. Depending on the training data and the question being asked, that output could be accurate or absurd.
That's not of the same nature as reasoning your way to an answer.
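To illustrate that distinction with a toy example (the probabilities below are invented, not real model output): decoding just picks a high-probability continuation, which is only right when the learned distribution happens to line up with the truth.

```python
prompt = "What is 686799 x 849367? Answer:"

# Invented next-token probabilities standing in for a model's output distribution.
next_token_probs = {
    " 583344406233": 0.31,   # the correct product
    " 583344406223": 0.27,   # a near-miss that looks almost as plausible
    " 58334440623":  0.22,   # a dropped digit - still a "likely-looking" string
    " I'm not sure": 0.20,
}

# Greedy decoding: "answering" is just an argmax over learned probabilities,
# not reasoning your way to the product.
answer = max(next_token_probs, key=next_token_probs.get)
print(prompt + answer)
```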