The debate around whether transformer-architecture-based AIs can "think" is so exhausting, and I'm over it.
What's much more interesting is the question: "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?" Otherwise we go in endless circles about language and the meaning of words instead of discussing practical, demonstrable capabilities.
Without going to look up the exact quote, I remember an AI researcher years (decades) ago saying something to the effect of: biologists look at living creatures and wonder how they can be alive; astronomers look at the cosmos and wonder what else is out there; those of us in artificial intelligence look at computer systems and wonder how they can be made to wonder such things.
Don't be sycophantic. Disagree and push back when appropriate.
Come up with original thoughts and original ideas.
Have long-term goals that aren't programmed by an external source.
Do something unprompted.
The last one IMO is more complex than the rest, because LLMs are fundamentally autocomplete machines. But what happens if you don't give them any prompt? Can they spontaneously come up with something, anything, without any external input?
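That last question is at least easy to poke at empirically. Here is a minimal sketch of what "no prompt at all" looks like in practice, assuming the Hugging Face transformers library and GPT-2 purely as an illustrative model: the only input is the model's own beginning-of-sequence token, i.e. nothing beyond the signal "start generating".

```python
# Sketch only: sample from a causal LM given no prompt beyond the BOS token.
# "gpt2" and the sampling parameters are illustrative choices, not anything
# specific from this thread.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = torch.tensor([[tok.bos_token_id]])  # no external input, just "begin"
out = model.generate(
    input_ids,
    do_sample=True,          # sample rather than greedy-decode
    max_new_tokens=40,
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```

Whatever comes out is still just a draw from the unconditional distribution the training data induced, which is arguably the whole point of the question.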
> "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?"
Independent frontier maths research, i.e. coming up with and proving (preferably numerous) significant new theorems without human guidance.
I say that not because I think the task is special among human behaviours; rather, I think the mental faculties that mathematicians use to do such research are qualitatively the same ones all humans use in a wide range of behaviours that AI struggles to emulate.
I say it because it's both achievable (in principle, if LLMs can indeed think like humans) and verifiable. Achievable because it can be viewed as a pure text-generation task, and verifiable because we have well-established, robust ways of checking the veracity, novelty and significance of mathematical claims.
It needs to be frontier research maths because that requires genuinely novel insights. I don't consider tasks like IMO questions a substitute, as they involve extremely well-trodden areas of maths, so the possibility of an answer being reachable without new insight (by interpolating/recombining from vast training data) can't be excluded.
If this happens I will change my view on whether LLMs think like humans. Currently I don't think they do.
> Otherwise we go in endless circles about language and meaning of words
We understand thinking as being some kind of process. The problem is that we don't understand the exact process, so when we have these discussions the question is whether LLMs are using the same process or an entirely different one.
> instead of discussing practical, demonstrable capabilities.
This doesn't resolve anything, as you can reach the same outcome using a different process. It is quite possible that LLMs can do everything a thinking entity can do without thinking at all. Or maybe they actually are thinking. We don't know — but many would like to know.
Solve simple maths problems, for example the kind found in the game 4=10 [1].
It doesn't necessarily have to solve them reliably, since some of them are quite difficult, but LLMs are just comically bad at this kind of thing.
Any kind of novel-ish (can't just find the answers in the training data) logic puzzle like this is, in my opinion, a fairly good benchmark for "thinking".
Until an LLM can compete with a 10-year-old child at this kind of task, I'd argue that it's not yet "thinking". A thinking computer ought to be at least that good at maths, after all.
[1] https://play.google.com/store/apps/details?id=app.fourequals...
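For anyone unfamiliar with the game: as I understand it, each puzzle gives you four digits that you must combine with +, -, ×, ÷ and parentheses to make exactly 10 (some levels restrict which operators are allowed). Here is a minimal brute-force sketch of a solver, to show how mechanical the search space is; this is my own illustration, not anything from the app, and the example digits are hypothetical.

```python
# Sketch of a brute-force solver for 4=10-style puzzles: try every ordering
# of the digits, every operator choice, and every parenthesisation.
from fractions import Fraction
from itertools import permutations, product

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,  # Fraction raises ZeroDivisionError on /0
}

def reachable(digits, target=10):
    """True if the four digits (each used exactly once) can reach the target."""
    for a, b, c, d in permutations([Fraction(x) for x in digits]):
        for p, q, r in product(OPS.values(), repeat=3):
            # The five distinct parenthesisations of four operands.
            candidates = (
                lambda: r(q(p(a, b), c), d),   # ((a.b).c).d
                lambda: r(p(a, q(b, c)), d),   # (a.(b.c)).d
                lambda: p(a, r(q(b, c), d)),   # a.((b.c).d)
                lambda: p(a, q(b, r(c, d))),   # a.(b.(c.d))
                lambda: q(p(a, b), r(c, d)),   # (a.b).(c.d)
            )
            for expr in candidates:
                try:
                    if expr() == target:
                        return True
                except ZeroDivisionError:
                    pass
    return False

print(reachable([1, 1, 5, 8]))  # True: 8 / (1 - 1/5) == 10
print(reachable([1, 1, 1, 1]))  # False: 10 is out of reach
```

A few dozen lines of exhaustive search handles every instance, which is part of why it feels so jarring when a model that could write this solver still flails at the puzzles themselves.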
Have needs and feelings? (I mean, we can't KNOW that they don't, and we know of the case of an LLM in an experiment that tried to avoid being shut down, but I think the evidence of feelings seems weak so far.)
Ya, the fact that this was published on November 3, 2025 is pretty hilarious. This was last year's debate.
I think the best avenue toward actually answering your questions starts with OpenWorm [1]. I helped out in a Connectomics research lab in college. The technological and epistemic hurdles are pretty daunting, but so were those for Genomics last century, and now full-genome sequencing is cheap and our understanding of various genes is improving at an accelerating pace. If we can "just" accurately simulate a natural mammalian brain on a molecular level using supercomputers, I think people would finally agree that we've achieved a truly thinking machine.
> what is something that only an actually thinking entity can do that LLMs can't?
This is pretty much exactly what https://arcprize.org/arc-agi is working on.
What people are interested in is finding a definition of intelligence, that is, an exact boundary.
That's why we first considered tool use and the ability to plan ahead as intelligence, until we found that these are not all that rare in the animal kingdom in some shape. Then, with the advent of IT, what we imagined as impossible turned out to be feasible to solve, while what we thought of as easy (e.g. robot movement: a "dumb animal" can move trivially, so surely it is not hard) turned out to require many decades before we could somewhat imitate it.
So the supposed goalpost-moving over what counts as AI is... not really moving the goalposts. It's not hard to state trivial upper bounds that differentiate human intelligence from anything else known to us, like the invention of the atomic bomb. LLMs are nowhere near that kind of invention and reasoning capability.
> "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?"
Invent some novel concept, much the same way scientists and mathematicians of the distant past did? I doubt Newton's brain was simply churning out a stream of the "next statistically probable token" until -- boom! Calculus. There was clearly a higher order understanding of many abstract concepts, intuition, and random thoughts that occurred in his brain in order to produce something entirely new.
Strive for independence.
> what is something that only an actually thinking entity can do that LLMs can't?
Training != Learning.
If a new physics breakthrough happens tomorrow, one that, say, lets us have FTL, how is an LLM going to acquire that knowledge, and how does that differ from how you would?
The breakthrough paper alone isn't going to be enough to override its foundational knowledge in a new training run. You would need enough source documents and a clear path to deprecate the old ones...
The issue is that we have no means of discussing equality without tossing out the first-order logic that most people are accustomed to. Human equality, and our own perception of other humans as thinking machines, is an axiomatic assumption that humans make due to our mind's inner sense perception.
Form ideas without the use of language.
For example: imagining how you would organize a cluttered room.
operate on this child
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra