When you have a thought, are you "predicting the next thing"—can you confidently classify all mental activity that you experience as "predicting the next thing"?
Language and society constrain the way we use words, but when you speak, are you "predicting"? Science allows human beings to predict various outcomes with varying degrees of success, but much of our experience of the world does not entail predicting things.
How confident are you that the abstractions "search" and "thinking," as applied both to the biological machine comprising the human brain, nervous system, and sensorium and to the machine called an LLM, are really equatable? On what do you base your confidence in their equivalence?
Does an equivalence of observable behavior imply an ontological equivalence? How does Heisenberg's famous principle complicate this when we consider the role observers play in founding their own observations? How much of your confidence is based on biased notions rather than direct evidence?
The critics are right to raise these arguments. Companies with a tremendous amount of power are claiming these tools do more than they are actually capable of, and they actively mislead consumers in this manner.
> can you confidently classify all mental activity that you experience as "predicting the next thing"? [...] On what do you base your confidence in their equivalence?
To my understanding, bloaf's claim was only that the ability to predict seems a requirement of acting intentionally and thus that LLMs may "end up being a component in a system which actually does think" - not necessarily that all thought is prediction or that an LLM would be the entire system.
I'd personally go further and claim that correctly generating the next token is already a sufficiently general task to embed pretty much any intellectual capability. To complete `2360 + 8352 * 4 = ` for unseen problems is to be capable of arithmetic, for instance.
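To make the arithmetic example concrete, here is a toy sketch (my own illustration, not anything an LLM actually does internally): any predictor that reliably completes unseen prompts of this form must, in effect, implement arithmetic, because a memorized lookup table cannot generalize to expressions it has never seen.

```python
def predict_completion(prompt: str) -> str:
    """Toy next-token 'predictor' for prompts of the form 'EXPR = '.

    Producing the correct completion for arbitrary unseen EXPR is only
    possible by actually computing EXPR - which is the point: correct
    next-token prediction here embeds the capability of arithmetic.
    """
    expr = prompt.rstrip().rstrip("=").strip()
    # Restrict to digits, whitespace, and + - * so eval() is safe here.
    assert set(expr) <= set("0123456789 +-*"), "unsupported expression"
    return str(eval(expr))

print(predict_completion("2360 + 8352 * 4 = "))  # -> 35768
```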
Boo LLM-generated comments!
> When you have a thought, are you "predicting the next thing"—can you confidently classify all mental activity that you experience as "predicting the next thing"?
So notice that my original claim was "prediction is fundamental to our ability to act with intent" and now your demand is to prove that "prediction is fundamental to all mental activity."
That's a subtle but dishonest rhetorical shift to make me have to defend a much broader claim, which I have no desire to do.
> Language and society constrains the way we use words, but when you speak, are you "predicting"?
Yes, and necessarily so. One of the main objections that dualists use to argue that our mental processes must be immaterial is this [0]:
* If our mental processes are physical, then there cannot be an ultimate metaphysical truth-of-the-matter about the meaning of those processes.
* If there is no ultimate metaphysical truth-of-the-matter about what those processes mean, then everything they do and produce is similarly devoid of meaning.
* Asserting a non-dualist mind therefore implies your words are meaningless, a self-defeating assertion.
The simple answer to this dualist argument is precisely captured by this concept of prediction. There is no need to assert some kind of underlying magical meaning to be able to communicate. Instead, we need only say that in the relevant circumstances, our minds are capable of predicting what impact words will have on the receiver and choosing them accordingly. Since we humans don't have access to each other's minds, we cannot learn these impacts through some kind of psychic mind-to-mind sense; we learn them simply by observing the effects of the words we choose on other parties - something that LLMs are currently (at least somewhat) capable of observing.
[0] https://www.newdualism.org/papers/E.Feser/Feser-acpq_2013.pd...
If you read the above link you will see that they spell out three problems for our understanding of thought: consciousness, intentionality, and rationality.
Of these, I believe prediction is only necessary for intentionality, though it does have some role to play in consciousness and rationality.
> When you have a thought, are you "predicting the next thing"
Yes. This is the core claim of the Free Energy Principle[0], from the most-cited neuroscientist alive. Predictive processing isn't AI hype - it's the dominant theoretical framework in computational neuroscience for ~15 years now.
> much of our experience of the world does not entail predicting things
Introspection isn't evidence about computational architecture. You don't experience your V1 doing edge detection either.
> How confident are you that the abstractions "search" and "thinking"... are really equatable?
This isn't about confidence, it's about whether you're engaging with the actual literature. Active inference[1] argues cognition IS prediction and action in service of minimizing surprise. Disagree if you want, but you're disagreeing with Friston, not OpenAI marketing.
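The core loop that predictive-processing accounts describe can be caricatured in a few lines. This is my own minimal sketch, not Friston's full free-energy formalism: an internal estimate `mu` is repeatedly nudged to reduce prediction error against noisy sensory samples, and in doing so it converges on the hidden cause behind them.

```python
import random

def predictive_update(mu: float, observation: float, lr: float = 0.1) -> float:
    """One predictive-coding step: shrink the prediction error."""
    error = observation - mu   # prediction error (a crude "surprise" proxy)
    return mu + lr * error     # revise the belief to predict better next time

random.seed(0)
mu = 0.0                       # initial belief about the world
hidden_cause = 5.0             # true state generating the observations
for _ in range(200):
    obs = hidden_cause + random.gauss(0, 0.5)   # noisy sensory sample
    mu = predictive_update(mu, obs)

print(round(mu, 1))            # belief ends up near the hidden cause, ~5.0
```

The model never sees `hidden_cause` directly; it infers it purely by minimizing its own prediction errors, which is the (heavily simplified) shape of the claim that cognition is prediction in service of minimizing surprise.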
> How does Heisenberg's famous principle complicate this
It doesn't. Quantum uncertainty at subatomic scales has no demonstrated relevance to cognitive architecture. This is vibes.
> Companies... are claiming these tools do more than they are actually capable of
Possibly true! But "is cognition fundamentally predictive" is a question about brains, not LLMs. You've accidentally dismissed mainstream neuroscience while trying to critique AI hype.
[0] https://www.nature.com/articles/nrn2787
[1] https://mitpress.mit.edu/9780262045353/active-inference/