Everyone is out here acting like "predicting the next thing" is somehow fundamentally irrelevant to "human thinking" and it is simply not the case.
What does it mean to say that we humans act with intent? It means that we have some expectation or prediction about how our actions will affect the next thing, and we choose our actions based on how much we like that effect. The ability to predict is fundamental to our ability to act intentionally.
So in my mind: even if you grant all the AI-naysayers' complaints about how LLMs aren't "actually" thinking, you can still believe that they will end up being a component in a system which actually "does" think.
A motorcycle is not "sprinting" and an LLM is not "thinking". Everyone would agree that a motorcycle is not sprinting, yet the same dumb shit gets posted here over and over: that somehow the LLM is "thinking".
I suspect that people instinctively believe they have free will, both because it feels like we do, and because society requires us to behave that way even when we don't.
The truth is that the evidence says we don't. See the Libet experiment and its many replications.
Your decisions can be predicted from brain scans up to 10 seconds before you make them, which means they are as deterministic as an LLM's. Sorry, I guess.
> Everyone is out here acting like "predicting the next thing" is somehow fundamentally irrelevant to "human thinking" and it is simply not the case.
Nobody is. What people are doing is claiming that "predicting the next thing" does not define the entirety of human thinking, and something that is ONLY predicting the next thing is not, fundamentally, thinking.
It may be doing the "thinking" and could reach AGI. But we don't want it. We don't want to take a forklift to the gym. We don't want plastic aliens showing off their AGI and asking humanity to outsource human thinking and decision-making to them.
This is the "but LLMs will get better, trust me" thread?
A good heuristic is that if an argument resorts to "actually not doing <something complex sounding>" or "just doing <something simple sounding>" etc, it is not a rigorous argument.
I'm an "LLMs are being used in workflows they don't make sense in"-sayer. And while yes, I can believe that LLMs can be part of a system that actually does think, I believe that to achieve true "thinking", such a system would likely be more deterministic in its approach than probabilistic.
Especially when modeling acting with intent. The ability to measure against past results and come up with genuinely new approaches seems like it would come from a system that models the problem first and then uses LLM output: something with a foundation of tools rather than an LLM using MCP. Perhaps LLMs would be used to generate a response that humans like to read, but not to come up with the answer.
Either way, yes, it's possible for a thinking system to use LLMs (and humans may well piece together sentences in a similar way), but it's also possible LLMs will be cast aside and a new approach will be used to create an AGI.
So for me: even if you are an AI-yeasayer, you can still believe that they won't be a component in an AGI.
When you have a thought, are you "predicting the next thing"—can you confidently classify all mental activity that you experience as "predicting the next thing"?
Language and society constrains the way we use words, but when you speak, are you "predicting"? Science allows human beings to predict various outcomes with varying degrees of success, but much of our experience of the world does not entail predicting things.
How confident are you that the abstractions "search" and "thinking" as applied to the neurological biological machine called the human brain, nervous system, and sensorium and the machine called an LLM are really equatable? On what do you base your confidence in their equivalence?
Does an equivalence of observable behavior imply an ontological equivalence? How does Heisenberg's famous principle complicate this when we consider the role observers play in founding their own observations? How much of your confidence is based on biased notions rather than direct evidence?
The critics are right to raise these arguments. Companies with a tremendous amount of power are claiming these tools do more than they are actually capable of and they actively mislead consumers in this manner.
The issue is that prediction is "part" of the human thought process, it's not the full story...
most humans in any percentile act towards someone else's intents. most of these are a lot worse than what the human "would originally intend". this behavior stems from hundreds and thousands of nudges since childhood.
the issue with AI and AI-naysayers is, by analogy, this: cars were built to drive from A to B. people picked up tastes and some started building really cool looking cars. the same happens on the engineering side. then portfolio communists came with their fake capitalism and now cars are built to drive over people, but that doesn't really work because people, thankfully, are overwhelmingly still fighting to act towards their own intents.
Exactly. Our base learning is by example, which is very much learning to predict.
Predict the right words, predict the answer, predict when the ball bounces, etc. Then we reverse the predictions we have learned, i.e. we choose the action with the highest predicted likelihood of the outcome we want, whether that is one step or a series of predicted best steps.
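That "reverse the prediction" step is easy to write down. A toy Python sketch, with a hypothetical predictor and preference function standing in for whatever a brain (or model) has actually learned:

```python
# Toy sketch of "reversing a prediction": pick the action whose predicted
# outcome scores highest. predict_outcome and score are hypothetical stand-ins
# for a learned forward model and a learned preference.
def choose_action(actions, predict_outcome, score):
    return max(actions, key=lambda a: score(predict_outcome(a)))

# Hand-written stand-ins: throw the ball at the angle predicted to land
# closest to a 10 m target.
predict_landing = lambda angle_deg: 0.4 * angle_deg          # fake physics model
closeness_to_target = lambda distance: -abs(distance - 10.0)  # prefer landing near 10 m
best = choose_action([15, 25, 35, 45], predict_landing, closeness_to_target)
print(best)   # 25, since 0.4 * 25 = 10.0 lands exactly on target
```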
Also, people confuse different levels of algorithm.
There are at least 4 levels of algorithm (a rough code sketch after the list makes the distinction concrete):
• 1 - The architecture.
The input-output calculation for a pre-trained model is very well understood. We put together a model consisting of matrix/tensor operations and a few other simple functions, and that is the model. Just a normal, if very high-parameter, calculation.
• 2 - The training algorithm.
These are completely understood.
There are certainly lots of questions about what is most efficient, alternatives, etc. But training algorithms harnessing gradients and similar feedback are very clearly defined.
• 3 - The type of problem a model is trained on.
Many basic problem forms are well understood. For instance, for prediction we have an ordered series of information, with later information to be predicted from earlier information. It could be as simple as an input and a response to be learned, or a long series of information.
• 4 - The solution learned to solve (3) the outer problem, using (2) the training algorithm on (1) the model architecture.
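Here is a minimal sketch of the four levels, assuming PyTorch; the tiny architecture and the random stand-in "corpus" are placeholders, not any real model. Levels (1), (2) and (3) are completely written down in the code; level (4) is whatever ends up in the weights once the loop has run.

```python
import torch
import torch.nn as nn

# (1) The architecture: a fixed, fully specified input-output calculation.
class TinyLM(nn.Module):                       # toy placeholder architecture
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.hidden = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq)
        h = torch.tanh(self.hidden(self.embed(tokens)))
        return self.out(h)                     # logits for the next token at each position

model = TinyLM()

# (3) The type of problem: next-token prediction -- later items in an ordered
# sequence are predicted from earlier ones. Random token ids stand in for a corpus.
data = torch.randint(0, 100, (64, 16))
inputs, targets = data[:, :-1], data[:, 1:]

# (2) The training algorithm: gradient descent on the prediction loss.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(100):
    logits = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# (4) The learned solution: whatever structure now lives in model.parameters().
# Everything above is written down and understood; the contents of the trained
# weights are not something anyone can write by hand or read off as a program.
```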
People keep confusing (4) with (1), (2) or (3). But it is very different.
For starters, in the general case, and for most any challenging problem, we never understand the learned solution. Someday that might be routine, but today we don't even know how to approach it for any significant problem.
Secondly, even with (1), (2), and (3) exactly the same, (4) is going to be wildly different based on the data characterizing the specific problem to solve. For complex problems, like language, layers and layers of sub-solutions to sub-problems have to be found, and since models are not infinite in size, so do ways to repurpose sub-solutions and weave them together to address all the ways different sub-problems do and don't share commonalities.
Yes, prediction is the outer form of their solution. But to do that they have to learn all the relationships in the data. And there is no limit to how complex relationships in data can be. So there is no limit on the depths or complexity of the solutions found by successfully trained models.
Any argument that they don't reason, based on the fact that they are trained to predict, conflates at least (3) and (4). That is a category error.
It is true that they reason much more like our "fast thinking", intuitive responses than like our careful, deep, reflective reasoning. And they are missing important functions: a sense of what they do or don't know, continuous learning while inferencing, and the meta-learning by which we improve our own reasoning through reflection. And notoriously, by design, they don't "see" the letters that spell words in any normal sense; they see tokens.
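To make the token point concrete, here is a toy illustration with a made-up sub-word vocabulary and a greedy longest-match split; real tokenizers (BPE and friends) differ in detail, but the model likewise receives ids for chunks, never the individual letters:

```python
# Hypothetical sub-word vocabulary; a real tokenizer's vocabulary is learned
# from data, but the effect on the model's "view" of a word is the same.
VOCAB = {"straw": 101, "berries": 102, "berry": 103, "s": 104, "t": 105}

def tokenize(word, vocab=VOCAB):
    tokens, i = [], 0
    while i < len(word):
        # greedily take the longest vocabulary entry that matches at position i
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])   # fall back to a single character
            i += 1
    return tokens

print(tokenize("strawberries"))   # ['straw', 'berries'] -- no letter-level view
```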
Those reasoning limitations can be irritating or humorous. Like when a model seems to clearly recognize a failure you point out, but then replicates the same error over and over. No ability to learn on the spot. But they do reason.
Today, despite many successful models, nobody understands how models are able to reason the way they do. There is shallow analysis, and the weights are there to experiment with. But nobody can walk away from the model and training process and build a language model directly themselves. We have no idea how to independently replicate what they have learned, despite having their solution right in front of us, other than going through the whole process of training another one.
LLMs merely interpolate between the feeble artifacts of thought we call language.
The illusion wears off after about half an hour for even the most casual users. That's better than the old chatbots, but they're still chatbots.
Did anyone ever seriously buy the whole "it's thinking" BS when it was Markov chains? What makes you believe today's LLMs are meaningfully different?
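For anyone who never played with one, a Markov-chain chatbot amounts to something like this toy sketch (the corpus is a made-up placeholder): the next word is sampled from counts of what followed the single previous word, with no conditioning on anything earlier.

```python
# Toy order-1 Markov chain text generator: the next word depends only on the
# previous word, looked up from raw adjacency counts in a tiny corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start="the", length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble())   # e.g. "the cat sat on the rug and the dog" -- locally plausible, no memory
```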
Are you a stream of words or are your words the “simplistic” projection of your abstract thoughts? I don’t at all discount the importance of language in so many things, but the question that matters is whether statistical models of language can ever “learn” abstract thought, or become part of a system which uses them as a tool.
My personal assessment is that LLMs can do neither.