[stub for offtopicness]
(in this case, thinkiness)
I feel like these conversations really miss the mark: whether an LLM thinks or not is not a relevant question. It is a bit like asking “what color is an X-ray?” or “what does the number 7 taste like?”
The reason I say this is that an LLM is not a complete, self-contained thing if you want to compare it to a human being. It is a building block. Your brain thinks. Your prefrontal cortex, however, is not a complete system, and if you somehow managed to extract it and wire it up to a serial terminal, I suspect you’d be pretty disappointed in what it would be capable of on its own.
I want to be clear that I am not arguing that once we hook up sensory inputs and motion outputs, as well as motivations, fears, anxieties, desires, pain and pleasure centers, memory systems, a sense of time, balance, fatigue, etc., to an LLM, we would get a thinking, feeling, conscious being. I suspect it would take something more sophisticated than an LLM. But my point is that even if an LLM were that building block, I don’t think the question of whether it is capable of thought is the right question.
You can replicate all calculations done by LLMs with pen and paper. It would take ages to calculate anything, but it's possible. I don't think that pen and paper will ever "think", regardless of how complex the calculations involved are.
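To make that concrete, here is a toy sketch in Python (made-up numbers, not any real model's weights) of a single attention-style step: every line is just multiplication, addition, and exponentiation that you could, in principle, work through with pen and paper.

```python
# Toy illustration (hypothetical values, not from any real model): one
# attention-style step done with nothing but arithmetic.
import math

# Made-up 2-dimensional vectors standing in for three token embeddings.
tokens = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
query  = [0.8, 0.2]   # made-up query vector for the current position

# Dot products: pen-and-paper multiplication and addition.
scores = [sum(q * t for q, t in zip(query, tok)) for tok in tokens]

# Softmax: exponentiate each score, then normalise so the weights sum to 1.
exps = [math.exp(s) for s in scores]
weights = [e / sum(exps) for e in exps]

# Weighted sum of the token vectors -- the "output" of this step.
output = [sum(w * tok[d] for w, tok in zip(weights, tokens)) for d in range(2)]
print(scores, weights, output)
```

A real model does the same kind of arithmetic billions of times over much larger vectors, which is why it is only possible "in principle" by hand.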
Wouldn't 'thinking' require updating a model of reality (an LLM isn't that yet, just words)? At every step it would have to redo all of the extensive calculations that went into creating or approximating that model, or a better one, in the first place (i.e. learning).
Expecting machines to think is... like magical thinking (though they are indeed good at calculations).
I wish we didn't use the word intelligence in the context of LLMs. In short, there is Essence, and the rest is only a slide into all possible combinations of Markov chains. Whether those combinations make sense or not, I don't see how part of some calculation could recognize that, or how that would be possible from inside the calculation, which doesn't even consider the question.
Aside from artificial knowledge (cut off from senses and experience, limited to context lengths, confabulating without knowing it), I would like to see intelligent knowledge: built in a semantic way and allowed to expand through connections that are not yet obvious but do exist (not random ones). I wouldn't expect it to think (humans think, digital machines calculate). But I would expect it to have a tendency to come closer to (not drift further from) reflecting and modeling reality and expanding its implications.
- how to prove that humans can argue endlessly like an LLM?
- ragebait them by saying AIs don’t think
- …
LLMs don't really think; they emulate their training data, which has a lot of examples of humans walking through problems to arrive at an answer. So naturally, if we prompt an LLM to do the same, it will emulate those examples (which tend to be more correct).
LLMs are BAD at evaluating earlier thinking errors, precisely because there aren't copious examples of text where humans think through a problem, screw up, go back, correct their earlier statement, and continue. (A good example catches these mistakes and corrects them.)
Given that the headline is:
> Secondary school maths showing that AI systems don’t think
And the article contains the quotes:
> the team wants to tackle a major and common misconception: that students think that ANN systems learn, recognise, see, and understand, when really it’s all just maths.
> The team is taking very complex ideas and reducing them to such an extent that we can use secondary classroom maths to show that AI is not magic and AI systems do not think.
This is not off topic
If it comes to the correct answer, I don't particularly care how it got there.
A lot of the drama here is due to the ambiguity of what the word 'think' is supposed to mean. One camp associates 'thinking' with consciousness, another does not. I personally believe it is possible to create an animal-like or human-like intelligence without consciousness existing in the system. I would still describe whatever processing that system is doing as 'thinking'. Others believe in "substrate independence"; they think any such system must be conscious.
(Sneaking a bit of belief in here: to me, "substrate independence" is a more extreme position than the idea that a system could be made which is intelligent but not conscious, hence I find it implausible.)
@dang the offtopicness started from using the word 'thinking' in place of 'calculating', which is the common objection in this thread.
This article doesn’t really show anything near what the title asserts.
> the team wants to tackle a major and common misconception: that students think that ANN systems learn, recognise, see, and understand, when really it’s all just maths
This is completely idiotic. Do these people actually believe they are showing it can't be actual thought just because it can be described by math?
<think>Ok, the user is claiming that... </think> ....
Do we think?
By every scientific measure we have, the answer is no. It’s just electrical current taking the path of least resistance through connected neurons, mixed with cell death.
The fact that a human brain peaks at an IQ of around 200 is fascinating. Can the scale even go higher? It would seem not: since nothing has achieved a higher score, it must not exist.
I love the idea of educating students on the math behind AI to demystify it. But I think it's a little weird to assert "AI is not magic and AI systems do not think. It’s just maths." Equivalent statements could be made about how human brains are not magic, just biology - yet I think we still think.
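For what it's worth, the "secondary classroom maths" being referred to is roughly a weighted sum followed by a squashing function. A sketch of a single hypothetical neuron, with made-up weights rather than anything from a real trained model:

```python
# A single artificial "neuron" with made-up weights: multiply, add,
# then squash with one function. Nothing here comes from a real model.
import math

inputs  = [0.5, 0.9, 0.1]          # hypothetical input values
weights = [0.4, 0.7, -0.2]         # hypothetical learned weights
bias    = 0.1

weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
activation = 1 / (1 + math.exp(-weighted_sum))   # sigmoid squashing function

print(weighted_sum, activation)    # 0.91 and roughly 0.71
```

Whether repeating that operation billions of times amounts to "thinking" is, of course, the same question the biology comparison raises.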