I love the idea of educating students on the math behind AI to demystify it. But I think it's a little weird to assert "AI is not magic and AI systems do not think. It’s just maths." Equivalent statements could be made about how human brains are not magic, just biology - yet I think we still think.
It's just provincial nonsense; there's no sound reasoning to it. Using reductionism as a form of refutation is a pretty common cargo-culting behavior, I've found.
Overwhelmingly, I just don't think the majority of human beings have the mental toolset to work with ambiguous philosophical contexts. They'll still try, though, and what you get out of that is a fourth-order Baudrillardian simulation of reason.
Thinking is undefined, so all statements about it are unverifiable.
>Equivalent statements could be made about how human brains are not magic, just biology - yet I think we still think.
They're not equivalent at all, because the AI is by no means biological. "It's just maths" could maybe be applied to humans, but that is backed entirely by supposition and would ultimately just assume its own conclusion: that human brains work on the same underlying principles as AI, because it is assumed that they do.
Yeah. This whole AI situation has really exposed how bad most people are at considering the ontological and semantic content of the words they use.
I have yet to hear any plausible definition of "thought" that convincingly places LLMs and brains on opposite sides of it without being obviously contrived for that purpose.
Define "think".
Through our senses we observe geometric relationships.
Syntax is exactly that: letters, sentences, and paragraphs organized in spatial/geometric relationships.
At best, thought is the re-creation of neural networks in the brain, which exist only as spatial relationships.
Our senses operate on spatial relationships: enough light to work by, and food positioned relative to the stomach to satisfy our biological impetus to survive (which is itself the spatial relationships of biochemistry).
The idea of "thought" as anything but biology makes little sense to me, then, since a root source is clearly observable. Humanity, roughly, repeats the same social story. All that thought does not seem to be all that useful, as we end up in the same place: the majority as serfs of an aristocracy.
Personally, I would prefer less "thought" role-play and more people taking on the load of the labor they exploit so they can sit and "think".
A college-level approach could look at the line between Math/Science/Physics and Philosophy. One thing from the article that stood out to me was that the introduction to their approach started with a problem about classifying a traffic light. Is it red or green?
But the accompanying XY plot showed samples that overlapped, or at least were ambiguous. I immediately lost a lot of my interest in their approach, because traffic lights by design are very clearly red or green. There aren't mauve or taupe lights that the local populace laughs at and says, "yes, that's mostly red."
I like the idea of studying math by using ML examples. I'm guessing this is a first step and future education will have better examples to learn from.
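For what it's worth, here's a minimal sketch of the kind of exercise being described (Python, entirely my own construction rather than the article's code; the features, data, and decision rule are all assumptions). With clusters as cleanly separated as real traffic lights are designed to be, even the simplest "it's just maths" classifier nails it, which is why the overlap in their plot felt off to me.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 2D features for red vs. green lights, e.g. (redness, greenness).
    # Real lights are engineered to be far apart, so these clusters barely overlap.
    red_samples = rng.normal(loc=[0.9, 0.1], scale=0.05, size=(50, 2))
    green_samples = rng.normal(loc=[0.1, 0.9], scale=0.05, size=(50, 2))

    X = np.vstack([red_samples, green_samples])
    y = np.array([0] * 50 + [1] * 50)  # 0 = red, 1 = green

    # Nearest-centroid rule: about the simplest "just maths" classifier there is.
    centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

    def classify(sample):
        """Return 0 (red) or 1 (green) for a 2D feature vector."""
        distances = np.linalg.norm(centroids - sample, axis=1)
        return int(np.argmin(distances))

    predictions = np.array([classify(s) for s in X])
    print("training accuracy:", (predictions == y).mean())  # ~1.0 on well-separated data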
The human mind is not "just biology" in the same way that LLMs are "just math."
AI systems compute and humans think. One is math and the other biology.
But they are two different things with overlapping qualities.
It's like MDMA and falling in love. They have many overlapping qualities but no one would claim one is the other.
There's a huge amount of money going into convincing people that AI is magic or better than people. The reprogramming is necessary.
That's where these threads always end up. Someone asserts, almost violently, that AI does not and/or cannot "think." When asked how to falsify that assertion, perhaps by explaining what exactly is unique about the human brain that cannot or will never be emulated, that's the last anyone ever hears from them. At least until the next "AI can't think" story gets posted.
The same arguments that appeared in 2015 inevitably get trotted out, almost verbatim, ten years later. It would be amusing on other sites, but it's just pathetic here.
I agree saying "they don't think" and leaving it at that isn't particularly useful or insightful, it's like saying "submarines don't swim" and refusing to elaborate further. It can be useful if you extend it to "they don't think like you do". Concepts like finite context windows, or the fact that the model is "frozen" and stateless, or the idea that you can transfer conversations between models are trivial if you know a bit about how LLMs work, but extremely baffling otherwise.
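To make those concrete, here's a minimal sketch of a stateless chat loop (the model_generate function, the message format, and the MAX_TOKENS budget are hypothetical stand-ins, not any particular vendor's API). All the "memory" lives in a message list that the client re-sends on every turn and trims to fit the context window, which is also why "transferring" a conversation to another model is just handing the same list to a different function.

    MAX_TOKENS = 4096  # assumed context-window budget

    def count_tokens(message: dict) -> int:
        # Crude stand-in for a real tokenizer: roughly 1 token per 4 characters.
        return max(1, len(message["content"]) // 4)

    def trim_to_window(history: list) -> list:
        """Drop the oldest turns until the conversation fits the context window."""
        trimmed = list(history)
        while sum(count_tokens(m) for m in trimmed) > MAX_TOKENS and len(trimmed) > 1:
            trimmed.pop(0)  # whatever falls out here is gone; the model never "remembers" it
        return trimmed

    def model_generate(messages: list) -> str:
        # Placeholder for an actual model call; it sees ONLY what is in `messages`.
        return f"(reply conditioned on {len(messages)} messages)"

    def chat_turn(history: list, user_text: str) -> list:
        """One turn: append the user message, send the whole trimmed history, append the reply."""
        history = history + [{"role": "user", "content": user_text}]
        reply = model_generate(trim_to_window(history))
        return history + [{"role": "assistant", "content": reply}]

    # Because all state lives in `history`, switching models mid-conversation is just
    # passing the same list to a different model_generate.
    history = []
    history = chat_turn(history, "Hello!")
    history = chat_turn(history, "What did I just say?")
    print(history[-1]["content"])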