>Well, a model by itself with data that emits a bunch of human written words is literally no different than what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.
Obviously, it is different or else we would just use JIRA and a database to replace GPT. Models very obviously do NOT store training data in the weights in the way you are imagining.
>So basically if you have that stance you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".
Thinking is, by all appearances, substrate independent. The moment we created computers, we created another substrate that could, in the future, think.
But LLMs are effectively a very complex if/else if tree:
if the user types "hi", respond with "hi" or "bye" or "..."; you get the point. It's basically storing the most probable following words (tokens) given the current point and its history.
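To make the caricature concrete, here's a toy sketch of that idea: a literal lookup table that returns the most common next word seen after the previous word. The corpus and names are made up for illustration; real LLMs use learned weights over huge contexts, not a table like this.

```python
from collections import Counter, defaultdict

# Made-up toy corpus just to populate the table.
corpus = "hi there . hi there . hi friend . bye now .".split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Return the most probable continuation observed after `prev`.
    return follows[prev].most_common(1)[0][0]

print(next_word("hi"))  # "there" (seen twice, vs "friend" once)
```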
That's not a brain and it's not thinking. It's similar to JIRA in that it's stored information plus if statements (admins can do this, users can do that).
Yes, it is more complex, but it's nowhere near the complexity of a human or bird brain, which does not use clocks, does not contain "Turing machines", and does not have any of the other complete junk other people posted in this thread.
The information in Jira is just less complex; it's in the same vein as the data in an LLM, only the LLM's is maybe 10^100 times more complex. Just because something is complex does not mean it thinks.