Well, I think because we know how the code is written, in the sense that humans quite literally wrote the code for it - it's definitely not thinking; it is literally doing what we asked, based on the data we gave it. It is specifically executing code we thought of. The output, of course, is another matter: we had no idea it would work this well.
But it is not sentient. It has no idea of a self or anything like that. If it makes people believe that it does, it is because we have written so much lore about it in the training data.
Well, unless you believe in some spiritual, non-physical aspect of consciousness, we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).
So any other Turing-complete model can emulate it, including a computer. We can even randomly generate Turing machines, as they are just data. Now imagine we get extremely lucky and happen to end up with a super-intelligent program which, through the mediums it can communicate over (it could be simply text-based, but 2D video with audio is no different from my perspective), can't be differentiated from a human being.
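To make "they are just data" concrete, here's a minimal sketch (my own toy formulation, assuming a standard one-tape machine; the state count, alphabet, and step bound are arbitrary choices of mine): a Turing machine is nothing but a transition table, and a table is something you can sample at random.

```python
import random

STATES = range(8)   # arbitrary small state set (toy size)
SYMBOLS = [0, 1]    # binary tape alphabet
MOVES = [-1, +1]    # head moves left or right

def random_machine():
    """Sample a random transition table: (state, symbol) -> (state, symbol, move)."""
    return {
        (q, s): (random.choice(STATES), random.choice(SYMBOLS), random.choice(MOVES))
        for q in STATES for s in SYMBOLS
    }

def run(machine, tape, steps=100):
    """Run for a bounded number of steps (whether it would halt is undecidable)."""
    tape = dict(enumerate(tape))  # sparse tape, blank cells default to 0
    q, head = 0, 0
    for _ in range(steps):
        q, tape[head], move = machine[(q, tape.get(head, 0))]
        head += move
    return [tape[i] for i in sorted(tape)]

print(run(random_machine(), [1, 0, 1]))
```

Almost every machine you sample this way computes garbage, of course - the thought experiment is just that nothing in principle stops the lottery from landing on an interesting one.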
Would you consider it sentient?
Now replace the random generation with, say, a backpropagation algorithm. If the model is sufficiently large, don't you think it's indistinguishable from the former case - that is, novel qualities could emerge?
With that said, I don't think that current LLMs are anywhere close to this category; I just don't think your reasoning is sound.
It's not accurate to say we "wrote the code for it". AI isn't built like normal software. Nowhere inside an AI will you find lines of code that say If X Then Y, and so on.
Rather, these models are literally grown during the training phase. And all the intelligence emerges from that growth. That's what makes them a black box and extremely difficult to penetrate. No one can say exactly how they work inside for a given problem.
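For what it's worth, here's a toy sketch of the "grown, not written" point (my own illustration, nothing like a real LLM in scale, and the target relationship y = 3x + 1 is a made-up example). The only thing a human writes is the generic update loop; the actual behavior ends up in the learned numbers.

```python
import random

# Nowhere below does the code say "if X then Y" about the task itself.
data = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = random.random(), random.random()   # the "grown" part starts as noise

for _ in range(2000):                     # training: gradient descent on squared error
    x, y = random.choice(data)
    err = (w * x + b) - y                 # prediction error on one example
    w -= 0.01 * err * x                   # nudge the weight down the gradient
    b -= 0.01 * err

print(w, b)  # ends up near 3 and 1, though we never wrote those values anywhere
```

With two numbers you can still read off what was learned. With hundreds of billions of them, you get the black box described above.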
Now convince us that you’re sentient and not just regurgitating what you’ve heard and seen in your life.
This is probably true. But the truth is we have absolutely no idea what sentience is or what gives rise to it. We cannot identify why humans have it rather than just being complex biological machines, or whether and why other animals do. We have no idea what the rules are, never mind how and why they would or wouldn't apply to AI.
What’s crazy to me is the mechanism of pleasure or pain. I can understand that with enough complexity we can give rise to sentience, but what does it take to achieve sensation?
> But it is not sentient. It has no idea of a self or anything like that.
Who stated that sentience or sense of self is a part of thinking?
Unless the idea of us having a thinking self is just something that comes out of our mouths, an artifact of language. In which case we are not that different - in the end we all came from mere atoms, after all!
Your brain is just following the laws of chemistry. So where is your thinking found in a bunch of chemical reactions?
We do not write the code that makes it do what it does. We write the code that trains it to figure out how to do what it does. There's a big difference.