Which is why I keep saying that anthropomorphizing LLMs gives you good high-order intuitions about them, and should not be discouraged.
Consider: GP would've been much closer to the truth if they'd said "It's just a person on a chip." Still wrong, but qualitatively less wrong than what they actually said.
Just a weird little guy.