You should take your complaints to OpenAI, who constantly write as if LLMs think in the exact same sense that humans do; here's a random example:
> Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions
They have a product to sell based on the idea that AGI is right around the corner. You can't trust Sam Altman as far as you can throw him.
Still, the sales pitch has worked to unlock huge liquidity for him, so there's that.
Still, making predictions is a big part of what brains do, though not the only thing. Someone wise said that LLM intelligence is a new kind of intelligence, much as animal intelligence differs from ours yet is still intelligence; it needs to be characterized on its own terms before we can understand the differences.