LLMs merely interpolate between the feeble artifacts of thought we call language.
The illusion wears off after about half an hour, even for the most casual users. That's longer than the old chatbots managed, but they're still chatbots.
Did anyone ever seriously buy the whole "it's thinking" BS when it was Markov chains? What makes you believe today's LLMs are meaningfully different?
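For anyone who never saw one up close, a Markov-chain "chatbot" was essentially this (a toy sketch; the corpus and names here are my own invention):

```python
import random
from collections import defaultdict

# A Markov-chain text generator: the next word depends ONLY on the
# current word, via a lookup table of observed bigrams.
corpus = "the cat sat on the mat and the dog sat on the log".split()

# Build the transition table: word -> list of words seen after it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8):
    """Sample a chain; each step conditions on one word of state."""
    word, out = start, [start]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:  # dead end: no observed successor
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the"))
# An LLM, by contrast, conditions each token on thousands of prior
# tokens through learned attention, not a fixed one-step lookup.
```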
Did anyone ever seriously buy the whole "it's transporting" BS when it was wheelbarrows? What makes you believe today's trucks are meaningfully different?