It is hard to define reasoning or thinking; these are vague concepts. I use them to point out that there are areas where these machines make obviously wrong decisions, because they are, above all, probability-weighing machines built on a corpus. That alone is not thinking, I hope you would agree, so if you are confident these machines are in fact thinking, you must believe some emergent property of that process constitutes thinking.
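To make concrete what I mean by "probability weighing over a corpus", here is a minimal toy sketch, a bigram model that picks the next word purely by counted frequency. This is purely illustrative (no production LLM works this simply, and the corpus and names are made up), but the mechanism is the same in kind: weighted selection, not deliberation.

```python
from collections import Counter, defaultdict
import random

# Toy corpus; real models train on vastly larger text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its corpus frequency."""
    words, counts = zip(*bigrams[prev].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # usually "cat"; sometimes "mat" or "fish"
```

The output can look sensible, but nothing in the loop above understands anything; it only weighs counts. The open question is whether scaling this kind of process up produces something qualitatively different.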
AI companies use these terms ("thinking", "reasoning", etc.) to try to trick users into anthropomorphising pattern-matching machines and into believing they are true general intelligence.
I don't think we've reached AGI yet, though we are closer than we were, and I'm skeptical LLMs will be the route - they are impressive, but IME they are better at tricking humans than at performing complex tasks they have not seen before.
Do you think we have seen AGI yet from LLMs? If not, how would you define their limitations?