I'll make the following observation:
Flip the claim "no LLM thinks like a human" around and you get "no human thinks like an LLM."
And I do not believe we actually understand human thinking well enough to make that assertion.
Indeed, it is my deep suspicion that we will eventually achieve AGI not by totally abandoning today's LLMs for some other paradigm, but rather by embedding them in a loop with the right persistence mechanisms.
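To make the shape of that idea concrete, here is a minimal sketch, assuming a stateless model run repeatedly over external state. Nothing in it reflects any real API: call_llm and the memory.json file are hypothetical stand-ins.

    import json
    from pathlib import Path

    MEMORY = Path("memory.json")  # hypothetical persistent store

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever model call you like; not a real client.
        raise NotImplementedError

    def agent_step(goal: str) -> None:
        # Reload whatever earlier iterations left behind.
        memory = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
        thought = call_llm(f"Goal: {goal}\nSo far: {memory}\nNext step?")
        # Persist the new thought so the next pass can build on it.
        memory.append(thought)
        MEMORY.write_text(json.dumps(memory))

    # The "loop" part: the model itself stays stateless; the state lives outside it.
    # for _ in range(1000):
    #     agent_step("prove something new")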
The loop, or more precisely the search, does the novel part of thinking; the brain just optimizes that process. Evolution managed with the simplest possible model, copying with occasional errors, and in a single run it produced every one of us. The moral: if you scale the search, the model can be dumb.
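The classic toy version of that argument is a Dawkins-style "weasel" program; the target string and parameters below are arbitrary. The "model" is literally copy-with-errors, and all the work is done by repeated search plus selection.

    import random
    import string

    TARGET = "thinking"              # toy objective; anything with a fitness signal works
    ALPHABET = string.ascii_lowercase

    def fitness(candidate: str) -> int:
        # The only feedback the system ever gets: count of matching characters.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(parent: str, rate: float = 0.1) -> str:
        # The entire "model": copy the parent, with occasional random errors.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in parent)

    best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for generation in range(10_000):  # scaling the search, not the model
        best = max([mutate(best) for _ in range(50)] + [best], key=fitness)
        if best == TARGET:
            print(f"found {best!r} after {generation} generations")
            break

A random 8-letter string becomes the target after a few dozen generations, even though the mutation step knows nothing about where it is going.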