The number of people willing to launch into debates about whether LLMs are thinking, intelligent, conscious, etc., without ever defining those terms, never ceases to amaze me.
I'm not sure that "thinking", unlike intelligence, is even that interesting a concept. It's basically just reasoning/planning (i.e. chained what-if prediction). Sometimes you're reasoning/planning (thinking) about what to say, and other times just reasoning/planning to yourself, the difference being an external vs internal focus.
Of course one can always CHOOSE to draw analogies between any two things, in this case the internal mechanics of an LLM and those of a brain, but I'm not sure it's very useful here. Using anthropomorphic language to describe LLMs seems more likely to confuse than to provide insight, especially since they are built with the sole function of mimicking humans; if you regard them as actually human-like, you are basically gaslighting yourself.