No, it isn't reasoning about anything. LLM "reasoning" is just an illusion.
When an LLM is "reasoning", it's just feeding its own output back into its own context and giving it another go.
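Mechanically, that loop is easy to sketch. Here's a toy version in Python, just to make the claim concrete (`generate` and the "FINAL ANSWER" stop marker are invented stand-ins, not any real model API):

```python
# Toy sketch of the "reasoning" loop described above. Nothing here is a
# real API: `generate` stands in for a single completion call, and the
# stop marker is an invented convention for illustration.

def generate(context: str) -> str:
    # Placeholder for one forward pass of the model.
    return "FINAL ANSWER: 42"

def reason(question: str, max_rounds: int = 5) -> str:
    context = question
    for _ in range(max_rounds):
        step = generate(context)    # model emits a "thought"
        context += "\n" + step      # that output becomes the next input
        if "FINAL ANSWER" in step:  # stop once it commits to an answer
            break
    return context

print(reason("What is 6 * 7?"))
```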
Is that so different from brains?
Even if it is, this sounds like "this submarine doesn't actually swim" reasoning.
This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction about words ("think", "reason", ...) that have no firm definitions.