I don't agree that LLMs can't reason reliably. If you give them a simple reasoning question, they generally make a decent attempt at a solution, and complete howlers are rare from cutting-edge models. (If you disagree, give an example!)
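To make "a simple reasoning question" concrete, here's a minimal sketch of how you might probe a model yourself, assuming the OpenAI Python SDK; the model name and the bat-and-ball prompt are just illustrative choices, not anything special about my argument.

```python
# Minimal sketch: pose a simple reasoning question to a model via the
# OpenAI Python SDK (requires OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

# A classic "simple reasoning question": the bat-and-ball problem.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed stand-in for a "cutting-edge model"
    messages=[{"role": "user", "content": question}],
)

# Print the model's attempt; the correct answer is $0.05, not the intuitive $0.10.
print(response.choices[0].message.content)
```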
Humans make mistakes in reasoning too, and sometimes they reach conclusions that leave me completely bewildered (like somehow reasoning their way to a flat Earth).
I think we can all agree that humans are significantly better and more consistent at reasoning than even the best LLMs, but the claim that LLMs cannot reliably reason doesn't seem to match the evidence.