> because we have no idea how it works
Flagrantly, ridiculously untrue. We don't know the precise nuts and bolts of how consciousness and the ability to reason emerge, that's fair, but different structures of the brain have been directly linked to different functions, and those structures have been observed in operation: patients stimulated in various ways while attached machinery reads levels of neural activity in the brain, region by region. We know which parts handle our visual acuity and sense of hearing, and even cooler, we can watch those same regions light up when we use our "mind's eye" to imagine things or engage in self-talk, completely silent speech that nevertheless engages our verbal center, which is also engaged by the act of handwriting and typing.
In short: no, we don't have the WHOLE answer. But to say that we have no idea is categorically ridiculous.
As for the notion of LLMs doing something similar: no. They are trained on millions of texts from various sources of humans thinking aloud, and that is what you're seeing: a probabilistic read of millions if not billions of human-written documents, with each next token chosen to "minimize error." And crucially, that error can never be driven to zero. Whatever philosophical points you'd like to raise about intelligence or thinking, I don't think we would ever be willing to call someone intelligent if they just made something up in response to your query because they figured you really wanted it to be real, even when it isn't. Which points to the overall charade: it's built to LOOK intelligent without BEING intelligent, because that's what the engineers who built it wanted it to do.
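To make the "probabilistic read" point concrete, here's a minimal sketch of next-token prediction using a toy bigram model over a made-up corpus. Everything here (the corpus, the names `bigram_counts` and `next_word`) is illustrative, and real LLMs use transformer networks trained by gradient descent rather than raw counts, but the objective is the same: predict the next token in proportion to what the training text suggests, with no notion of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for "millions of documents": a few sentences of human text.
corpus = (
    "the brain processes vision in the visual cortex . "
    "the brain processes sound in the auditory cortex . "
    "the model predicts the next word ."
).split()

# Count bigrams: for each word, how often each other word followed it.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed
    `prev` in the training text, i.e. the error-minimizing guess."""
    counts = bigram_counts[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text. It looks fluent, but it's a statistical remix of the
# corpus, not understanding: "the brain processes vision in the auditory
# cortex" is a perfectly likely output, and confidently wrong.
word = "the"
out = [word]
for _ in range(10):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Scale the counts up to billions of parameters and the output gets far more convincing, but the failure mode in the comment above is the same one: the model fills in whatever the statistics favor, whether or not it's real.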