LLMs are models that predict tokens. They don't think, they don't build with blocks. They would never be able to synthesize knowledge about QM.
You realize the parent said "This would be an interesting way to test proposition X" and you responded with "X is false because I say so", right?
I am a deep LLM skeptic.
But I think there are also some open questions about the role of language in human thought that leave the door slightly ajar on whether manipulating the tokens of language is more central to human cognition than we've tended to think.
If that turned out to be true, then "a model predicting tokens" might have more power than that description suggests.
I doubt it, and I doubt it quite a lot. But I don't think it is impossible that something at least a little bit along these lines turns out to be true.