LLMs cannot possess consciousness for three reasons: they execute as a sequence of Transformer blocks with extremely limited information exchange between them, those blocks are simple feed-forward networks with no recurrent connections, and the underlying computer hardware follows a modular design.
Shardlow & Przybyła, "Deanthropomorphising NLP: Can a Language Model Be Conscious?" (PLOS One, 2024)
Nature: "There is no such thing as conscious artificial intelligence" (2025)
They argue that the association between consciousness and LLMs is deeply flawed, and that mathematical algorithms implemented on graphics cards cannot become conscious because they lack a complex biological substrate. They also introduce the useful concept of "semantic pareidolia": we pattern-match consciousness onto anything that merely talks convincingly.
It's a strong argument, and I think they're correct. But, as I said originally, these are two different things.
You think I'm arguing that LLMs are sentient. I'm not. I never mentioned LLMs.