We would need a detectable difference to discover what it is. How can we know that all LLMs don't?
Even if they do, it could only be transient, during the inference process. Unlike a brain, which constantly undergoes dynamic electrochemical processes, an LLM is just an inert pile of data except while the model is being executed.
If you tediously work out the LLM math by hand, are the pen and paper conscious too?
Consciousness is not computation. You need something else.