LLMs are word prediction engines.
They clearly are not conscious; they are just guessing what words should come next.
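To be concrete about what "guessing what words should come next" means: the whole generation loop is just score every token in the vocabulary, softmax the scores into a probability distribution, sample one, append it, repeat. A minimal sketch (here `fake_logits` is a hypothetical stand-in for a trained network, not any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    # Placeholder: a real LLM would compute these scores from the
    # context with a trained transformer; random values stand in here.
    return rng.normal(size=len(VOCAB))

def next_token(context, temperature=1.0):
    # Softmax the scores into a probability distribution and sample.
    logits = fake_logits(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(VOCAB, p=probs)

context = ["the"]
for _ in range(5):
    context.append(next_token(context))
print(" ".join(context))
```

Everything the model "knows" lives inside that scoring function; the sampling loop around it is trivial.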
The human brain is an electrical signal prediction machine.
Anything that looks like intelligence will look like a prediction machine, because the only alternative is logic that was hardcoded a priori.
How do we know that isn't essentially how our own minds work?
> They clearly are not conscious
Consciousness is emergent. By our own definitions, a human is not conscious until the moment they are. How will we recognize the singularity when it comes? I feel like that is what the article is really addressing.
> LLMs are word prediction engines
Humans do this too, so what are the missing pieces for consciousness? Close a few loops in the learning pipeline and we might be there.