One thing I haven't seen brought up much is that LLMs are basically stateless. Consciousness requires internal state that can change. The weights don't change at all during inference; the only things that vary are the RNG seed and the input/output text. We're not seriously arguing that the text itself is the conscious part, are we?
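To make that concrete, here's a toy sketch (mine, not anything a real model does): next_token is just a hash standing in for a forward pass, but it has the same shape at inference time, a pure function of frozen weights, a seed, and the text so far. Nothing else persists between turns.

    import hashlib

    # Toy stand-in for a trained model's parameters.
    # Frozen: inference never writes to them.
    WEIGHTS = b"frozen-at-training-time"

    def next_token(weights: bytes, context: str, seed: int) -> str:
        # Hash-based stand-in for a forward pass: a pure function
        # of (weights, context, seed).
        h = hashlib.sha256(weights + context.encode() + seed.to_bytes(4, "big"))
        return chr(ord("a") + h.digest()[0] % 26)

    def generate(prompt: str, n_tokens: int, seed: int) -> str:
        out = prompt
        for _ in range(n_tokens):
            out += next_token(WEIGHTS, out, seed)  # nothing else is mutated
        return out

    # Same frozen weights + same prompt + same seed => byte-identical output.
    assert generate("hello", 8, seed=0) == generate("hello", 8, seed=0)

    # The only "memory" carried across turns is the text fed back in:
    turn1 = generate("user: hi\nassistant:", 8, seed=0)
    turn2 = generate(turn1 + "\nuser: go on\nassistant:", 8, seed=0)

The point isn't the mechanism, it's the signature: all the "state" a chat session has lives in the growing transcript, not in the model.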
Is he joking to prove a point?
If software can be "conscious" then we need a new word to describe what it is that a person has that makes me care about them in a way I never would care about the output of a program.
Fighting about semantics is less interesting than the question of whether we should care about, and grant rights to, a program running in memory the way we do the owner of a human brain.
What a clown
Discussions:

(35 points, 4 days ago, 71 comments) https://news.ycombinator.com/item?id=47988880
(75 points, 4 days ago, 124 comments) https://news.ycombinator.com/item?id=47991340
(17 points, yesterday, 17 comments) https://news.ycombinator.com/item?id=48025969
Yet another human fooled by an LLM ...
If I invented a machine that makes chimpanzee noises in response to chimpanzee noises, put it in front of a chimpanzee, and watched the chimp coo and yell and screech and purr back at the machine, I would not conclude "wow, I emulated a chimpanzee's consciousness!" I would say "huh, I made a device that's good at tricking chimpanzees."
My belief is that the Turing test (and LLMs in particular) is not categorically different from that machine. Language is a tiny part of the human brain because it's a tiny part of human cognition, despite its outsized social impact.
Step one: make up an ontological category with no unique content.
Step two: declare it an imponderable mystery.
Step three: argue confidently about it despite steps one and two.
NB. Humans, it doesn't matter if you are conscious.
NBB. Humans claim LLMs just manipulate words, and yet humans manipulate words to make this claim. Consciousness is a word. Not an ontology.