> Note that none of this tells us whether language models actually feel anything or have subjective experiences.
You’ll never find that in the human brain either. There’s the machinery of neural correlates of experience; we never see the experience itself. That’s likely because the distinction is vacuous: they’re the same thing.
Do you think these LLMs have subjective experiences? (By "subjective experience" I mean the thing that makes stepping on an ant worse than kicking a pebble.) And if so, do you still use them? Additionally: when do you think that subjectivity started? Was there a "there" there with GPT-2?
I know I feel experience. I don't know for sure that you do, but extending that to other people seems a very reasonable step. LLMs, though, are a radical jump that needs a greater degree of justification.
> That’s likely because the distinction is vacuous: they’re the same thing.
The Chinese Room would like a word.
See also: Functionalism [1].
[1] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...
LLMs are disembodied and exist outside of time.
Bundle of tokens comes in, bundle of tokens comes out. If there is any trace of consciousness or subjectivity in there, it exists only while matrices are being multiplied.
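For concreteness, here's a deliberately toy sketch of that picture (nothing here resembles a real LLM; the layer, sizes, and weights are made up for illustration): the model is a pure function from tokens to logits, nothing persists between calls, and between calls nothing is running at all.

```python
# Toy illustration of the "tokens in, tokens out" point above.
# Hypothetical, massively simplified; not any real model's architecture.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 16, 8                  # arbitrary toy sizes

# Frozen weights: embedding table, one hidden layer, output head.
W_embed = rng.normal(size=(VOCAB, DIM))
W_hidden = rng.normal(size=(DIM, DIM))
W_out = rng.normal(size=(DIM, VOCAB))

def forward(tokens: list[int]) -> np.ndarray:
    """Bundle of tokens in, bundle of logits out. Stateless."""
    x = W_embed[tokens]             # look up embeddings
    x = np.tanh(x @ W_hidden)       # the matrices being multiplied
    return x @ W_out                # logits over the next token

# Two identical calls give identical outputs: nothing "remembers"
# the first call, and nothing runs in the gap between them.
a = forward([3, 1, 4])
b = forward([3, 1, 4])
assert np.allclose(a, b)
```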