That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure. These qualia influence further perception and action.
Any relationship between certain words and a modified probabilistic outcome in current models is an artifact of the training corpus containing examples of that relationship.
I contend that modern models are absolutely capable of thinking, problem-solving, and expressing creativity, but for the time being LLMs do not run in any kind of sensory loop that could house qualia.
> qualia, which we do not currently know how to precisely define, recognize or measure
> which could house qualia.
I postulate this is a self-negating argument, though.
I'm not suggesting that LLMs think, feel, or do anything else of the sort, but these arguments are not convincing. If I only had the transcript and knew nothing about who wiped the drive, would I be able to tell it was an entity without qualia? Does it even matter? I further postulate that these are not obvious questions.
Qualia may not exist as such. They could just be, essentially, 'names' for states of neurons that we mix and match, like chords on a keyboard: arguing over the 'redness' of a percept is like arguing about the C-sharpness of a chord. We can talk about some frequencies, but that's it. We would have no way of knowing otherwise, since we only perceive the output of our neural processes; we don't get to participate in the construction of those outputs, nor sense them happening. We just 'know' they are happening when we reach those neural states, and we identify those states relative to the others.
> because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure.
Do we know how to imprecisely define, recognize, or measure these? As far as I've ever been able to ascertain, those are philosophy department nonsense dreamt up by people who can't hack real science so they can wallow in unfounded beliefs.
> I contend that modern models are absolutely capable of thinking, problem-solving, expressing creativity,
I contend that they are not even slightly capable of any of that.
> That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure.
If we can't define, recognize or measure them, how exactly do we know that AI doesn't have them?
I remain amazed that a whole branch of philosophy (aimed, theoretically, at describing exactly this moment of technological change) is showing itself up as a complete fraud. It's completely unable to describe the old world, much less provide insight into the new one.
I mean, come on. "We've got qualia!" is meaningless. Might as well respond with "Well, sure, but AI has furffle, which is isomorphic." Equally insightful, and easier to pronounce.
One of the worst or most uncomfortable logical outcomes of
> which we do not currently know how to precisely define, recognize or measure
is that if we don't know whether something has qualia (despite it outwardly showing evidence of them), morally you should default to treating it like it does.
It sounds ridiculous to treat a computer like it has emotions, but when you break the problem down into steps, it's incredibly hard to avoid that conclusion. "When in doubt, be nice to the robot."