Hacker News

baq · today at 1:11 PM

> qualia, which we do not currently know how to precisely define, recognize or measure

> which could house qualia.

I postulate this is a self-negating argument, though.

I'm not suggesting that LLMs think, feel or anything else of the sort, but these arguments are not convincing. If I only had the transcript and knew nothing about who wiped the drive, would I be able to tell it was an entity without qualia? Does it even matter? I further postulate these are not obvious questions.


Replies

soulofmischief · today at 1:20 PM

Unless there is an active sensory loop, no matter how fast or slow, I don't see how qualia can enter the picture.

Transformers attend to different parts of their input based on the input itself. Currently, if you want an LLM to be "sad" (in the sense of altering future token prediction and labeling that shift as "feelings" which change how the model interprets and acts on the world), you have to either tell the model outright that it is sad or provide an input whose token set activates "sad" circuits that color the model's predictive process.

You make the distribution flow such that it predicts "sad" tokens, but every bit of information affecting that flow is contained in the input prompt. This is exceedingly different from how, say, mammals process emotion. We form new memories and brain structures which constantly alter our running processes and color our perception.
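The claim that every bit of information affecting the flow is contained in the input can be sketched with a toy scaled dot-product self-attention in numpy: the attention weights are a pure function of the prompt, so perturbing a single token reshapes how every position attends, and nothing persists after the forward pass. (This is an illustrative sketch, not code from any real model; all names are made up.)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension
W_q = rng.normal(size=(d, d))  # fixed, frozen weights
W_k = rng.normal(size=(d, d))

def attention_weights(x):
    """x: (seq_len, d) token embeddings -> (seq_len, seq_len) weights."""
    q = x @ W_q
    k = x @ W_k
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1)

prompt_a = rng.normal(size=(4, d))  # four "tokens"
prompt_b = prompt_a.copy()
prompt_b[0] += 5.0                  # alter one token of the prompt

w_a = attention_weights(prompt_a)
w_b = attention_weights(prompt_b)

# Changing one input token changes how every position attends,
# yet W_q and W_k themselves never change: no memory is formed.
print(np.abs(w_a - w_b).max() > 0)  # True
```

The weights `W_q` and `W_k` stand in for the frozen model: the only thing that varies between runs is the prompt, which is the contrast the comment draws with mammalian brains that rewrite their own structure.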

It's easy to draw individual parallels between these two processes, but holistically they are different processes with different effects.
