Unless there is an active sensory loop, however fast or slow, I don't see how qualia can enter the picture.
Transformers attend to different parts of their input based on the input itself. Currently, if you want an LLM to be "sad" in any sense that alters its future token predictions, such that we could label the change "feelings" that shape how the model interprets and acts on the world, you have to either tell the model it is sad or provide an input whose tokens activate "sad" circuits that color its predictive process.
You shift the output distribution toward "sad" tokens, but every bit of information driving that shift is contained in the input prompt. That is fundamentally different from how, say, mammals process emotion: we form new memories and brain structures that continually alter our running processes and color our perception.
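To make the statelessness concrete, here is a toy sketch. It is not a real transformer; the vocabulary, the `next_token_distribution` function, and the "sad" logit boost are all made up for illustration. The point it demonstrates is that the next-token distribution is a pure function of the prompt, so the only way to make the model "sadder" is to change the prompt, and nothing carries over between calls.

```python
import math

# Hypothetical toy vocabulary, standing in for a real model's token set.
VOCAB = ["happy", "sad", "fine", "okay"]

def next_token_distribution(prompt: str) -> dict[str, float]:
    """Return a softmax over VOCAB whose logits depend only on the prompt."""
    logits = {tok: 0.0 for tok in VOCAB}
    # Crude stand-in for "sad circuits": boost the 'sad' logit when the
    # prompt mentions sadness. Everything influencing the output is here,
    # in the prompt; there is no hidden state that persists between calls.
    if "sad" in prompt.lower():
        logits["sad"] += 2.0
    total = sum(math.exp(v) for v in logits.values())
    return {tok: math.exp(v) / total for tok, v in logits.items()}

neutral = next_token_distribution("How are you feeling?")
primed = next_token_distribution("You are feeling sad. How are you feeling?")

print(neutral["sad"], primed["sad"])  # the shift comes entirely from the prompt
print(next_token_distribution("How are you feeling?") == neutral)  # True: stateless
```

Nothing in this sketch "remembers" being told it was sad; a mammal, by contrast, would.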
It's easy to draw individual parallels between the two processes, but taken as wholes they are different processes with different effects.
It's crazy how strong the Eliza effect is. Seemingly half or more of tech people (who post online, anyway) are falling for it, yet again.