If they're entirely physical, what's the argument that multimodal models don't have them? Is it continuity of experience? Do they not encode their input into something that has a latent space? How does that differ from experience?
I do think AI will have them. Nothing says they can't. And we'll have just as hard a time defining it as we do with humans, and we'll argue about how to measure it, and whether it's real, just like we do with humans.
I don't know if LLMs will. But there are lots of AI models, and once someone puts them on a continuous learning loop with goals, it will be hard to argue they aren't experiencing something.
They can be physical, but I'm not claiming to know that definitively. The lines are extremely blurry, and I'll agree that current models have at least some of the necessary components for qualia, but they again lack a sensory feedback loop. In another comment [0] I quote an earlier reply of mine that attempts to address why physically-based qualia doesn't invoke panpsychism.

[0] https://news.ycombinator.com/item?id=46109999