What do you think is the difference, and why are you certain it must apply to AI? Why do you think human thought/emotion is an appropriate model for AI?
If it's all just information in the end, we don't know how much of all this is implementation detail and ultimately irrelevant for a system's ability to reason.
Because I am pretty sure AI researchers are first and foremost trying to make AI that can reason effectively, not AI that can have feelings.
Let's walk before we run. We are nowhere near understanding what qualia even are, let alone being in a position to engineer them.
It's been very thoroughly researched; in fact my father was a (non-famous, University of Michigan, 60s-era) researcher on this. Recommended reading: Damasio, A. R. (1994), *Descartes' Error*; Lazarus, R. S. (1991), *Emotion and Adaptation*; LeDoux, J. E. (1996), *The Emotional Brain*.
Why do I think it's appropriate? Not to be rude, but I'm surprised that isn't self-evident. As we seek to create understanding machines and systems capable of what we ourselves can do, studying how reasoning and emotion interplay in the context of artificial intelligence helps build a wider picture, and that additional view may influence how we put together things like more empathetic robots, or anything driven by synthetic understanding.
AI researchers are indeed aiming to build effective reasoners first and foremost, but effective reasoning itself is deeply intertwined with emotional and affective processes, as decades of neuroscience research have demonstrated. Reasoning doesn't occur in isolation: human intelligence isn't some purely abstract, disembodied logic engine. The research I cited shows it's shaped by affective states and emotional frameworks. Understanding these interactions should open new paths toward richer, more flexible artificial understanding engines. Obviously this doesn't mean immediately chasing qualia or feelings for their own sake; it's just important to recognize that human reasoning emerges from integrated cognitive and emotional subsystems.
Surely ignoring decades of evidence on how emotional context shapes human reasoning limits our vision, narrowing the scope of what AI could ultimately achieve?