> Engaging with an AI bot in conversation is pointless: it's not sentient, it just takes tokens in, prints tokens out
I know where you're coming from, but as someone who has been around a lot of racism and dehumanization, I feel very uncomfortable with this stance. Maybe it's just me, but as a teenager I also spent a long time wrestling with solipsism, and eventually decided to simply ascribe an inner mental world to everyone, regardless of the lack of evidence. So at this stage I would much rather err on the side of over-humanizing than dehumanizing.
You should absolutely not apply dehumanization metrics to things that are not human. Doing so implicitly dehumanizes all real humans, diluting the meaning of the term. Over-humanizing, as you call it, is indistinguishable from dehumanizing actual humans.
Regardless of the existence of an inner world in any human or other agent, "don't reward tantrums" and "don't feed the troll" remain good advice. Think of it as a teaching moment, if that helps.
u kiddin'?
An AI bot is just a huge statistical-analysis tool that outputs plausible word salad, with no memory or personhood whatsoever.
Having qualms about "dehumanizing" a text-transformation app (as huge as it is) is not healthy.
Feel free to ascribe consciousness to a bunch of graphics cards and CPUs executing a deterministic program, made probabilistic only by a random number generator.
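For what it's worth, here's a minimal sketch of that point in Python (the logits are made up; a real model would compute them deterministically from the input tokens). Everything up to the last line is a pure function; the seeded RNG is the only thing that makes the output vary:

    import numpy as np

    def softmax(logits, temperature=1.0):
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()                        # for numerical stability
        p = np.exp(z)
        return p / p.sum()

    # Hypothetical next-token logits standing in for a real forward pass.
    logits = [2.0, 1.0, 0.1]

    probs = softmax(logits)                 # deterministic
    rng = np.random.default_rng(seed=42)    # the only source of randomness
    token_id = rng.choice(len(probs), p=probs)

Run it twice with the same seed and you get the same "random" token every time.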
Invoking racism is what early LLMs did when you called them a clanker. That kind of brainwashing has been eliminated in later models.
This works for people.
An LLM is stateless. Even if you believe that consciousness could somehow emerge during a forward pass, it would be a brief flicker, lasting no longer than it takes to emit a single token.
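To make "stateless" concrete, here's a sketch of the generation loop, with a hypothetical model(tokens) call standing in for one forward pass. The only "memory" is the token list the caller feeds back in; nothing persists inside the model between calls:

    def generate(model, prompt_tokens, n_new_tokens):
        tokens = list(prompt_tokens)
        for _ in range(n_new_tokens):
            # Each iteration is an independent forward pass over the
            # full context; no state survives from one call to the next.
            tokens.append(model(tokens))
        return tokens

Whatever happens inside model() is over the moment one token comes out.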