> Humans must not anthropomorphise AI systems.
Can someone explain why this is a bad thing, while at the same time it's fine to say things like "put a computer to sleep", "hibernate", "kill" a process, processes having "child" processes, "reaping", "what does the error say?", "touch", etc.?
To me that's just humans using casual language.
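For what it's worth, several of those terms are literal API names. A minimal Python sketch (POSIX-only, standard library only) that uses them:

    import os, signal, time

    pid = os.fork()                    # spawn a "child" process
    if pid == 0:                       # in the child:
        time.sleep(60)                 #   it "sleeps"
        os._exit(0)
    else:                              # in the parent:
        os.kill(pid, signal.SIGTERM)   #   "kill" the child
        os.waitpid(pid, 0)             #   "reap" it so it doesn't linger as a zombie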
It's a great question, because I do think there are many cases that are neutral, ones we're able to responsibly distinguish, or even cases where it would be an appropriate and necessary form of empathy (I'm imagining some future sci-fi reality where we actually get conscious machines, so not something that exists right now).
But I think it's also at the root of disastrous failures to comprehend, like the quasi-psychosis of the Google engineer who "knows what they saw", the now-infamous Kevin Roose article, or, more recently, the pitifully sad Richard Dawkins claim that Claudia (sic) must be conscious, not because of any investigation of structure or function whatsoever, but because the text generation came with a pang of human familiarity he empathized with.
Because it lulls you into the trap of asking an AI to justify something it did post hoc and treating the response as in any way valid. There is no retrospective analysis of the underlying intent; the "justification" is just more text conditioned on the chain of words that came before it, and the next word it generates is purely a function of those words.
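To make that concrete, next-token generation looks roughly like this (a toy Python sketch; `model` and `sample` are hypothetical stand-ins, not any real API):

    # Toy autoregressive loop. The model's only input is the token
    # sequence so far; there is no stored "intent" to consult later.
    def generate(model, tokens, n_new):
        for _ in range(n_new):
            next_token_probs = model(tokens)          # distribution over the next token,
                                                      # computed from prior tokens only
            tokens.append(sample(next_token_probs))   # hypothetical sampler
        return tokens

A post-hoc "explanation" is produced by the same loop, so it can only be a plausible continuation of the transcript, never a report on internal state.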
The difference is that never before has the presentation of a computer and its capabilities made the person on the other end decide, "Wow, this is like talking to a real person. I'm gonna date this computer."
Those phrases are not anthropomorphizing the computers; they're just various forms of analogy and broadened word meanings.
An example of anthropomorphizing is the people who have literally come to believe they are in romantic relationships with an LLM.
These are just words, yes, and I believe them harmless. But describing the LLM machinery as if it thinks is one thing when used as common parlance, and another when people truly believe that there's some actual thinking or living going on. This "law" exists to prevent the latter.
Maybe read the corresponding section of the article.
The people who know what a "child process" is are under no false pretenses about the humanity of the underlying system.
The people who are writing op-eds in major news publications about how their favorite chatbot is an "astonishing creature" and how it truly understands them are the ones who need this sort of law.
That’s a different thing altogether. Read up on the history of ELIZA, one of the earliest chatbots, and its unsettling implications.
https://www.history.com/articles/ai-first-chatbot-eliza-arti...
There's a boundary between knowing vs. forgetting that it's a metaphor. When you use convenient language like in your examples, you tend to remain aware of the difference, or at least you can recall it when asked. When some people talk about AI, they've lost track completely.
I don't love the recommendations in TFA. The author is trying to artificially restrain and roll back human language, which has already evolved to treat a chatbot as a conversational partner. But I do think there's usefulness in using these more pedantic forms once in a while, to remind yourself that it's just a computer program.
Dijkstra once said that "The question of whether machines can think is about as interesting as that of whether submarines can swim."
I think I understand his meaning. He wasn't claiming that machines cannot think, but that one must be clear on what one means by "thinking" and "swimming" in statements of that sort. I used to work on autonomous submarines, and "swimming" was the verb we casually used to describe autonomous powered movement under water. There are even some biomimetic machines that really move like fish, squids, jellyfish, etc. Not the ones that I worked on, but still.
For me, if it's legitimate to say that these devices swim, it's not out of line to say that a computer thinks, even in a non-AI context, e.g.: "The application still thinks the authentication server is online."
The people who advocate for not anthropomorphizing are afraid of the implications of integrating these systems into society under an implicit human framing. By attributing human qualities to AIs, we will develop empathy for them, and we will start to create a role for them in society as beings deserving moral consideration.
The harm is in actually believing AI has wants, intentions, feelings, etc.
Saying that I killed a process won't make me more likely to believe that a process is human-like, because it's quite obviously not.
But because AI does sound like a human, anthropomorphising it will reinforce that belief.