The whole “chat with an AI” paradigm is the culprit here. It primes people to think they are actually having a conversation with something that has a mental model.
It’s just a text generator that generates plausible text for this role play. But the chat paradigm is genuinely useful to the human: chat is a natural I/O interface for us.
> It’s just a text generator that generates plausible text for this role play.
Often enough, that text is extremely plausible.
I pin just as much responsibility on people not taking the time to understand these tools before using them. RTFM basically.
I think the mindset you have to have is "it understands words, but has no concept of physics".
I disagree that it’s “just a text generator,” but you are so right about how primed people are to think they’re talking to a person. One of my clients has gone all-in on openclaw, and my god, the misunderstanding is profound. When I pointed out a particularly serious risk he’d opened up, he said, “it won’t do that, because I programmed it not to.” No, you tried to persuade it not to, with a single instruction buried in a swamp of markdown files that the agent is itself changing!