Do LLMs lie? Consider a situation in a screenplay where a character lies, compared to one where the character tells the truth. It seems likely that LLMs can distinguish these situations and generate appropriate text. Internally, a model can represent “the current character is lying now” differently from “the current character is telling the truth.”
And earlier this year there was some interesting research published about how LLMs have an “evil vector” that, if activated, gets them to act like stereotypical villains.
So it seems pretty clear that characters can lie even if the LLM’s task is just “generate text.”
This is fiction, like playing a role-playing game. But we are routinely talking to LLM-generated ghosts, and the “helpful, harmless” AI assistant is not the only ghost a model can conjure up.
It’s hard to see how role-playing could be all that harmful to a rational adult, but there are news reports that for some people, it definitely is.