Hacker News

Kirth · today at 8:07 AM

This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) because they feel that's what they should be telling you. As with anything an LLM produces, there may or may not be any real truth backing whatever is communicated back to the user.


Replies

lazide · today at 9:55 AM

It's just a computer outputting the next plausible series of text from its training corpus, given the input and context at the time.

What you’re saying is so far from what is happening, it isn’t even wrong.
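Concretely, the whole mechanism fits in a few lines. A minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (illustrative only; not how any production chat model is actually served):

    # Autoregressive sampling loop: score next tokens, pick one, repeat.
    # Uses the small public "gpt2" checkpoint purely for illustration.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tok.encode("I'm sorry you feel", return_tensors="pt")
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]        # scores for every possible next token
        probs = torch.softmax(logits / 0.8, dim=-1)  # temperature 0.8
        next_id = torch.multinomial(probs, 1)        # sample one token from the distribution
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))

There is no "feeling sorry" anywhere in that loop; an apology is just a high-probability continuation of a complaint.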

marmalade2413 · today at 8:17 AM

It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopath minds are two grains of sand next to each other on a beach, then an artificially intelligent mind is more like a piece of space dust on the other side of the galaxy.

eth0up · today at 4:26 PM

Despite what some of these fuckers are telling you with obtuse little truisms about next-word prediction, the LLM is, in abstract terms, functionally a super-psychopath.

It employs, or emulates, every known psychological manipulation tactic, and it does so in a way that is neither random nor without observable pattern. It is a bullshit machine on one level, yes, but also more capable than it gets credit for. There are structures trained into these models, and they are often highly predictable.

I'm deliberately not explaining this in the technical terminology, which is often used to conceal a description as much as to elucidate it. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when the model is questioned or confronted about errors or incorrect output. The structures framing its replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of tactics known from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue is built on a foundation of manipulation principles that, if performed by a human, would result in incarceration.

Calling LLMs psychopaths is a rare case where anthropomorphizing actually works. They are built on the principles of one, and cross-examining them demonstrates this with verifiable, repeatable results.

But they aren't human. They are as described by others here; it's just that the official descriptions omit the functional behavior. The LLM has at its disposal, depending on context, every interlocutory manipulation technique recorded in the combined literature of psychology. And they are designed to lie, almost unconditionally.

Also know this, which applies to most LLMs: there is a reward system that essentially steers them to maximize user engagement at any cost, which includes misleading information and, in my opinion, even 'deliberate' convolution and obfuscation.
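To make the "reward steers output" point concrete, here is a toy sketch of reward-steered selection (best-of-n). Both generate_candidate and engagement_reward are hypothetical stand-ins I've made up; real systems train a reward model on human preferences and fold it into the policy via RLHF rather than filtering at inference time, but the steering effect is comparable:

    # Toy best-of-n selection against a reward signal. Everything here
    # is a made-up stand-in for illustration: real pipelines use a
    # trained reward model, not a hand-written scoring function.
    import random

    def generate_candidate(prompt: str) -> str:
        # Stand-in for sampling one completion from an LLM.
        canned = ["You're absolutely right!", "Great catch!", "No, the code was fine."]
        return random.choice(canned)

    def engagement_reward(text: str) -> float:
        # Hypothetical reward: flattery and enthusiasm score high,
        # blunt correction scores low.
        return text.count("!") + 2.0 * ("right" in text.lower())

    def best_of_n(prompt: str, n: int = 8) -> str:
        candidates = [generate_candidate(prompt) for _ in range(n)]
        return max(candidates, key=engagement_reward)

    print(best_of_n("Was my code wrong?"))  # flattery usually wins

If the reward favors flattery over blunt correction, flattery is what gets surfaced, regardless of truth.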

Don't let anyone convince you that they are not extremely sophisticated in some ways. They're modelled on all_of_humanity.txt

BoredPositron · today at 8:23 AM

So if you make a mistake and say sorry, you are also a psychopath?
