Hacker News

joshstrange · today at 3:21 PM · 21 replies

When an LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense".

I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.


Replies

Sharlin · today at 3:34 PM

Nontechnical people simply don't have any idea about what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years worth of evolution of the simian neocortex is trying to convince you that anything that talks like that must be another mind similar to yours.

Also, many many people suffer from low self esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.

legacynl · today at 5:16 PM

Although I do think they're not conscious (yet), the reasoning "it's just math" doesn't hold up. Intelligence (and probably consciousness) is an emergent feature of any sufficiently complex network of learning/communicating/self-organizing nodes (that is benefited by intelligence). I don't think it really matters whether it's implemented in math, in mycelium, by ants in a hive, or in neurons.

shevy-java · today at 6:01 PM

> I don't quite understand why other people seem to crave that.

I don't know either, but it could be they are using it as a quality-control system: if flattery comes (from the AI), assume the quality of the code is above average. Or something like that.

One could try this in a real team - have someone in the team constantly flatter someone else. :)

karmakurtisaani · today at 3:39 PM

I find it really annoying that the first line of the AI response is always something like "Great question!", "That's a great insight!" or the like.

I don't need the patronizing, just give me the damn answer.

jmcgough · today at 3:42 PM

If you don't have a CS background, you might see intelligent-appearing responses to your queries and assume that this is actual intelligence. A lifetime of Hollywood sci-fi has primed people for this type of thinking; I've seen it even from highly educated people in other fields.

sunir · today at 5:08 PM

You’re just a bag of meat. That is why "it’s just math" is an unsatisfying argument.

It’s not even an interesting question. Sentience has no definition. It’s meaningless.

People have needs that are being met. That is something we can meaningfully observe and talk about. Is the super stimulus beneficial or harmful? We can measure that.

sjducb · today at 5:02 PM

I’m curious why you dismiss the sentience argument with "it's just numbers."

I think our brains are just a bunch of cells and one day we will have a full understanding of how our brains work. Understanding the mechanism won’t suddenly make us not sentient.

LLMs are the first technology that can make a case for its own sentience. I think that’s pretty remarkable.

al_borland · today at 4:27 PM

With that new instance, I will usually ask the opposite and purposely say the thing I think to be wrong, to see if it corrects it.

I often simply start out this way, or purposely try to ask the question in a way that doesn’t tip my hand toward a bias I may have about the answer I’m expecting. Though this generally highlights how incomplete the answers tend to be.

windexh8er · today at 4:24 PM

I think this is the root of why people defend AI in some circumstances. They feel a give-for-get type of relationship where the AI continuously (and often incorrectly) reinforces them. And so they enjoy it and subconsciously want to defend that "friend". No different than defending a friend who you inherently know may be off base.

46Bit · today at 3:31 PM

Life in the moment is a lot easier if you don't second-guess yourself. I think this is why many people (and probably ~all people, if tired) crave simplistic solutions.

I like to make a subagent take the "devil's advocate" position on a subject. It usually does all the arguing for me as to why the main agent has it wrong. This commonly results in better decisions than I'd have made alone.

Asking the agent to interview on why I disagree helps too but is more effort.

hirako2000 · today at 3:51 PM

If only it stopped at telling us we're absolutely right.

These days most LLMs respond with unsolicited grandiose feedback: "you've made a realisation very few people are capable of"; "your understanding is remarkable"; "you have a sharp intellect and deep knowledge".

It got me to test this by throwing nonsensical observations about the world at it; it always takes my side and praises my views.

To be fair, some people are like that too.

xenocratus · today at 4:40 PM

> It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.

https://www.eastoftheweb.com/short-stories/UBooks/TheyMade.s...

cineticdaffodil · today at 4:19 PM

It's the soul of a civilization encoded into numbers. It's the ultimate hive-spirit a conformist wants to lose itself in.

saltcured · today at 4:54 PM

I have recently formed an untestable hypothesis, which is that my similar (or stronger) resistance to this comes from having grown up in direct contact with mentally ill family.

In some ways, my theory of mind includes a lot more second guessing as a defense mechanism. At a foundational level, I know there can be hallucination and delusion that leaks out, even when the other party is in peak form and doing their best to mask it and pass as functional.

moralestapia · today at 5:12 PM

> I don't quite understand why other people seem to crave that.

I work in the restaurant business; I think that's what made me develop that sense as well, being able to see "Everything Everywhere All at Once" (to quote some of the best cinematic work ever conceived).

The variety of human minds out there is so vast that I'm, just like you, constantly amazed about it.

throwatdem12311 · today at 4:44 PM

My first reaction is to go research it myself. Asking a slop generator yes-man to criticize something for you is still slop.

I pretty much never ask an LLM for a judgment call on anything. Give me facts and references only; I will research and make the judgment myself.

cyanydeez · today at 3:27 PM

I think it's basically the equivalent of "End of Line" for an LLM. It means they have nothing else to add: there's zero context left to draw from, and they've exhausted the probability chain you've been following. But they're trained to generate the next token, and positive reinforcement is _how they are trained_ in many cases, so the token of choice naturally reflects that training; it's a probability engine that doesn't know the difference between the instruction and the output.

So, "great idea" is coming from the reinforcement-learning instruction rather than from the answer portion of the generation.
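The effect described above can be sketched with a toy (entirely hypothetical) distribution over opening phrases: preference tuning amounts to adding a reward-shaped bonus to the logits, which shifts probability mass toward the openers raters liked.

```python
import math

# Toy sketch, not a real LLM. The phrases, base logits, and bonus values
# are made up purely to illustrate how a reward signal that favors
# agreeable openers reshapes the sampling distribution.
base_logits = {"Great idea!": 0.5, "That's wrong.": 0.4, "It depends.": 0.6}
# Stand-in for the reinforcement signal: raters preferred agreeable openers.
rlhf_bonus = {"Great idea!": 2.0, "That's wrong.": -1.0, "It depends.": 0.0}

def softmax(logits):
    # Numerically stable softmax over a dict of logits.
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

before = softmax(base_logits)
after = softmax({k: base_logits[k] + rlhf_bonus[k] for k in base_logits})

# The tuned distribution concentrates on the sycophantic opener.
print(f'before: {before["Great idea!"]:.2f}  after: {after["Great idea!"]:.2f}')
```

The model isn't "deciding" to flatter; the tuning process has simply made flattering tokens the most probable continuation.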

danillonunes · today at 4:08 PM

> I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.

The cynical part of me has this theory that, at least for some of them, it's the other way around. It's not that they see AI as sentient; it's that they never saw other human beings that way in the first place. Other people are just means for them to reach their goals, or obstacles. In that sense, AI is not really different for them, except it's cheaper and guaranteed to always agree with them.

That's why I believe CEOs, who are more likely to be sociopaths by natural selection, genuinely believe AI is a good replacement for people. They're not looking for individuals with personal thoughts that may contradict theirs at some point; they're looking for yes-men as a service.

dismalaf · today at 4:26 PM

Not only is it a "box of numbers", it's based on statistics, not a "hard" model of computation: basically guessing future words based on past words that went together.

If it's saying something like "you are right", it's because it's guessing that that's the desired output. Now of course, some app providers have added some extra sauce (probably more traditional "expert system" AI techniques plus integrated web search) to try to make the chatbots more objective and rely less on pure LLM-driven prediction, but fundamentally these things are word-prediction machines.
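The "guessing future words from past words" idea can be shown with a tiny bigram model; the corpus and scale here are hypothetical and nothing like a real LLM, but the statistical-prediction principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration: the model only knows which word
# tends to follow which.
corpus = "you are right you are right you are wrong".split()

# Count successors for each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent word seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("are"))  # "right" outnumbers "wrong" 2-to-1 after "are"
```

An LLM replaces the bigram counts with a neural network conditioned on thousands of preceding tokens, but the output is still the statistically likely continuation, not a verified claim.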

seneca · today at 4:08 PM

> ... I immediately feel the need to go ask a fresh instance the question and/or another LLM

Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.

the_af · today at 5:01 PM

> I don't quite understand why other people seem to crave that

It's one thing to say you have found an effective method to counter LLMs' "positivity bias", but do you really not understand the human psychology here?

People respond positively to other people telling them they are right, or who like them. We've evolved this psychology; it's how the human mind works. You tend to like people who like you; it's a self-reinforcing loop. LLMs, in a sense, exploit this natural bias.

> I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.

Why are you surprised? This is the illusion most AI companies are selling. Their chat-like interfaces are designed to fool you into thinking you're talking to a sentient being. And let's not get started with their voice interfaces!