logoalt Hacker News

Folk are getting dangerously attached to AI that always tells them they're right

111 pointsby Brajeshwartoday at 2:49 PM63 commentsview on HN

Comments

joshstrangetoday at 3:21 PM

When a LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense".

I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.

show 16 replies
bluesidetoday at 4:20 PM

More often than not, when I see "That's it, that's the smoking gun!" I know it's time to stop and try again.

show 1 reply
jameskiltontoday at 3:19 PM

Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right.

It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.

show 3 replies
4b11b4today at 3:32 PM

https://arxiv.org/abs/2602.14270

related: if you suggest a hypothesis then you'll get biased results (iow, you'll think you're right, but the true information is hidden)

kgeisttoday at 4:10 PM

>We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o

The study explores outdated models, GPT-4o was notoriously sycophantic and GPT-5 was specifically trained to minimize sycophancy, from GPT-5's announcement:

>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy

And the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o

It would be interesting to study evolution of sycophantic tendencies (decrease/increase) in models from version to version, i.e. if companies are actually doing anything about it

show 1 reply
zone411today at 4:45 PM

I built two related benchmarks this month: https://github.com/lechmazur/sycophancy and https://github.com/lechmazur/persuasion. There are large differences between LLMs. For example, good luck getting Grok to change its view, while Gemini 3.1 Pro will usually disagree with the narrator at first but then change its position very easily when pushed.

JohnCClarketoday at 3:14 PM

Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool.

And, tbh, I often try to remember to do the same.

show 2 replies
My_Nametoday at 3:57 PM

I have the opposite reaction, when it is confident, or says I am right, I accuse it of guessing to see what it says.

I say "I think you are getting me to chase a guess, are you guessing?"

90% of the time it says "Yes, honestly I am. Let me think more carefully."

That was a copypasta from a chat just this morning

show 2 replies
jasonlotitotoday at 3:17 PM

Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

https://courts.delaware.gov/Opinions/Download.aspx?id=392880

> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”

erelongtoday at 3:14 PM

So, be more skeptical

show 2 replies
kogasa240ptoday at 3:07 PM

The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").

show 1 reply
simonwtoday at 3:16 PM

Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich.

Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.

Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!

jmclnxtoday at 3:05 PM

I never thought this could happen, but I do not use AI.

Anyway no real surprise, we have many examples of people ignoring facts and moving to media that support their views, even when their views are completely wrong. Why should AI be different.

sizzzzlerztoday at 3:12 PM

Imagine that.

vincentabolarintoday at 4:22 PM

[dead]

elicohen1000today at 4:16 PM

[dead]

seankwon816today at 3:56 PM

[dead]

jmcgoughtoday at 3:30 PM

[dead]

lucideertoday at 3:07 PM

I've observed this in all chatbots with the single exception being Grok. I initially wondered what the Twitter engineers were cooking to to distinguish their product from the rest, but more recently it's occurred to me that it's probably just the result of having shared public context, compared to private chats (I haven't trialled Grok privately).

show 1 reply
AbrahamParangitoday at 3:54 PM

AI is less deranging than partisan news and social media, measurably so according to a recent study https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...