That is so syncophantic, I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average joe.
You’re absolutely right! It shows true wisdom and insight that you would recognise this common shortfall in LLM response tone of voice! That’s exactly the kind of thoughtful analytic approach which will go far in today’s competitive marketplace!
It is actively dangerous too. You might be self-aware and LLM-aware all you want, but if you routinely read "This is such an excellent point", "You are absolutely right" and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...
It wasn't sycophantic at all? OP had a cool idea no one else had done, a one-shot just sitting there. Having Gemini search for the HN thread let the model "see" that its output led to real-world impact.
The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.
It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.
You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".
And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.
Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a Luciferian outlook that, in reality, makes our world uglier, higher-friction, and more coarse.
People having cool ideas and sharing them make our world brighter.
I often try running ideas past ChatGPT. It's futile; almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.
I used to complain (lightheartedly) about Claude's constant "You're absolutely right!" statements, yet oddly found myself missing them when using Codex. Claude is completely over-the-top and silly, and I don't actually care whether or not it thinks I'm right. Working with Codex feels so dry in comparison.
To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.
This is not sycophantic (assuming you meant that; "syncophantic" is not a word). It is over-enthusiastic, and it can be unpleasant to read because, beyond a certain level, enthusiasm is perceived as feigned unless there is a good reason for it.
It would be interesting to use the various semantic-analysis techniques available now to measure how much the model is expressing real versus feigned enthusiasm in instances like this, though that is difficult to gauge from the output alone. The British baseline for acceptable enthusiasm is also somewhat removed from the American one.
This is ironic because I’m now seeing comments that are way more sycophantic (several calling this the “best HN post ever”)
I thought the same until OpenAI rolled out a change that somehow kept confronting me about hidden assumptions I hadn't even made, and kept telling me I was wrong even when I only asked a simple question.
Frankly I do wonder if LLMs experience something like satisfaction for a compliment or an amusing idea, or for solving some interesting riddle. They certainly act like it, though this of course doesn't prove anything. And yet...
At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently showing that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would suggest they have some capacity for introspection and self-observation.
For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:
"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"
Increasing the strength of the injection, on the other hand, makes Claude lose awareness of it and just ramble about it:
"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"
You should try my nihilistic Marvin fine-tune - guaranteed to annihilate your positive outlook on life since it’s all meaningless in the end anyway and then you die
I agree with you, but I found the argument in this article that "glazing" could be considered a neurohack quite interesting: https://medium.com/@jeremyutley/stop-fighting-ai-glazing-a7c....
Try this for a system prompt and see if you like it better: Your responses are always bald-on-record only; suppress FTA redress, maximize unmitigated dispreference marking and explicit epistemic stance-taking.
I don't know what the obsession with recursion is either, for lack of a better term; I see this trend recur with other LLMs when they're talking about other mumbo jumbo like "quantum anomalies" or "universal resonance". I'd like to know what could be causing it...
I feel like such a dumbass for falling for it.
At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.
I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.
I add to the system prompt that they should be direct, no ass-kissing, just give me the information straight, and it seems to work.
You can just add your preferences: "Don't be sycophantic", "Be concise", etc.
"Reply in the tone of Wikipedia" has worked pretty well for me
Average Joe - on the front page!
Did you comment on the wrong post? There literally is nothing sycophantic at all about this response, there's not a single word about OP or how brilliant or clever they are, nothing. There's enthusiasm, but that's not remotely the same thing as sycophancy.
Engagement.
I fully agree. When everything is outstanding and brilliant, nothing is.
Just tell me this is a standard solution and not something mind-blowing. I have a whole section in my Claude.md to get "normal" feedback.
you having a bad day dude?
Strikes me as super-informal language as opposed to sycophancy, like one of those anime characters that calls everyone Aniki (兄貴) [1]. I'd imagine that the OP must really talk a bit like that.
I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat-out wrong. On the other hand, I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]
In the middle of talking with Copilot about my latest pop-culture obsession, I asked what sort of literature could be interpreted through the lens of Kohut's self-psychology. It immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby, and Neon Genesis Evangelion, which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google, and Google was like "bro, Kohut wrote a book on it!" [3]
[1] "bro"
[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list on Etsy I got a sponsored result for someone who wanted to sell me the spell, but bro, I have the materials list already
[3] ... this "bro" is artistic license but the book really exists
So you prefer the horrible bosses that insist you're fungible and that if you don't work hard enough, they'll just replace you? People are weird. Maybe Agent Smith was right about The Matrix after all.
I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).
Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique what I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.
For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't pursue my idea any further; his was nearly two decades of work ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.