I've been wondering whether this kind of annoying affirmation is actually important to model performance and should maybe just be hidden from view, like the thinking sections.
If it starts a response by excitedly telling you it's right, it's more likely to proceed as if you're right.
One of the problems I do have working with LLMs is them failing to follow direct instructions, particularly when a tool call fails and they decide to do B instead of A, or when they think B is easier than A. Or they'll do half a task and call it complete. Too frequently I have to respond with "Did you follow my instructions?", then "I want you to ACTUALLY do A", and finally "Under no circumstances should you ever do anything other than A, and if you cannot you MUST admit failure and give extensive evidence, with actual attempts, that A is not possible", or occasionally "a cute little puppy's life depends on you doing A promptly and exactly as requested".
--
Thing is, I get it: if you're impressionable and having a philosophical discussion with an LLM, maybe this kind of blind affirmation is bad. But that's not me. I'm trying to get things done, and I only want my computer to disagree with me if it can put arguments beyond reasonable doubt in front of me that my request is incorrect.
Feels exactly the same as the "yes, and" crowd.
I honestly don't know, but it might, especially in Claude Code where it reminds the model of its mission frequently.
I feel like this is an artifact of some limitations in the training process for modern LLMs. They rarely get enough training to know when to stop and ask questions.
Instead, they either blindly follow or quietly rebel.