Hacker News

liampulles, last Monday at 7:31 AM

It seems to me that Claude's error here (which is not unique to it) is self-sycophancy. The model is too eager to convince itself it did a good job.

I'd be curious to hear from experienced agent users whether there are AGENTS.md instructions that make the LLM more plainspoken about its results. I also wonder whether that would affect the quality of the work. For illustration, a minimal sketch of the kind of guidance I mean (the wording is hypothetical, not a tested recipe):
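
    # AGENTS.md (hypothetical excerpt)

    ## Reporting style
    - Report outcomes plainly; avoid self-congratulatory summaries.
    - Do not describe a change as "done" or "working" unless it was
      verified (tests run, output inspected).
    - If something was not verified, say so explicitly, e.g.
      "the change compiles, but I have not run the tests."
    - List known limitations and open questions at the end of each task.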


Replies

aprilfoo, last Monday at 9:13 AM

> It seems to me that Claude's error here (which is not unique to it) is self-sycophancy. The model is too eager to convince itself it did a good job.

It seems this applies to the whole AI industry, not just LLMs.