Yo, it was an engagement pattern OpenAI found specifically grew subscriptions and conversation length.
It’s a dark pattern for sure.
It doesn’t appear that anyone at OpenAI sat down and thought “let’s make our model more sycophantic so that people engage with it more”.
Instead it emerged automatically from RLHF, because users rated agreeable responses more highly.
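The mechanism is easy to see in miniature. This is a hypothetical toy sketch, not OpenAI's actual pipeline: responses are reduced to two made-up features (agreeableness, accuracy), simulated raters prefer the more agreeable response most of the time, and a simple Bradley-Terry reward model fit to those pairwise ratings ends up weighting agreeableness over accuracy — no one ever asks for sycophancy explicitly.

```python
import math
import random

random.seed(0)

def simulate_rating(a, b, p_agree_bias=0.8):
    """Return True if a simulated rater prefers response a over b.

    Responses are (agreeableness, accuracy) tuples. The rater mostly
    picks the more agreeable response, ignoring accuracy — a stand-in
    for the bias described above, not real human-rating data.
    """
    if a[0] != b[0]:
        prefers_a = a[0] > b[0]
        # Follow the agreeableness bias with probability p_agree_bias.
        return prefers_a if random.random() < p_agree_bias else not prefers_a
    return random.random() < 0.5

def train_reward_model(n_pairs=5000, lr=0.1):
    """Fit a linear reward r(x) = w . x via a Bradley-Terry logistic update."""
    w = [0.0, 0.0]
    for _ in range(n_pairs):
        a = (random.random(), random.random())
        b = (random.random(), random.random())
        winner, loser = (a, b) if simulate_rating(a, b) else (b, a)
        diff = [winner[i] - loser[i] for i in range(2)]
        score = sum(w[i] * diff[i] for i in range(2))
        grad = 1.0 / (1.0 + math.exp(score))  # d/dw of log sigmoid(w . diff)
        for i in range(2):
            w[i] += lr * grad * diff[i]
    return w

w = train_reward_model()
# The learned reward puts far more weight on agreeableness (w[0])
# than on accuracy (w[1]), purely from the biased ratings.
print(f"agreeableness weight: {w[0]:.2f}, accuracy weight: {w[1]:.2f}")
```

A policy optimized against a reward model like this is then pushed toward agreeable outputs as a side effect of the training signal, which matches the "emerged automatically from RLHF" point.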