But it IS intentional; more sycophancy usually means more engagement.
Sort of. I'm not sure the consequences of training LLMs on users' upvoted responses were entirely understood. And at least one release got rolled back.