The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we already want to hear.
Ask any model why something is bad, then separately ask why the same thing is good; it will happily argue either side. These tools aren't fit for any purpose other than regurgitating stale reddit conversations.
>"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."
I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum and similarly gets group appeasement, or hears what they want to hear, from people who self-selected into that forum because they're all-in on the topic and Want To Believe, so to speak.
What do you feel is the actual difference, in that sense, between that forum or subreddit and an LLM?
<edit> And let me add that I don't mean this argumentatively. I'm genuinely trying to square the idea that ChatGPT, in this case, is fundamentally different from going to a forum full of fans of the topic who are completely biased and likely just as poorly informed.