Hacker News

applfanboysbg · today at 7:38 AM

This is a horrendous take. The only thing this is going to do (and is already doing) is accelerate people's creation of their own reality bubbles. LLMs are not some source of objective truth; they will inevitably lean toward reinforcing either (1) the prompter's position, (2) the model trainer's position, or (3) the statistically average position, none of which is guaranteed to be logically correct. But people do take them as objective truth, so now we have a bunch of fucking morons going around saying "see, ChatGPT says so, I'm right!".