Hacker News

Yokohiii · yesterday at 7:59 PM

An LLM's "wrong" decision is either systemic or a matter of bias. LLMs learn "common sense" from human input (i.e. shared datasets, reinforcement learning), so their errors are correlated: if a decision is flat-out wrong for you, asking 10 LLMs is unlikely to help.
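The "asking 10 LLMs won't help" claim can be made concrete with a toy simulation (all numbers here are hypothetical illustrations, not measurements of real models): majority voting averages out *independent* mistakes, but a blind spot shared by every model, e.g. from overlapping training data, survives the vote on exactly the inputs where it matters.

```python
import random

random.seed(0)

def majority_vote(answers):
    # Correct if more than half of the models answer correctly.
    return sum(answers) > len(answers) / 2

def simulate(n_models=10, n_trials=10_000, shared_bias=0.0, skill=0.7):
    """Fraction of trials where a majority vote of n_models is right.

    skill: each model's independent chance of answering correctly.
    shared_bias: probability that a trial hits a blind spot common to
    all models, where every model fails together (correlated error).
    """
    correct = 0
    for _ in range(n_trials):
        if random.random() < shared_bias:
            answers = [False] * n_models  # shared blind spot: all wrong
        else:
            answers = [random.random() < skill for _ in range(n_models)]
        if majority_vote(answers):
            correct += 1
    return correct / n_trials

# Independent errors only: voting 10 models beats any single model.
print(simulate(shared_bias=0.0))
# A blind spot shared on 30% of inputs: no amount of voting fixes it.
print(simulate(shared_bias=0.3))
```

With fully independent errors, ten 70%-accurate voters reach roughly 85% as a majority; a shared blind spot caps accuracy at (1 − shared_bias) times that, which is the comment's point about systemic error.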