Hacker News

raegis last Sunday at 8:22 PM

> I don't think so. An LLM by default is not trained to be "good"; it's trained to be accurate.

I wouldn't use the word "accurate", since an LLM generates language by sampling from probabilities. For example, it occasionally gets basic arithmetic wrong. I'm sure the AI companies would say they are training for "accuracy", but the actual output of the models says otherwise.
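A minimal sketch of why sampling produces occasional wrong answers (the probability values here are made up for illustration, not taken from any real model): even when the correct next token is overwhelmingly likely, a nonzero tail means the wrong token is sometimes emitted.

```python
import random

# Toy next-token distribution after the prompt "2 + 2 =".
# These probabilities are invented for illustration.
next_token_probs = {"4": 0.97, "5": 0.02, "3": 0.01}

def sample_next_token(probs, rng):
    """Pick a token by sampling the distribution, not by computing math."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]
# Most samples are "4", but a small fraction are confidently wrong.
print(samples.count("4"), "of", len(samples), "answers were correct")
```

The same mechanism that makes the output fluent (picking plausible continuations) is what makes "accurate" a slippery word: the model is scoring plausibility, not verifying arithmetic.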


Replies

Terr_ last Sunday at 10:57 PM

The problem isn't the word itself, the problem is people mixing up what it's accurate at. (Not helped by companies with a profit motive to encourage the confusion.)

Namely, LLMs are accurate at appending to a document things that "fit" what could go there.