
Dilettante_ · yesterday at 5:46 PM

>produce convincing sounding output

Well, correctness (though not only correctness) sounds convincing, the most convincing even, and ought to be cheaper in information-theoretic terms to generate than a fabrication, I think.

So if that assumption holds, the current tech might have some headroom left if we just keep pouring resources down the hole.
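
To sketch that intuition (a hand-wavy framing, not a theorem): under a model p, emitting a string x costs about -log2 p(x) bits. If training has compressed real-world regularities well, the true continuation sits at high probability, so -log2 p(x_true) < -log2 p(x_fabricated) for typical fabrications, which also have to spend bits inventing, and staying consistent with, their own made-up details.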


Replies

knollimar · yesterday at 6:27 PM

How do LLMs do on things that are common confusions? Do they specifically have to be trained against them? I'm imagining a Monty Hall variant that isn't in the training set tripping them up, the same way a full wine glass trips up image generators.
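
For instance (a rough sketch, nothing model-specific: the "Monty Fall" variant, where the host opens one of the other doors at random instead of deliberately revealing a goat):

    import random

    def trial(host_knows, rng=random):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        if host_knows:
            # Classic Monty Hall: host deliberately opens a goat door.
            opened = rng.choice([d for d in doors if d not in (pick, car)])
        else:
            # "Monty Fall": host opens one of the other doors at random;
            # discard rounds where the car is accidentally revealed.
            opened = rng.choice([d for d in doors if d != pick])
            if opened == car:
                return None
        switched = next(d for d in doors if d not in (pick, opened))
        return switched == car  # True iff switching wins

    def switch_win_rate(host_knows, n=100_000):
        results = [trial(host_knows) for _ in range(n)]
        valid = [r for r in results if r is not None]
        return sum(valid) / len(valid)

    print("host knows: ", switch_win_rate(True))   # ~0.667
    print("random host:", switch_win_rate(False))  # ~0.500

Switching wins ~2/3 in the classic setup but only ~1/2 in the random-host variant, so the memorized "always switch" answer is exactly the kind of thing that stops being right when the problem is tweaked.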