
hexaga · today at 1:30 AM

It's really simple. RL on human evaluators selects for this kind of 'rhetorical structure with nonsensical content'.

Train on a thousand tasks with a thousand human evaluators, and you have trained a thousand times on 'affect a human' but only once on any given task.

By necessity, you get outputs that make lots of sense in the space of general patterns that affect people, but not in the object-level reality of what's actually being said. The model has been trained 1000x more on the former.
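The counting argument can be made concrete with a toy tally (hypothetical feature names, purely illustrative of the update counts, not anyone's actual training setup):

```python
# Toy sketch: tally how often each reward "feature" receives a training
# signal across an RLHF-style run over many tasks.
# Assumption: every episode rewards the shared affect-the-evaluator
# feature, while each task-specific correctness feature is rewarded
# only on its own task.
from collections import Counter

n_tasks = 1000
updates = Counter()

for task_id in range(n_tasks):
    # Every human evaluation rewards style that sways the rater...
    updates["affect_the_evaluator"] += 1
    # ...but only this one task's correctness feature.
    updates[f"task_{task_id}_correctness"] += 1

print(updates["affect_the_evaluator"])    # → 1000
print(updates["task_42_correctness"])     # → 1
```

The shared feature accumulates a thousand updates; any single task's feature gets one. That asymmetry is the whole argument.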

Put another way: the framing is hyper-sensical while the content is gibberish.

This is a very reliable tell for AI generated content (well, highly RL'd content, anyway).
