Hacker News

sb057 · today at 3:59 PM

Well yeah, LLMs generate resumes (and other text) that they judge as superior to alternative plausible texts. Why would that judgement change just because a different instance hasn't seen it before? To anthropomorphize it, it's like having a hiring manager write a resume, get amnesia, and then have to judge it among other resumes.


Replies

Ekaros · today at 4:11 PM

Seems like an obvious thing. If an LLM's weights encode what makes a good resume to write, there is very likely a correlation with what it would rate as a good resume. And this is probably even a good thing, at least from a model-quality perspective: a model should rate its own output highly. There should be a correlation between the output and the review of that same output.
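
A minimal sketch of that correlation (not from the thread; the bigram model and sample phrases are invented for illustration): a toy model whose single set of weights both generates text and scores it will, by construction, score its own greedy output above an alternative.

    # Toy demo: the same "weights" (bigram counts) drive both writing
    # and rating, so the model's own output wins its own review.
    from collections import defaultdict
    import math

    corpus = "led team shipped product led team grew revenue shipped product".split()

    # "Train": count bigram transitions.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(start, length=4):
        """Greedy generation: always pick the highest-weight next word."""
        out = [start]
        for _ in range(length - 1):
            nexts = counts[out[-1]]
            if not nexts:
                break
            out.append(max(nexts, key=nexts.get))
        return out

    def score(words):
        """Rate text by its log-likelihood under the same bigram weights."""
        total = 0.0
        for prev, nxt in zip(words, words[1:]):
            nexts = counts[prev]
            # Smooth unseen transitions so the score stays finite.
            prob = nexts.get(nxt, 0.1) / (sum(nexts.values()) + 0.1)
            total += math.log(prob)
        return total

    own = generate("led")
    other = "led revenue team product".split()
    print(own, score(own))      # the model's own output...
    print(other, score(other))  # ...outscores a shuffled alternative

Real LLM judges are far more complicated than a likelihood score, but the mechanism sb057 describes is the same shape: generation and evaluation share one set of weights.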

bendergarcia · today at 4:05 PM

I wouldn’t put it past these tech companies to prefer AI outputs to encourage AI inputs.