Hacker News

forgetfreeman · today at 10:41 AM

You're reversing causality here. LLMs train on massive bodies of human-generated content. Constructs like the ones mentioned are an entirely unremarkable staple of long-form writing produced for audiences accustomed to consuming it.


Replies

mapt · today at 1:28 PM

The formula their responses converge on in basic explainer mode is pretty distinctive to a lot of us who are otherwise used to reading long-form writing.