Perhaps, just perhaps, LLMs are just statistical models that literally can't create anything novel, so any structure they write must have been learnt from human writing?
What kind of human writing has "it's not X—it's Y" in every single paragraph?
The answer is: none. LLMs haven't accurately modeled human writing for years. Current models have been smacked over the head with the coding RLHF bat so many times that they all write distinctly inhuman text.