I don't know why you think that's the case regarding text models. If that were the case, there would be articles on here written entirely by generative AI and nobody would know the difference. It's pretty obvious that's not happening yet, not least because I know what kind of slop state-of-the-art generative models still produce when you give them open-ended prompts.
Ironic how this comment exemplifies the issue: broad claims about "slop" output, but no specific examples or engagement with current architectures. Real discussions here usually reference benchmarks or implementation details.
(from Claude)