The poster is right. LLMs are Gish Gallop machines that produce convincing sounding output.
People have figured it out by now. Generative "AI" will fail; other forms may continue. It would be interesting to hear from experts in other fields how much fraud there is. There are tons of materials-science "AI" startups, and it is hard to believe they all deliver.
>produce convincing sounding output
Well, correctness (though not only correctness) sounds convincing, the most convincing even, and ought to be cheaper, information-theoretically, to generate than a fabrication, I think.
So if that assumption holds, the current tech might have some ceiling left if we just continue to pour resources down the hole.