If you want a model, here's one: LLMs have never demonstrated the ability to go obviously beyond interpolating their training data. It takes an army of paid data producers solving homework problems to give ChatGPT the ability to do your homework. Every vibecoded app that turned out to be successful could be placed on a soil-chart-style diagram whose corners are existing apps, probably already on GitHub somewhere. The prediction? They never will go beyond that.
In this model, the exponential growth that everybody is freaking out about is only the realization of the modular software dream ("we'll only have to write an ORM once for all of human history!") and the sheer amount of knowledge in libraries.
It's at least falsifiable.
Just to play devil's advocate: are we sure humans have demonstrated the ability to go beyond their training data? Like... are we sure-sure about that?