That is not necessarily true. That would be like arguing there is only a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can't just keep improving chemical rockets; at some point you need to switch to something else entirely.
The same could apply to LLMs: there may be a hard wall that the current approach simply can't breach.
If that's the case, then what's the wall?
The "walls" that stopped AI decades ago stand no more. NLP and commonsense reasoning (CSR) were thought by many to be the "final bosses" of AI - until they fell to LLMs. No new wall has taken their place.
The closest thing to a "hard wall" for LLMs is probably online learning? And even that isn't really a hard wall, because LLMs are good at in-context learning, which covers many of the same needs, and they can even do things like set up fine-tuning runs on themselves through a CLI.
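To make the in-context learning point concrete, here's a minimal sketch of what "learning" without weight updates looks like: new input/output pairs are packed into the prompt itself rather than trained into the model. `build_few_shot_prompt` is a hypothetical helper, not any particular library's API, and the resulting prompt would be sent to whatever chat-completion endpoint you use.

```python
# Minimal sketch of few-shot in-context learning: instead of updating
# weights (online learning), new "knowledge" lives in the prompt.

def build_few_shot_prompt(examples, query):
    """Turn (input, output) pairs plus a new query into one prompt string."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # The trailing "Output:" cues the model to complete the pattern.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical task the model was never fine-tuned on:
examples = [
    ("bonjour", "hello"),
    ("merci", "thank you"),
]
prompt = build_few_shot_prompt(examples, "au revoir")
print(prompt)
```

A model that has never seen this mapping in training can still pick it up from the prompt, which is why in-context learning blunts the "no online learning" objection in practice.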