If that's the case, then, what's the wall?
The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.
The closest thing to a "hard wall" for LLMs is probably online learning? And even that isn't really a hard wall, because LLMs are good at in-context learning, which covers many of the same needs, and they can do things like set up fine-tuning runs on themselves via a CLI.
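To illustrate what I mean, here's a rough sketch of the kind of script an agent could write to kick off a fine-tuning run on its own collected examples. This uses the OpenAI Python SDK rather than the raw CLI, and the file name, base model, and data are just placeholders, not anything a real agent has done.

    # Rough sketch: an agent-authored script that launches a fine-tuning run
    # on its own collected examples. File name and base model are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # Upload a JSONL file of chat-format training examples the agent collected.
    training_file = client.files.create(
        file=open("self_collected_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Kick off the fine-tuning job against a base model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",
    )

    # Poll the job; once it finishes, the resulting model id can be swapped in.
    print(client.fine_tuning.jobs.retrieve(job.id).status)

Whether that counts as "online learning" is debatable, but it shows the loop isn't closed to the model in principle.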
Agree completely with your position.
I do think, though, that the lack of online learning is a bigger drawback than a lot of people believe, because it can often be hidden/obfuscated by training for the benchmarks.
This becomes very visible when you compare performance on more specialized tasks that LLMs were not specifically trained for, e.g. playing games like Pokemon or Factorio: general-purpose LLMs lag far behind humans there.
But it's only a matter of time until we solve this IMO.
The wall is training data. Yes, we can generate more and more post-training examples. No, we can never make enough, and there are diminishing returns to that process.
> If that's the case, then, what's the wall?
I didn’t say that is the case; I said it could be. Do you understand the difference?
And if it is the case, it doesn’t immediately follow that we would know right now what exactly the wall would be. Often you have to hit it first. There are quite a few possible candidates.
Hallucinations are, IMO, a hard wall. They have gotten slightly less frequent over the years, but you still get random results that may or may not be true - or rather, are anywhere between 0% and 100% true, depending on which part of the answer you look at.