Any sufficiently complex LLM is indistinguishable from AGI
If we take that statement as fact, then I don't believe any LLM is even close to being sufficiently complex.
However, I don't think the statement is even true. LLMs may not be on the right track to AGI at all, and without starting from scratch down an alternate path, it may never happen.
LLMs seem to me like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate, no amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
Some might be missing the reference: https://en.wikipedia.org/wiki/Clarke's_three_laws
> Any sufficiently complex LLM is indistinguishable from AGI
Isn't this a tautology? We've de facto defined AGI as a "sufficiently complex LLM."