I don't know if AGI will emerge from LLMs, but I'm always reminded of the Chinese room thought experiment. With billions thrown at the idea, we'll certainly get the ultimate answer as to whether true understanding can emerge from a large enough dictionary.
Please stop referring to the Chinese Room - it's just magical/deist thinking in disguise. It postulates that humans have a way of 'understanding' things that is impossible to replicate mechanically.
The fact that philosophy hasn't recognized this and rejected the argument on those grounds speaks volumes about the quality of arguments accepted in the field.
(That doesn't mean LLMs are or will be AGI; it's just that this argument is tautological and meaningless.)