You can imagine all you want, but my understanding is that there is no credible evidence that scaling LLMs will result in true AGI.
Obviously there's no "evidence". Why would you even think we need AGI? But I'm happy to hear your reasoning, if you were one of the few (only?) people who imagined that software capable of predicting the next word could do what it is now doing.