I don't see how we make the jump from current LLMs to AGI. Maybe it's my limited understanding of the research, but current LLMs don't seem to have any properties that indicate AGI. Would love to get thoughts from someone who understands it.
I agree, I think two things are missing from current AI:
1. A model of the world itself (or whatever domain is under discussion).
2. A way to quickly learn and update in response to feedback.
These are probably related to an extent.
I think they are missing the "I thought about that and have changed my mind" stuff. GPTs are pre-trained and don't change their weights afterwards, whereas humans do. That seems to be one big missing piece, but it could be built in the future.
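To make the frozen-weights point concrete, here's a toy sketch (Python/PyTorch) of what updating from a single piece of feedback would look like mechanically. The model, loss, and optimizer are stand-ins and have nothing to do with how GPTs are actually trained or served; it's just the frozen-inference vs. online-update distinction:

    # Toy sketch of online learning: one gradient step per piece of feedback.
    # Model, loss, and optimizer are stand-ins, not anything from a real GPT stack.
    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)          # stand-in for "the model"
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    def respond(x):
        # Inference as deployed today: weights stay frozen.
        with torch.no_grad():
            return model(x).argmax(dim=-1)

    def learn_from_feedback(x, correct_label):
        # The missing piece: update the weights immediately from one example.
        opt.zero_grad()
        loss = loss_fn(model(x), correct_label)
        loss.backward()
        opt.step()

    x = torch.randn(1, 16)
    print(respond(x))
    learn_from_feedback(x, torch.tensor([2]))
    print(respond(x))  # may now answer differently (one small step won't always flip it)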
what properties are you looking for?
Possible candidates we are missing: online learning, embodiment, self-direction, long-term memory and the associated processing (compression, etc.), the ability to quickly think in tensor space.
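On the long-term-memory point: the closest substitute people bolt on today is retrieval over an external store, rather than anything written into the weights. A bare-bones sketch; embed() here is a hashed bag-of-words stub standing in for a real embedding model:

    # Bare-bones external "long-term memory": store embeddings, recall by similarity.
    # embed() is a lexical stub; a real system would use a learned embedding model.
    import numpy as np

    DIM = 256

    def embed(text: str) -> np.ndarray:
        v = np.zeros(DIM)
        for word in text.lower().split():
            v[hash(word.strip(".,?!")) % DIM] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    memory: list[tuple[str, np.ndarray]] = []

    def remember(text: str) -> None:
        memory.append((text, embed(text)))

    def recall(query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(memory, key=lambda item: -float(item[1] @ q))
        return [text for text, _ in ranked[:k]]

    remember("the user prefers concise answers")
    remember("the project deadline is Friday")
    print(recall("when is the project deadline?", k=1))  # should recall the deadline note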