> Many people say we’re at AGI already and I’m wondering why everyone hasn’t died yet.
That’s like saying “many people say the Earth is flat and I’m wondering why anyone hasn’t fallen off the edge yet”.
“Many people say” doesn’t translate to reality. Maybe AGI will kill us all, maybe it won’t (I think we’re doing a fine job of that ourselves, no need for a machine’s help), but we’re definitely not at AGI, except in the minds of a few deluded people (or scammers).
We are already at AGI. I don’t know how you can argue that LLMs don’t meet the definition of artificial general intelligence, as opposed to narrow AI like chess engines, image classifiers, AlphaGo, or self-driving cars, which are trained on a single objective and cannot be applied to any other task at all.
People have just moved the goalposts. Imagine explaining Opus 4.6’s capabilities to someone even 10 years ago; it would definitely have been called AGI.
I highly doubt there will ever be a point where everyone agrees we’ve achieved ASI; there will always be a Gary Marcus type finding some edge case where it performs poorly.