>Maybe they really are all wrong
All? Quite a few of the best minds in the field, Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) AGI is very likely NOT just a couple of years away.
I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money into them as well.
It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.
It's obviously not taken to mean literally everybody.
Whatever LeCun says (and even he has said "AGI is possible in 5 to 10 years" as recently as 2 months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has been and still is pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to focus their money on tells you a whole lot.
Who says they will stick to autoregressive LLMs?
You have hit on something that really bothers me about recent AGI discourse. It's common to claim that "all" researchers agree that AGI is imminent, and yet when you dig into these claims, "all" turns out to be a subset of researchers that excludes everyone in academia, people like Yann, and others.
So the statement becomes a tautology: "all researchers who believe that AGI is imminent believe that AGI is imminent".
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.