Has anyone ever presented any solid theoretical reason we should expect language models to yield general intelligence?
So far as I have seen, people have run straight from "wow, these language models are more useful than we expected and there are probably lots more applications waiting for us" to "the AI problem is solved and the apocalypse is around the corner" with no explanation for how, in practical terms, that is actually supposed to happen.
It seems far more likely to me that the advances will pause, the gains will be consolidated, time will pass, and future breakthroughs will be required.
I don't think the reasoning models are LLMs. They have LLMs as a component, but they also have another layer that learned, via reinforcement learning, how to prompt the LLM (for lack of a better way to describe it). A toy sketch of that idea is below.
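To make that description concrete, here is a minimal toy sketch of "a layer that learns how to prompt an LLM": an epsilon-greedy bandit controller picks among prompt templates for a stubbed-out model and updates its value estimates from a pretend reward. Everything in it (llm, STRATEGIES, reward) is invented for illustration and is not how any real reasoning model is actually built or trained.

    # Toy illustration only: a tiny epsilon-greedy "controller" that learns
    # which prompting strategy to use with a stubbed-out LLM. All names here
    # (llm, STRATEGIES, reward) are hypothetical.
    import random

    STRATEGIES = [
        "Answer directly: {q}",
        "Think step by step, then answer: {q}",
        "List the relevant facts, then answer: {q}",
    ]

    def llm(prompt: str) -> str:
        """Stand-in for a real LLM call; just signals whether it 'reasoned'."""
        return "correct" if "step by step" in prompt else "guess"

    def reward(answer: str) -> float:
        """Pretend grader: rewards answers produced by the step-by-step prompt."""
        return 1.0 if answer == "correct" else 0.0

    # Epsilon-greedy bandit: estimate the value of each prompting strategy.
    values = [0.0] * len(STRATEGIES)
    counts = [0] * len(STRATEGIES)

    for _ in range(200):
        if random.random() < 0.1:                      # explore
            i = random.randrange(len(STRATEGIES))
        else:                                          # exploit best estimate
            i = max(range(len(STRATEGIES)), key=lambda j: values[j])
        r = reward(llm(STRATEGIES[i].format(q="What is 17 * 24?")))
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]       # running-average update

    print("learned preference:", STRATEGIES[values.index(max(values))])

This is a bandit over fixed templates, which is the simplest possible version of "learning to prompt"; real systems train the model's own generation process rather than selecting among canned prompts.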
Not with the current architecture.
The degree to which "AGI" appears to be a quasi-religious fixation cannot be overstated. At the extreme end you have the stranger corners of the Less Wrong crowd, the Zizians, and frankly some people here. Even when you step back from those extremes, though, there's a tremendous amount of intellectualizing of what appears to be primarily a set of hopes and fears.
Well, that, and it turns out that for a LOT of people "it talks like me" creates an inescapable impression that "it is thinking, and it's thinking like me". Issues such as the absolutely hysterical power and water demands, or the need for billions of dollars' worth of GPUs... these are ignored or minimized.
Then again, we already have a model for this fervor: cryptocurrency and "The Blockchain" created a similar kind of money-fueled hysteria. People here would have laughed in your face if you suggested that everything imaginable wouldn't soon simply run "on the chain". It was "obvious" that "fiat" was on the way out, that only crypto represented true freedom.
tl;dr The line between the hucksters and their victims really blurs when social media is involved, and hovering around all of this is a smaller group of True Believers who really think they're building God.
100% - there has not been any solid theoretical argument whatsoever (beyond some confusions about scaling that we can now see were incorrect).