This article leans way too hard on the computational complexity hammer and discounts huge progress in every field of AI outside the hot trend of transformers and LLMs. Nobody is saying the future of AI is autoregressive, and the article pretty much ignores the research that has been posted here around diffusion-based text generation and how it can be combined with autoregressive methods… it dismisses multi-modal models entirely. He also pretty much discounts everything that’s happened with AlphaFold, AlphaGo, reinforcement learning, etc.
The argument that computational complexity has something to do with this could have merit, but the article gives no real indication as to why. Does the brain solve NP-complete problems? Maybe, maybe not. I could see many arguments for why modern research will fail to create AGI, but hand-waving that “reality is NP-hard” is not enough.
The fact is: something fundamental has changed that enables a computer to understand natural language pretty effectively. That’s a discovery on the scale of the internet or Google Search and shouldn’t be discounted… and usage proves it: in two years, a platform has reached billions of users. On top of that, huge fields of new research are making leaps and bounds with novel methods applying AI to chemistry, computational geometry, biology, etc.
It’s a paradigm shift.
> The argument that computational complexity has something to do with this could have merit but the article certainly doesn’t give indication as to why.
OP's point is that predicting the next token can be correct or not, but the output always looks plausible, because plausibility is what the model actually computes. That makes it dangerous, and it can't be fixed, because that is how the approach works in essence.
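To make that concrete, here's a minimal toy sketch of greedy next-token decoding (the vocabulary and the fake model are my own invention for illustration, not anything from the article): the decoder always returns the most probable token, and "probable" is the only criterion it has, so the output reads as fluent and confident whether or not it is true.

```python
import numpy as np

VOCAB = ["Paris", "London", "Berlin", "pizza"]

def fake_logits(context: str) -> np.ndarray:
    # Stand-in for a trained model's forward pass. A real model produces
    # logits the same way: one plausibility score per vocabulary token.
    rng = np.random.default_rng(abs(hash(context)) % (2**32))
    return rng.normal(size=len(VOCAB))

def next_token(context: str) -> str:
    logits = fake_logits(context)
    # Softmax over logits, then greedy pick: there is no truth check
    # anywhere in this loop, only relative plausibility.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return VOCAB[int(np.argmax(probs))]

print(next_token("The capital of France is"))
# Whatever comes out is fluent and confident -- correct or not.
```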
Technologically, I believe you're right. On the other hand, the previous AI winters happened despite novel technologies, some of which proved extremely useful and actually changed the world of software. They happened because of overhype, then investors moving on to the next opportunity.
Here, the investors are investing in LLMs, not in AlphaFold, AlphaGo, neurosymbolic approaches, focus learning, etc. If (when) LLMs prove insufficient for the insane level of hype, and if (when) experience shows that there is only so much money you can make with LLMs, it's possible that the money will move on to other types of AI, but there's a chance it will go to something entirely different, perhaps quantum computing, leaving AI in winter.
> that enables a computer to pretty effectively understand natural language
I'd argue that it pretty effectively mimics natural language. I don't think it really understands anything; it's just the best Mad Libs generator that the world has ever seen.
For many tasks, this is accurate 99+% of the time, and the failure cases may not matter. Most humans don't perform any better, and arguably regurgitate words without understanding as well.
But if the failure cases matter, then the lack of actual understanding shows: the language the model generates never gets "marked to market" against reality, because there is no mental world model to check it against. That isn't going to be usable when the LLM's mistakes have real-world consequences, and these models can wind up making very basic mistakes that humans wouldn't make, because we innately understand how the world works and aren't just stringing together words that sound good.
I don't think anybody expects AI development to stop. A winter is defined by a relative drying-up of investment and, importantly, it's almost certain that any winter will eventually be followed by another summer.
The pace of investment in the last 2 years has been so insane that even Altman has claimed that it's a bubble.
> I could see many arguments about why modern research will fail to create AGI
Why is AGI even necessary? If the loop between teaching the AI something and it being able to repeat similar enough tasks becomes short enough, days or hours instead of months, who cares whether some ill-defined bar of AGI is met?
> something fundamental has changed that enables a computer to pretty effectively understand natural language.
You understand how the tech works, right? It's statistics and tokens (see the toy sketch below). The computer understands nothing. Creating "understanding" would be a breakthrough.
Edit: I wasn't trying to be a jerk. I sincerely wasn't. I don't "understand" how LLMs "understand" anything. I'd be super pumped to learn that bit. I don't have an agenda.
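For what it's worth, "statistics and tokens" can be shown at toy scale. Here's a hypothetical bigram sketch (a stand-in I made up, not how any production LLM is built): it chains together locally plausible words purely from co-occurrence counts, with no representation of what any word means, which is roughly the Mad Libs point made upthread.

```python
import random
from collections import defaultdict

corpus = ("the model predicts the next token and the next token "
          "follows the model and the token predicts the model").split()

# Count which word follows which: this is the entire "training" step.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(10):
    # Pick a statistically plausible successor; meaning never enters into it.
    word = random.choice(bigrams[word])
    out.append(word)

print(" ".join(out))  # grammatical-ish, confident, and meaning-free
```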
GOFAI (good old-fashioned AI) was also a paradigm shift, regardless of that winter. For example, banks started automating assessments of creditworthiness.
What we didn't get was what had been expected: things like expert systems that were actual experts, so-called 'general intelligence', and war waged through 'blackboard systems'.
We've had voice-controlled electronics for a long time. Machine vision applications, on the other hand, have improved massively in certain niches, and have also enabled new forms of intense tyranny and surveillance, where errors that erode civil liberties and human rights are considered a feature rather than a bug, yet are still broadly accepted because 'the computer says so'.
While you could likely defend "leaps and bounds with novel methods utilizing AI for chemistry, computational geometry, biology etc." by downplaying the 'leaps and bounds' part, or by clarifying that it is mainly an expectation, I think most people are going to, for the foreseeable future, keep seeing "AI" as more or less synonymous with synthetic, infantile chatbot personalities that substitute for human contact.
I agree with everything you wrote. The technology is unbelievable, and six years ago, maybe even 3.1 years ago, it would have been considered magic.
A steel-man argument for why winter might be coming is all the dumb stuff companies are pushing AI for. On one hand (and I believe this), we argue it's the most consequential technology in generations. On the other, everybody is using it for nonsense like helping you write an email that makes you sound like an empty suit, or providing a summary you didn't ask for.
There's still a ton of product work to cross whatever that valley's called between concept and product, and if that doesn't happen, the money is going to start disappearing. The valuation isn't justified by the dumb stuff we do with it; it needs PMF.