The LLM architectures we have now have already reached their full potential, so going further would require something completely different; it isn't a matter of refining the existing tech. The internet of 1997, by contrast, is virtually technologically identical to what we have today; the real change there has been sociological, not technological.
To make a car analogy: the current LLMs are not the early cars, but the most refined horse-drawn carriages. No matter how much money is poured into them, you won't find the future there.
Dial-up modems reached their full 56 kbps potential in 1997, and going further required something completely different. It happened naturally to satisfy demand, and was done by many of the same companies and people; the change was technological, not sociological.
I think we're probably still far from the full potential of LLMs, but I don't see any obstacles to developing and switching to something better.
> The LLM architectures we have now have reached their full potential already.
How do we know that?
You can see some potential modifications. Some models are already multimodal. You'd probably want something that changes the weights as time goes on so they can keep learning. It might be more like steam engines needing to be converted to petrol engines.
The current generation of LLMs has convinced me that we already have the compute and the data needed for AGI; we just likely need a new architecture. But I really think such an architecture could be right around the corner. It appears to me that the building blocks are there; it would just take someone with the right luck and genius to make it happen.