This is how I see LLMs as well.
The main problem with the article is that it meanders around ill-conceived concepts like thinking, smartness, intelligence, understanding... even AI. What they mean to the author is not what they mean to me, and different again from what they mean to other readers. There are comments from different people throughout the article, each with their own take on those concepts. No wonder it all seems so confusing.
It will be interesting when the dust settles and a clear picture of LLMs emerges that everyone can agree on. Maybe it will even help us define some of those ill-defined concepts.
I think the consensus in the future will be that LLMs were, after all, stochastic parrots.
The difference from what we think today is that by then we'll have a new definition of stochastic parrots: a recognition that stochastic parrots can be very convincing and extremely useful, and that they exhibit intelligence-like capabilities that seemed unattainable by any technology before them, but that LLMs were not a way forward for attaining AGI. They will plateau as far as AGI metrics go, and those metrics keep advancing to stay ahead of LLMs, like Achilles and the Tortoise. But LLMs will keep improving as the tooling around them becomes more sophisticated and integrated, and as their architecture evolves.