All this assumes that LLMs are the sole mechanism for AI and will remain so forever: no novel architectures (neither hardware nor software), no progress in AI theory, nothing better than LLMs — simply brute-force LLM computation ad infinitum.
Perhaps the assumptions are true. The mere presence of LLMs seems to have lowered the IQ of the Internet drastically, sopping up investment capital and resources that might otherwise be put to better use.
That's incorrect. TPUs can support many ML workloads; they're not exclusive to LLMs.