I don't pay anyone for an AI image generator, because I can run an adequate one locally on my own computer.
My computer doesn't have enough RAM to run the state of the art in free LLMs, but machines that can are on the market and affordable for any business and plenty of hobbyists.
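Back-of-envelope, the memory math looks like this (weights only; real usage also needs KV cache and activation overhead, and the model sizes below are just illustrative):

    # Rough memory footprint of model weights alone.
    # Real inference also needs KV cache and activations,
    # so treat these numbers as lower bounds.
    def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    print(weight_memory_gb(405, 4))  # ~202 GB: server or maxed-out workstation territory
    print(weight_memory_gb(8, 4))    # ~4 GB: fits an ordinary consumer GPU or laptop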
Given this, the only way for model providers to stay ahead is to spend a lot on training ever-better models to beat the free ones being given away. And by "spend a lot" I mean they are making a loss.
This means that the similarity with the dot com bubble can be expressed with the phrase "losing money on every sale and making up for it in volume".
Hardware efficiency is also still improving; just as I can already run that image model on my phone, an LLM equivalent to today's SOTA should run on a high-end smartphone by 2030.
Not much room to charge people for what runs on-device.
So they are in a Red Queen's race, running as hard as they can just to stay where they are. And where they are today is losing money.
You don't need system RAM to run LLMs. You need VRAM on your graphics card.
The best performance per dollar and per watt for running LLMs locally is currently Apple gear.
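If you want to see what your own hardware actually has to work with, here's a quick check (PyTorch used only as an example; the MPS branch is how Apple Silicon shows up):

    # Check what memory your GPU has available for model weights.
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"{props.name}: {props.total_memory / 1e9:.1f} GB VRAM")
    elif torch.backends.mps.is_available():
        # Apple Silicon GPUs use unified memory shared with the CPU,
        # which is why Macs can load models far larger than typical VRAM.
        print("Apple Silicon GPU: unified memory, shared with system RAM")
    else:
        print("No GPU backend found; inference falls back to CPU + system RAM")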
I thought the same as you, but I'm still able to run better and better models on a 3-4 year old Mac.
And at the rate things are improving, even with the big models, people optimize their prompts to use tokens efficiently, and once they do... guess what can run locally.
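For the curious, this is roughly what "running locally on a Mac" looks like in practice; a minimal sketch with llama-cpp-python, assuming you've already downloaded some GGUF-quantized model (the path and parameters are placeholders, not recommendations):

    # Minimal local inference sketch using llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./some-model-q4_k_m.gguf",  # placeholder: any GGUF quant you have
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to Metal (or CUDA) if available
    )
    out = llm("Q: Name three planets. A:", max_tokens=32)
    print(out["choices"][0]["text"])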
The dot com bubble didn't have comparable online sales. There were barely any users online lol, and very few ecommerce websites, let alone ones with credit card processing.
Internet users by year: https://www.visualcapitalist.com/visualized-the-growth-of-gl...
The ecommerce stats by year will interest you.