You don't need system RAM to run LLMs; it's your graphics card's memory that does the work.
The best performance per dollar and per watt of electricity for running LLMs locally is currently Apple gear.
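Rough math on why memory size is the constraint (a back-of-the-envelope sketch; the model sizes, quantization levels, and overhead factor are illustrative assumptions, not benchmarks):

```python
# Back-of-the-envelope: does a quantized model fit in (unified) memory?
# All numbers here are illustrative assumptions.

def model_footprint_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate memory needed: weights * bytes/weight, plus ~20% for KV cache and runtime."""
    weight_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weight_gb * overhead

for name, params, bits in [("7B @ 4-bit", 7, 4), ("13B @ 4-bit", 13, 4), ("70B @ 4-bit", 70, 4)]:
    print(f"{name}: ~{model_footprint_gb(params, bits):.1f} GB")

# On a Mac the unified memory pool (64 GB, 128 GB, ...) is all addressable
# by the GPU, which is why mid-size models fit where a 24 GB discrete card runs out.
```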
I thought the same as you, but I'm still able to run better and better models on a 3–4-year-old Mac.
At the rate it's improving, even with the big models, people optimize their prompts so they use tokens efficiently, and when they do... guess what can run locally.
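As a concrete example of the "fewer tokens" point, you can measure how much a trimmed prompt saves before it ever hits a model (a sketch using the tiktoken tokenizer; the prompts and the savings are made up for illustration):

```python
# Compare token counts of a verbose vs. trimmed prompt (illustrative only).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

verbose = (
    "Please carefully read the following text and then, taking into account "
    "all relevant details, provide a thorough and comprehensive summary of it."
)
trimmed = "Summarize the following text."

for label, prompt in [("verbose", verbose), ("trimmed", trimmed)]:
    print(f"{label}: {len(enc.encode(prompt))} tokens")

# Shorter prompts mean a smaller context and KV cache,
# which is exactly what makes a model easier to run locally.
```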
The dot-com bubble didn't have comparable online sales. There were barely any users online lol. Very few ecommerce websites.
Let alone ones with credit card processing.
Internet users by year: https://www.visualcapitalist.com/visualized-the-growth-of-gl...
The ecommerce stats by year will interest you.