Local LLMs are even more amazing in concept: all of the world's knowledge, plus something to guide you through learning it, without needing anything but electricity (and a hilariously expensive inference rig) to run it.
I would be surprised if, a decade from now, we don't have local models that are an order of magnitude better than today's cloud offerings while being smaller and faster, along with affordable ASICs to run them. That would be the first real challenger to the internet's position as "the" place for everything. The more the web gets enshittified, commercialized, and ad-ridden, the more people will flock to that sort of option.