The thing is, SOTA has a plateau. All LLMs work on the same principle: training data goes in, then humans reinforce the outputs. There's only so much input (all recorded human knowledge) and only so many human tweaks, and those can only squeeze so much extra signal-to-noise out of the output. The machine can't read your mind, and most questions don't have one truthful answer, so there will always be a ceiling on how accurate or correct or whatever any response can get. At some point you just can't make a better response. The agent harness, prompts, etc. become the only remaining lever, and that's gonna be open source.
Add to that the algorithmic improvements that are making inference faster, with more context and higher quality. TurboQuant is just one example; more methods are coming out all the time. So inference keeps getting more efficient.
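To see why quantization-style tricks buy so much, here's a toy sketch (this is plain symmetric int8 weight quantization, not TurboQuant's actual algorithm): store each weight in 1 byte instead of 4, at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: one float scale, int8 weights."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Reconstruct approximate float weights at inference time.
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

print(q.nbytes / w.nbytes)   # 0.25 -> 4x less memory (and memory bandwidth)
# Rounding error is bounded by half a quantization step:
print(float(np.abs(w - w_hat).max()) <= s / 2 + 1e-6)   # True
```

Since LLM inference on consumer hardware is mostly memory-bandwidth-bound, shrinking the weights 4x directly translates into more tokens per second, which is exactly the efficiency trend the paragraph above is pointing at.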
At the same time, hardware can kind of keep getting better indefinitely. Even if you can't make it smaller, you can make it more energy efficient, improve multitasking, add more GPU cores/RAM or iGPUs, pack in more chips, improve cooling, use new materials... the sky's the limit.
Add all three together and at some point you get Opus 4.7 on a phone at 40 t/s. At that point there's no way I'm paying for inference on a server. You can do RAG on-device, and image/video/voice is handled by multimodal models. I want my agent chats replicated, but that's Google Drive. I want the agent to search the web, but that's Google Search. So eventually we're back to just doing what we do today (pre-AI), only with more automation.
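And on-device RAG really is conceptually simple. A toy sketch, using a made-up hashed bag-of-words embedding as a stand-in for a real local embedding model (everything here is illustrative; a phone would run a small on-device embedder instead):

```python
import hashlib
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy hashed bag-of-words embedding, normalized to unit length.
    A real setup would use a small local embedding model here."""
    v = np.zeros(DIM)
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)  # deterministic hash
        v[h % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "battery life tips for phones",
    "how attention works in transformers",
    "grandma's lasagna recipe",
]
# The entire "vector database" is just a matrix sitting in RAM on the device.
index = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1):
    scores = index @ embed(query)  # cosine similarity (vectors are unit length)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("attention in transformers"))
# -> ['how attention works in transformers']
```

The retrieved snippets just get pasted into the local model's prompt. Nothing in that loop needs a server, which is the point: once the model itself fits on the phone, the rest of the RAG plumbing is trivial to keep local too.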
The really advanced shit will come in 10 years, when we finally crack real memory and learning. That will absolutely be locked up in the cloud. But that's not an LLM; it's something else entirely. (Slight caveat: WW3 would delay progress by 10-20 years.)