How about we use all that AI and start doing some serious optimizations to existing software? Reduce memory requirements by half, or even more.
Easing memory contention would let LLMs themselves use more of that memory.
We are.
I'm writing a metric ton of Rust code with Claude Code.
LLMs are intrinsically designed for token production, which is typically inversely related to optimization and efficiency.
Plenty of people do.
AI is one of the few major general-purpose technological breakthroughs, comparable to the Internet and electricity. It's potentially applicable to everything, which is why right now everyone is trying to apply it to everything: developing new optimization algorithms, improving optimizing compilers, optimizing applications, optimizing systems, optimizing hardware, ...
Big AI vendors are at the forefront of this, because they're the ones actually footing the bill for the AI revolution, so any efficiency improvement saves them money.