
sodafountan · yesterday at 9:12 PM

Hardware bugs can be documented for an LLM to learn from; it's really just a chicken-and-egg problem. There are plenty of open-source, working operating systems for LLMs to learn from as well.

And yes, I understand that code reuse and distribution are valuable, and that's a fair point. Having an LLM generate everything on the fly is certainly energy-intensive, but that hasn't stopped the world from building massive data centers to support it.

I guess the theory behind my past few posts is something like rolling updates. Using the text editor example: you'd prompt the AI agent in the hypothetical OS to open a document, and it would generate a word processor on the fly, referencing the dozens of open-source word-processor repos and pushing its own contributions back out into the world for other LLMs to reference - computationally expensive, yes. It would then learn from how you use the program, and the next time you prompted the OS for a word-processor-like feature (I'm imagining an MS-DOS-style prompt), it would iterate on that existing program - less computationally expensive, because ideally the bulk of the work has already been learned - perhaps adding new features or key bindings as it sees fit.

I understand that hard-disk space is cheap and you'd still want some space for personal files, but the OS could theoretically load the program directly into RAM once it's compiled from AI-generated source code, removing the need to save the programs themselves to disk.
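
To make that loop concrete, here's a minimal sketch (Python, purely hypothetical - generate_source() stands in for whatever LLM backend the OS would call, and the "word processor" it returns is a stub). The point is only that generated source can be compiled and run entirely in memory, with the compiled module cached for the next request instead of ever being written to disk:

    import types

    _ram_cache = {}  # program name -> compiled module, lives only in RAM

    def generate_source(prompt: str, previous: str | None = None) -> str:
        # Placeholder for the LLM call; a real AIOS would pass along the prompt,
        # the user's learned preferences, and any prior version to iterate on.
        return (
            "def run():\n"
            "    print('hypothetical word processor, generated for:', %r)\n" % prompt
        )

    def launch(name: str, prompt: str):
        previous = getattr(_ram_cache.get(name), "__source__", None)
        source = generate_source(prompt, previous)         # expensive the first time
        module = types.ModuleType(name)
        module.__source__ = source                         # kept around for later iteration
        exec(compile(source, f"<aios:{name}>", "exec"), module.__dict__)
        _ram_cache[name] = module                          # cached in RAM, never saved to disk
        module.run()

    launch("word_processor", "open a document")
    launch("word_processor", "add vim-style keybindings")  # iterates on the cached version

Nothing generated ever has to touch the disk; the compiled module lives in memory and gets handed back to the model the next time you ask for something similar.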

Since LLMs are globally distributed, they're learning from all human interactions and could actively develop cutting-edge word processors tailored specifically to each end user's needs. More of a Vim-style user? The LLM can pick up on that. Prefer something more like MS Word? It's learning that too. The AIOS slowly becomes geared directly to you, the end user.
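
A rough sketch of what that feedback loop could look like - the signal names and the rule are made up, but the idea is that usage observations accumulate into a profile that gets prepended to every future generation request:

    from collections import Counter

    usage = Counter()

    def observe(action: str):
        # e.g. "modal_edit" when the user keeps reaching for vim-style commands,
        # "toolbar_click" when they prefer a Word-like GUI
        usage[action] += 1

    def preference_prompt() -> str:
        if not usage:
            return "no usage history yet"
        dominant, _ = usage.most_common(1)[0]
        return f"user leans toward {dominant}; bias the generated UI accordingly"

    observe("modal_edit")
    observe("modal_edit")
    observe("toolbar_click")
    print(preference_prompt())  # -> user leans toward modal_edit; bias the generated UI accordingly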

That really has nothing to do with intelligence; you're just teaching a computer how to compute, which is what AI is all about.

Just some ideas on what the future might hold.