>It should be able to make an OS. It should be able to write drivers.
How is it going to do that without testing (and potentially bricking) hardware in real life?
>It should be able to transpile compiled binaries (which are just programs in a different language) across architectures
I don't know why you would use an LLM to do that. Couldn't you just distribute the binaries in some intermediate format, or decompile them to a comprehensible source format first?
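That's basically how portable intermediate formats already work today. A minimal sketch, assuming a WASI-capable clang (e.g. from wasi-sdk) and the wasmtime runtime are installed, neither of which the original comment names:

```c
/* hello.c: built once to a portable intermediate format (WebAssembly/WASI)
 * instead of a per-architecture native binary.
 *
 * Assumed toolchain (not from the original discussion):
 *   clang --target=wasm32-wasi -O2 -o hello.wasm hello.c
 *   wasmtime hello.wasm        # same .wasm runs on x86_64, aarch64, etc.
 */
#include <stdio.h>

int main(void) {
    puts("one intermediate-format binary, any host architecture");
    return 0;
}
```

No LLM in the loop; the runtime does the architecture-specific translation deterministically.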
I agree that it's a challenging problem.
My line of thinking is that AI is essentially really good at breadth-based problems, the kind that draw on wide knowledge.
An operating system is a specific, well-known set of problems; generally, there's no novel technology involved. An OS is a massive amount of work, but it's technical, tedious work.
If there's a large amount of source code, a great deal of discussion of that source code, and lots of other working examples, and you're really just doing a derivative n + 1 design or an adaptation of an existing product, that sounds like something an LLM can do.
Obviously I'm not talking about vibe coding an OS. But could an LLM do 99% of that work and vastly reduce the effort needed to get an OS working with your hardware, with the big assumption that you have access to the specs or some other way of getting that information?