This is one of the real canaries I watch on "real AI" for programming.
It should be able to make an OS. It should be able to write drivers. It should be able to port code to new platforms. It should be able to transpile compiled binaries (which are just programs in another language) across architectures.
Sure seems we are very far from that, but really these are breadth-based knowledge tasks with extensive examples / training sources. This SHOULD be something LLMs are good at, not new/novel/deep/difficult problems. What I described is labor-intensive and complicated, but not "difficult".
And would any corporate AI allow that?
We should be pretty paranoid about centralized control attempts, especially in tech. This is a ... fragile ... time.
>It should be able to make an OS. It should be able to write drivers.
How is it going to do that without testing (and potentially bricking) hardware in real life?
>It should be able to transpile compiled binaries (which are just languages of a different language) across architectures
I don't know why you would use an LLM to do that. Couldn't you just distribute the binaries in some intermediate format, or decompile them to a comprehensible source format first?
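For the intermediate-format route, a rough sketch of the idea (assuming the LLVM tools clang and llc are installed; the file names are made up): ship architecture-neutral LLVM bitcode and only lower it to native code for the target machine.

    import subprocess

    # Build architecture-neutral(ish) LLVM bitcode once, on the developer's machine.
    subprocess.run(
        ["clang", "-O2", "-emit-llvm", "-c", "foo.c", "-o", "foo.bc"],
        check=True,
    )

    # Later, lower the same bitcode to an object file for a different target.
    subprocess.run(
        ["llc", "-mtriple=aarch64-linux-gnu", "-filetype=obj", "foo.bc", "-o", "foo.o"],
        check=True,
    )

In practice bitcode still bakes in ABI and word-size assumptions, so it isn't a perfect cross-architecture format, but it shows why an LLM isn't the obvious tool for this.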
AI kicks ass at a lot of "routine reverse engineering" tasks already.
You can feed it assembly listings, or bytecode that the decompiler couldn't handle, and get back solid results.
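A minimal sketch of that workflow, assuming objdump is available and leaving the actual LLM call as a placeholder (the binary name and prompt wording are made up):

    import subprocess

    def disassemble(path: str) -> str:
        """Return an Intel-syntax disassembly of the binary at `path` via objdump."""
        result = subprocess.run(
            ["objdump", "-d", "-M", "intel", path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def build_prompt(listing: str) -> str:
        """Wrap the listing in a reverse-engineering request for an LLM."""
        return (
            "Below is a disassembly listing. Recover readable, commented C for "
            "each function and flag anything a decompiler likely got wrong.\n\n"
            + listing
        )

    if __name__ == "__main__":
        listing = disassemble("./mystery_binary")  # hypothetical input binary
        prompt = build_prompt(listing[:20000])     # truncate to fit a context window
        # Send `prompt` to whatever LLM client you use; that call is omitted here.
        print(prompt[:500])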
And corporate AIs don't really have a fuck to give, at least not yet. You can sic Claude on obvious decompiler outputs, or a repo of questionable sources with a "VERY BIG CORPO - PROPRIETARY AND CONFIDENTIAL" in every single file, and it'll sift through it - no complaints, no questions asked. And if that data somehow circles back into the training eventually, then all the funnier.