Hacker News

marcus_holmes | today at 2:38 AM | 1 reply

I think the author answers their own question at the end.

The first 3/4 of the article is "we must be responsible for every line of code in the application, so having the LLM write it is not helping".

The last 1/4 is "we had an urgent problem so we got the LLM to look at the code base and find the solution".

The situation we're moving to is that the LLM owns the code. We don't look at the code. We tell the LLM what is needed, and it writes the code. If there's a bug, we tell the LLM what the bug is, and the LLM fixes it. We're not responsible for every line of code in the application.

It's exactly the same as with a compiler. We don't look at the machine code that the compiler produces. We tell the compiler what we want, using a higher-level abstraction, and the compiler turns that into machine code. We trust compilers to do this error-free, because 50+ years of practice has proven to us that they do this error-free.

We're maybe ~1 year into coding agents. It's not surprising that we don't trust LLMs yet. But we will.

And it's going to be fascinating how this changes computer science. We have interpreted languages because compilers got so good. Presumably we'll get non-human-readable languages that only LLMs can use, and methods of specifying systems to an LLM that are better than plain English.


Replies

johnbender | today at 2:47 AM

Compilers don't do this error-free, of course. BUT if we want them to, we can state what it means for a compiler to be correct very directly, _one time_, and have it hold for all programs (see the definition of simulation in the CompCert compiler). This is a major and meaningful difference from AI, which would need such a specification for each individual application you ask it to build, because there is no general specification for a correct translation from English to code.
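To make the "one time, for all programs" point concrete: a CompCert-style correctness theorem can be sketched as a single behavior-refinement statement quantified over every source program. This is a simplification of the actual Coq theorem (which uses backward simulation and handles undefined behavior more carefully); the names here are illustrative, not CompCert's real identifiers.

```latex
% One universally quantified statement covers all programs:
% whenever compilation succeeds, every observable behavior of
% the compiled code C is an allowed behavior of the source S.
\forall S\, C.\;\; \mathrm{compile}(S) = \mathrm{OK}(C)
  \;\Longrightarrow\; \mathrm{Behaviors}(C) \subseteq \mathrm{Behaviors}(S)
```

Nothing analogous exists for "English prompt to code," because the left-hand side (what the English means) has no formal semantics to refine against.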
