Hacker News

CamperBob2 · yesterday at 10:09 PM · 2 replies

Not really. Got some code you don't understand? Feed it to a model and ask it to add comments.
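
As an illustration of the kind of annotation this produces, here is a classic bit-twiddling routine with the sort of comments one might get back. Both the function and the comments are a hypothetical example, not actual model output:

    /* Hypothetical example: an uncommented SWAR popcount, with comments of
       the kind a model might add when asked to explain it. */
    unsigned popcount32(unsigned x) {
        x = x - ((x >> 1) & 0x55555555u);                 /* 2-bit pair sums */
        x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* 4-bit nibble sums */
        x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 /* 8-bit byte sums */
        return (x * 0x01010101u) >> 24;                   /* add bytes; take top byte */
    }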

Ultimately humans will never need to look at most AI-generated code, any more than we have to look at the machine language emitted by a C compiler. We're a long way from that state of affairs -- as anyone who struggled with code-generation bugs in the first few generations of compilers will agree -- but we'll get there.


Replies

inspectorwadget · yesterday at 10:43 PM

> any more than we have to look at the machine language emitted by a C compiler.

Some developers do actually look at the output of C compilers, and some even spend a lot of time criticizing the output of a specific compiler (even writing long blog posts about it). The C language has an ISO specification, and if a compiler does not conform to that specification, it is considered a bug in the compiler.

You can even go to godbolt.org / compilerexplorer.org and see the output generated for different targets by different compilers for different languages. It is a popular tool, including for language development.
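
A minimal sketch of that workflow, assuming gcc is installed; the same source can be pasted into godbolt.org to compare compilers and targets side by side:

    /* square.c -- a trivial function whose emitted assembly is easy to read */
    int square(int x) {
        return x * x;
    }

    /* gcc -O2 -S square.c                     emits square.s (assembly) for inspection
       gcc -O2 -c square.c && objdump -d square.o   disassembles the machine code */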

I do not know what prompt engineering will look like in the future, but short of AGI, I expect verification of the code to remain necessary in at least a sizable proportion of cases. That does not exclude usefulness, of course: there are cases where verification is not needed, where a relevant expert can verify the code efficiently and robustly, or where some smart verification method applies, such as a few primitive tests being sufficient (a sketch of that last case follows).
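
A minimal sketch of the primitive-tests case, assuming a hypothetical AI-generated function clamp; both the function and the tests are illustrative:

    #include <assert.h>

    /* Hypothetical AI-generated function under test: clamp v into [lo, hi]. */
    static int clamp(int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void) {
        /* A few primitive tests: interior, out-of-range, and boundary values. */
        assert(clamp(5, 0, 10) == 5);    /* interior value passes through */
        assert(clamp(-3, 0, 10) == 0);   /* below range clamps to lo */
        assert(clamp(42, 0, 10) == 10);  /* above range clamps to hi */
        assert(clamp(0, 0, 10) == 0);    /* lower boundary */
        assert(clamp(10, 0, 10) == 10);  /* upper boundary */
        return 0;
    }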

But I have no experience with LLMs or prompt engineering.

I do, however, sympathize with not wanting to deal with paying programmers. Most are likely decent, but some may be costly, less than honest, or less than competent. Still, while I think it is fine to explore LLMs and invest a lot into seeing what might come of them, I would not personally bet everything on them, in either the short term or the long term.

May I ask what your professional background and experience is?

rvz · yesterday at 10:25 PM

> Not really. Got some code you don't understand? Feed it to a model and ask it to add comments.

Absolutely not.

An experienced individual in their field can tell when the AI has made a mistake in the comments or code; the typical untrained eye cannot.

So no, actually read the code and understand what it does.
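
A hypothetical illustration of the kind of mistake an experienced reader would catch; the function and comment are invented for this example:

    /* Claims to return the sum of a[0] .. a[n] inclusive -- but the loop
       actually sums a[0] .. a[n-1]. Plausible comment, mismatched code;
       reading both is what catches it. */
    int sum_inclusive(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)   /* stops one short of the comment's claim */
            s += a[i];
        return s;
    }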

> Ultimately humans will never need to look at most AI-generated code, any more than we have to look at the machine language emitted by a C compiler.

So for safety-critical systems, one should not look at the code or check whether it was AI-generated?
