Hacker News

jnovek · today at 2:21 PM · 3 replies

> Able to review the code output of coding agents

That probably won’t be necessary in a few years.


Replies

circlefavshape · today at 2:29 PM

It's necessary for devs right now, no matter how good the models are — and it's those devs' code the models are trained on.

rafterydj · today at 2:30 PM

I've seen this line of thought put out there many times, and it makes me wonder: why do people do anything at all? What's the point? If no one at all is even reviewing the output of coding agents, genuinely, what are we doing as a society?

I fail to see how we transition society into a positive future without some means of verifying systemic integrity. There's a reason Upton Sinclair became famous: wayward incentives behind closed doors generally produce subpar standards, which produce subpar results. If the FDA didn't exist, or didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything", I find that highly convenient for the AI industry.

falkensmaize · today at 2:43 PM

In a few years they'll still be turning out the same problematic code they do now, because they aren't intelligent and won't be unless there's a fundamental paradigm shift in how LLMs work.

I use LLMs following best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions — even with full context, complete access to the codebase, and me asking very specific questions that should point it in the right direction.
