Hacker News

herrkanin · today at 2:38 PM

Your argument is just as applicable to human code reviewers. Obviously, having others review the code will catch issues you would never have thought of. This includes agents as well.


Replies

kneel25 · today at 3:09 PM

They’re not equal. Humans are capable of actually understanding and looking ahead at consequences of decisions made, whereas an LLM can’t. One is a review, one is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning)

Fervicus · today at 2:57 PM

With humans though, I wouldn't have to review 20k lines of code at once.

DetroitThrow · today at 2:46 PM

>Your argument is just as applicable to human code reviewers.

The tests many of us use to gauge how capable a model or harness is are usually based on whether it can spot logical errors readily visible to humans.

Hence: https://news.ycombinator.com/item?id=47031580