Hacker News

tibbar · yesterday at 6:13 PM

The problem with LLM code review is that it's good at checking local consistency and catching minor bugs, but it generally can't tell you whether you're solving the wrong problem, or whether your approach is a bad one for non-technical reasons.

This is an enormous drawback, and it makes LLM code review, at least for now, more akin to a linter.


Replies

menaerus · yesterday at 6:23 PM

If a model can reason well enough to make changes to a large-scale repository, doesn't that imply it can also reason about a change somebody else made? I agree and disagree with you at the same time, which is why I said "most of the engineers." I believe we are heading toward models that can completely autonomously write and review their own changes.
